diff --git a/.github/workflows/pytorch.yml b/.github/workflows/pytorch.yml index c0916d62689..7fa7e6dbf13 100644 --- a/.github/workflows/pytorch.yml +++ b/.github/workflows/pytorch.yml @@ -33,7 +33,7 @@ jobs: - uses: bytedeco/javacpp-presets/.github/actions/deploy-ubuntu@actions timeout-minutes: 350 macosx-arm64: - runs-on: macos-12 + runs-on: macos-14 steps: - uses: bytedeco/javacpp-presets/.github/actions/deploy-macosx@actions macosx-x86_64: diff --git a/CHANGELOG.md b/CHANGELOG.md index bf7e65d7f3d..03da7f5cec2 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,4 +1,5 @@ + * Enable distributed package using Gloo in presets for PyTorch ([pull #1510](https://github.com/bytedeco/javacpp-presets/pull/1510)) * Add presets for the CUPTI module of CUDA ([pull #1531](https://github.com/bytedeco/javacpp-presets/pull/1531)) * Add new `ClangMemoryMgmtExample` in samples for LLVM ([pull #1522](https://github.com/bytedeco/javacpp-presets/pull/1522)) * Enable `opencv_python3` module for `macosx-arm64` as well ([pull #1517](https://github.com/bytedeco/javacpp-presets/pull/1517)) @@ -8,7 +9,7 @@ * Build FFmpeg with zimg to enable zscale filter ([pull #1481](https://github.com/bytedeco/javacpp-presets/pull/1481)) * Enable PulseAudio support for FFmpeg on Linux ([pull #1472](https://github.com/bytedeco/javacpp-presets/pull/1472)) * Virtualize `btCollisionWorld`, `btOverlapFilterCallback`, `btOverlapCallback` from Bullet Physics SDK ([pull #1475](https://github.com/bytedeco/javacpp-presets/pull/1475)) - * Upgrade presets for OpenCV 4.10.0, FFmpeg 7.0.2, Spinnaker 4.0.0.116 ([pull #1524](https://github.com/bytedeco/javacpp-presets/pull/1524)), DNNL 3.5.3, OpenBLAS 0.3.28, CMINPACK 1.3.9, GSL 2.8, CPython 3.12.5, NumPy 2.0.1, SciPy 1.14.0, LLVM 18.1.8, LibRaw 0.21.2 ([pull #1520](https://github.com/bytedeco/javacpp-presets/pull/1520)), Tesseract 5.4.1, libffi 3.4.6, CUDA 12.6.0, cuDNN 9.3.0, NCCL 2.22.3, nvCOMP 4.0.0, OpenCL 3.0.16, NVIDIA Video Codec SDK 12.2.72, PyTorch 2.3.0 ([pull #1466](https://github.com/bytedeco/javacpp-presets/pull/1466)), SentencePiece 0.2.0, TensorFlow Lite 2.17.0, TensorRT 10.3.0.26, Triton Inference Server 2.48.0, ONNX 1.16.2, ONNX Runtime 1.18.1, TVM 0.17.0, and their dependencies + * Upgrade presets for OpenCV 4.10.0, FFmpeg 7.0.2, Spinnaker 4.0.0.116 ([pull #1524](https://github.com/bytedeco/javacpp-presets/pull/1524)), DNNL 3.5.3, OpenBLAS 0.3.28, CMINPACK 1.3.9, GSL 2.8, CPython 3.12.5, NumPy 2.0.1, SciPy 1.14.0, LLVM 18.1.8, LibRaw 0.21.2 ([pull #1520](https://github.com/bytedeco/javacpp-presets/pull/1520)), Tesseract 5.4.1, libffi 3.4.6, CUDA 12.6.0, cuDNN 9.3.0, NCCL 2.22.3, nvCOMP 4.0.0, OpenCL 3.0.16, NVIDIA Video Codec SDK 12.2.72, PyTorch 2.4.0 ([pull #1466](https://github.com/bytedeco/javacpp-presets/pull/1466)), SentencePiece 0.2.0, TensorFlow Lite 2.17.0, TensorRT 10.3.0.26, Triton Inference Server 2.48.0, ONNX 1.16.2, ONNX Runtime 1.18.1, TVM 0.17.0, and their dependencies ### January 29, 2024 version 1.5.10 * Introduce `macosx-arm64` builds for PyTorch ([pull #1463](https://github.com/bytedeco/javacpp-presets/pull/1463)) diff --git a/README.md b/README.md index 5e94db787db..5be8c141bdc 100644 --- a/README.md +++ b/README.md @@ -223,7 +223,7 @@ Each child module in turn relies by default on the included [`cppbuild.sh` scrip * NVIDIA Video Codec SDK 12.2.x https://developer.nvidia.com/nvidia-video-codec-sdk * OpenCL 3.0.x https://github.com/KhronosGroup/OpenCL-ICD-Loader * MXNet 1.9.x https://github.com/apache/incubator-mxnet - * PyTorch 2.3.x 
https://github.com/pytorch/pytorch + * PyTorch 2.4.x https://github.com/pytorch/pytorch * SentencePiece 0.2.0 https://github.com/google/sentencepiece * TensorFlow 1.15.x https://github.com/tensorflow/tensorflow * TensorFlow Lite 2.17.x https://github.com/tensorflow/tensorflow diff --git a/platform/pom.xml b/platform/pom.xml index 0805036ba00..1f42863c6cc 100644 --- a/platform/pom.xml +++ b/platform/pom.xml @@ -292,7 +292,7 @@ org.bytedeco pytorch-platform - 2.3.0-${project.version} + 2.4.0-${project.version} org.bytedeco diff --git a/pytorch/README.md b/pytorch/README.md index 8b28b87b9a7..e5cccb5525b 100644 --- a/pytorch/README.md +++ b/pytorch/README.md @@ -9,7 +9,7 @@ Introduction ------------ This directory contains the JavaCPP Presets module for: - * PyTorch 2.3.0 https://pytorch.org/ + * PyTorch 2.4.0 https://pytorch.org/ Please refer to the parent README.md file for more detailed information about the JavaCPP Presets. @@ -48,14 +48,14 @@ We can use [Maven 3](http://maven.apache.org/) to download and install automatic org.bytedeco pytorch-platform - 2.3.0-1.5.11-SNAPSHOT + 2.4.0-1.5.11-SNAPSHOT org.bytedeco pytorch-platform-gpu - 2.3.0-1.5.11-SNAPSHOT + 2.4.0-1.5.11-SNAPSHOT diff --git a/pytorch/cppbuild.sh b/pytorch/cppbuild.sh index 1d805c3af39..ee70d1d7c66 100755 --- a/pytorch/cppbuild.sh +++ b/pytorch/cppbuild.sh @@ -22,6 +22,9 @@ export USE_CUDNN=0 export USE_NUMPY=0 export USE_OPENMP=1 export USE_SYSTEM_NCCL=1 +export USE_DISTRIBUTED=1 +export USE_NCCL=0 # Not supported on Windows + if [[ "$EXTENSION" == *gpu ]]; then export USE_CUDA=1 export USE_CUDNN=1 @@ -35,7 +38,7 @@ if [[ $PLATFORM == windows* ]]; then export PYTHON_BIN_PATH=$(which python.exe) fi -PYTORCH_VERSION=2.3.0 +PYTORCH_VERSION=2.4.0 export PYTORCH_BUILD_VERSION="$PYTORCH_VERSION" export PYTORCH_BUILD_NUMBER=1 @@ -44,6 +47,23 @@ mkdir -p "$PLATFORM$EXTENSION" cd "$PLATFORM$EXTENSION" INSTALL_PATH=`pwd` +# Distributed needs libuv on Windows (on other platforms, it's included in tensorpipe) +if [[ $PLATFORM == windows* ]]; then + if [[ ! -d libuv ]]; then + mkdir libuv + cd libuv + download https://dist.libuv.org/dist/v1.39.0/libuv-v1.39.0.tar.gz libuv.tgz + tar xfz libuv.tgz + mkdir build + cd build + cmake ../libuv-v1.39.0 -DBUILD_TESTING=OFF + cmake --build . --config Release + cmake --install . --config Release --prefix ../dist + cd ../.. + fi + export libuv_ROOT=${INSTALL_PATH}/libuv/dist +fi + if [[ ! 
-d pytorch ]]; then git clone https://github.com/pytorch/pytorch fi @@ -123,7 +143,7 @@ case $PLATFORM in macosx-arm64) export CC="clang" export CXX="clang++" - export CMAKE_OSX_ARCHITECTURES=arm64 # enable cross-compilation on a x86_64 host machine + # export PATH=$(brew --prefix llvm@18)/bin:$PATH # Use brew LLVM instead of Xcode LLVM 14 export USE_MKLDNN=OFF export USE_QNNPACK=OFF # not compatible with arm64 as of PyTorch 2.1.2 export CMAKE_OSX_DEPLOYMENT_TARGET=11.00 # minimum needed for arm64 support @@ -131,6 +151,8 @@ case $PLATFORM in macosx-x86_64) export CC="clang" export CXX="clang++" + export USE_MKLDNN=OFF + # export PATH=$(brew --prefix llvm@18)/bin:$PATH # Use brew LLVM instead of Xcode LLVM 14 ;; windows-x86_64) if which ccache.exe; then @@ -181,22 +203,53 @@ TORCH_API std::ostream& operator<<(std::ostream& stream, const nn::Module& modul ' torch/csrc/api/include/torch/nn/module.h sedinplace 's/char(\(.*\))/\1/g' torch/csrc/jit/serialization/pickler.h +# Some Windows header defines a macro named "interface" +sedinplace 's/const std::string& interface)/const std::string\& interface_name)/g' torch/csrc/distributed/c10d/ProcessGroupGloo.hpp + +# Fix missing #include (PyTorch 2.4.0) +sedinplace 's/#include /#include \ +#include \ +#include /' torch/csrc/distributed/c10d/control_plane/Handlers.cpp + +# Remove pytorch adaptations of FindOpenMP.cmake: +# On Windows, without iomp and with new versions of VS 2019, combining -openmp:experimental and libomp causes the +# final binary to be linked to both libomp and vcomp and to produce incorrect results. +# Wait for an eventual upstream fix, or for CMake 3.30, which allows choosing between -openmp and -openmp:experimental, +# and see if choosing experimental works. See Issue #1503. +# On Linux, the pytorch FindOpenMP.cmake picks LLVM libomp over libgomp. See Issue #1504. +# On macOS, the standard CMake version works too. +rm cmake/Modules/FindOpenMP.cmake +sedinplace 's/include(${CMAKE_CURRENT_LIST_DIR}\/Modules\/FindOpenMP.cmake)/find_package(OpenMP)/g' cmake/Dependencies.cmake + #USE_FBGEMM=0 USE_KINETO=0 USE_GLOO=0 USE_MKLDNN=0 \ "$PYTHON_BIN_PATH" setup.py build rm -Rf ../lib +if [[ ! -e torch/include/gloo ]]; then + ln -sf ../../third_party/gloo/gloo torch/include +fi ln -sf pytorch/torch/include ../include ln -sf pytorch/torch/lib ../lib ln -sf pytorch/torch/bin ../bin -# fix library with correct rpath on Mac case $PLATFORM in macosx-*) - cp /usr/local/lib/libomp.dylib ../lib/libiomp5.dylib + # Disguise libomp as libiomp5 (they share the same codebase and have the same symbols) + # This helps if the user wants to link with MKL. + # On Linux, a user linking with MKL would need to set + # MKL_THREADING_LAYER=GNU + cp "$(brew ls libomp|grep libomp.dylib)" ../lib/libiomp5.dylib chmod +w ../lib/libiomp5.dylib install_name_tool -id @rpath/libiomp5.dylib ../lib/libiomp5.dylib - install_name_tool -change @rpath/libomp.dylib @rpath/libiomp5.dylib ../lib/libtorch_cpu.dylib + codesign --force -s - ../lib/libiomp5.dylib + old=$(otool -L ../lib/libtorch_cpu.dylib|grep libomp.dylib|awk '{print $1}') + echo install_name_tool -change $old @rpath/libiomp5.dylib ../lib/libtorch_cpu.dylib + install_name_tool -change $old @rpath/libiomp5.dylib ../lib/libtorch_cpu.dylib + codesign --force -s - ../lib/libtorch_cpu.dylib ;; + windows-*) + cp ../libuv/dist/lib/Release/* ../lib + ;; esac cd ../..
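Not part of the patch, but a quick way to sanity-check the macOS libomp-to-libiomp5 relinking performed above (a minimal sketch, assuming the script's layout, where the freshly built libraries land in ../lib relative to the pytorch checkout):

    # illustrative manual check, run from the pytorch source directory after the build
    otool -L ../lib/libtorch_cpu.dylib | grep -i omp        # should reference @rpath/libiomp5.dylib, not libomp.dylib
    otool -D ../lib/libiomp5.dylib                          # install name should be @rpath/libiomp5.dylib
    codesign --verify --verbose ../lib/libtorch_cpu.dylib   # the ad-hoc signature applied by the script should verify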
diff --git a/pytorch/include_list.pl b/pytorch/include_list.pl index a91ad04e216..5c01133ab15 100644 --- a/pytorch/include_list.pl +++ b/pytorch/include_list.pl @@ -18,7 +18,7 @@ ($) for (my $d = @inc_per_depth - 1; $d >= $min_depth; $d--) { if ($inc_per_depth[$d]) { foreach my $i (@{$inc_per_depth[$d]}) { - print "#include \"$i\"\n"; + print "#include \"$i\"\n" unless $incs{$i}; $incs{$i} = 1; } undef $inc_per_depth[$d]; @@ -27,12 +27,20 @@ ($) } sub go { - my $path = join ' ', @_; + my ($roots, $opts) = @_; + my $path = join ' ', @$roots, @$opts; + + my $exe = "g++ -I. -I torch/csrc/api/include/ -DUSE_UCC -DUSE_C10D_GLOO -DUSE_C10D_MPI -DUSE_DISTRIBUTED -H $path -E 2>&1 > /dev/null"; + #my $exe = "g++ -I. -I torch/csrc/api/include/ -DUSE_UCC -DUSE_C10D_GLOO -DUSE_C10D_MPI -DUSE_DISTRIBUTED -D_WIN32 -H $path -E 2>&1 > /dev/null"; + my @inc = `$exe`; + if ($? != 0) { + print STDERR "Failed:\n$exe\nError: $?: $!\n"; + exit $?; + } - my @inc = `g++ -I. -I torch/csrc/api/include/ -H $path -E 2>&1 > /dev/null`; foreach my $i (@inc) { chomp $i; - my ($depth, $f) = $i =~ /^(\.+)\s(.*\.h)$/; + my ($depth, $f) = $i =~ /^(\.+)\s(.*\.h(?:pp)?)$/; next unless $depth; $depth = length($depth); $f =~ s#^\./##; @@ -48,18 +56,33 @@ sub go { push @$incs, $f; } flush(0); + foreach my $i (@$roots) { + print "#include \"$i\"\n" unless $incs{$i}; + $incs{$i} = 1; + } } chdir "cppbuild/linux-x86_64-gpu/pytorch/torch/include"; -go('torch/csrc/api/include/torch/torch.h', 'torch/script.h', 'torch/csrc/inductor/aoti_runner/model_container_runner_cpu.h'); +print <org.bytedeco pytorch-platform-gpu - 2.3.0-${project.parent.version} + 2.4.0-${project.parent.version} JavaCPP Presets Platform GPU for PyTorch diff --git a/pytorch/platform/pom.xml b/pytorch/platform/pom.xml index 7ebc0809dc6..a3ab5725b2d 100644 --- a/pytorch/platform/pom.xml +++ b/pytorch/platform/pom.xml @@ -12,7 +12,7 @@ org.bytedeco pytorch-platform - 2.3.0-${project.parent.version} + 2.4.0-${project.parent.version} JavaCPP Presets Platform for PyTorch @@ -41,6 +41,12 @@ ${project.version} ${javacpp.platform.linux-x86_64} + + ${project.groupId} + ${javacpp.moduleId} + ${project.version} + ${javacpp.platform.macosx-arm64} + ${project.groupId} ${javacpp.moduleId} @@ -65,7 +71,7 @@ - ${javacpp.moduleId}.jar ${javacpp.moduleId}-linux-x86_64.jar ${javacpp.moduleId}-macosx-x86_64.jar ${javacpp.moduleId}-windows-x86_64.jar + ${javacpp.moduleId}.jar ${javacpp.moduleId}-linux-x86_64.jar ${javacpp.moduleId}-macosx-arm64.jar ${javacpp.moduleId}-macosx-x86_64.jar ${javacpp.moduleId}-windows-x86_64.jar @@ -111,6 +117,7 @@ module org.bytedeco.${javacpp.moduleId}.platform { requires static org.bytedeco.${javacpp.moduleId}.linux.x86_64; + requires static org.bytedeco.${javacpp.moduleId}.macosx.arm64; requires static org.bytedeco.${javacpp.moduleId}.macosx.x86_64; requires static org.bytedeco.${javacpp.moduleId}.windows.x86_64; } diff --git a/pytorch/pom.xml b/pytorch/pom.xml index 8f722424484..9335ad36cd8 100644 --- a/pytorch/pom.xml +++ b/pytorch/pom.xml @@ -11,7 +11,7 @@ org.bytedeco pytorch - 2.3.0-${project.parent.version} + 2.4.0-${project.parent.version} JavaCPP Presets for PyTorch @@ -24,6 +24,12 @@ openblas 0.3.28-${project.parent.version} + + org.bytedeco + cuda + 12.6-9.3-${project.parent.version} + true + @@ -43,6 +49,11 @@ openblas-platform 0.3.28-${project.parent.version} + + org.bytedeco + cuda-platform + 12.6-9.3-${project.parent.version} + org.bytedeco numpy-platform @@ -60,6 +71,7 @@ ${basedir}/../openblas/target/classes/ 
${basedir}/../cpython/target/classes/ ${basedir}/../numpy/target/classes/ + ${basedir}/../cuda/target/classes/ ${project.build.outputDirectory} diff --git a/pytorch/samples/pom.xml b/pytorch/samples/pom.xml index e22da3ab5b5..ef136d7088d 100644 --- a/pytorch/samples/pom.xml +++ b/pytorch/samples/pom.xml @@ -12,14 +12,14 @@ org.bytedeco pytorch-platform - 2.3.0-1.5.11-SNAPSHOT + 2.4.0-1.5.11-SNAPSHOT org.bytedeco pytorch-platform-gpu - 2.3.0-1.5.11-SNAPSHOT + 2.4.0-1.5.11-SNAPSHOT diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AOTIModelContainerRunner.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AOTIModelContainerRunner.java index 315a3bb11ad..df70a32da98 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AOTIModelContainerRunner.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AOTIModelContainerRunner.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -35,9 +36,9 @@ public class AOTIModelContainerRunner extends Pointer { public native @ByVal ExtraFilesMap getConstantNamesToOriginalFQNs(); public native @ByVal StringIntMap getConstantNamesToDtypes(); - public native void update_inactive_constant_buffer(@Cast("const torch::inductor::TensorConstantMap*") @ByRef HashAliasedIValueMap const_map); + public native void update_inactive_constant_buffer(@Cast("const torch::inductor::TensorConstantMap*") @ByRef SizeTStringMap const_map); public native void update_constant_buffer( - @Cast("const torch::inductor::TensorConstantMap*") @ByRef HashAliasedIValueMap const_map, + @Cast("const torch::inductor::TensorConstantMap*") @ByRef SizeTStringMap const_map, @Cast("bool") boolean use_inactive, @Cast("bool") boolean validate_full_updates); public native void run_const_fold( diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AOTIModelContainerRunnerCpu.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AOTIModelContainerRunnerCpu.java index 245736a92bb..76e7b1fcc4d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AOTIModelContainerRunnerCpu.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AOTIModelContainerRunnerCpu.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ASMoutput.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ASMoutput.java index 16a89281a22..c5401cd7f4d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ASMoutput.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ASMoutput.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; 
-import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AcceleratorHooksInterface.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AcceleratorHooksInterface.java index ba443287ab4..9fb904a24ff 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AcceleratorHooksInterface.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AcceleratorHooksInterface.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -34,4 +35,14 @@ public class AcceleratorHooksInterface extends Pointer { // Whether the device at device_index is fully initialized or not. public native @Cast("bool") boolean hasPrimaryContext(@Cast("c10::DeviceIndex") byte device_index); + + public native @Cast("c10::DeviceIndex") byte deviceCount(); + + public native void setCurrentDevice(@Cast("c10::DeviceIndex") byte device); + + public native @Cast("c10::DeviceIndex") byte getCurrentDevice(); + + public native @Cast("c10::DeviceIndex") byte exchangeDevice(@Cast("c10::DeviceIndex") byte device); + + public native @Cast("c10::DeviceIndex") byte maybeExchangeDevice(@Cast("c10::DeviceIndex") byte device); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ActivityTypeSet.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ActivityTypeSet.java index a439f4848b5..a8a5afb0263 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ActivityTypeSet.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ActivityTypeSet.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Adagrad.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Adagrad.java index 01a7f182ea6..c20bfebb1ba 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Adagrad.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Adagrad.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; 
import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AdagradOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AdagradOptions.java index 04ff7eeb15d..7e8f958d7a6 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AdagradOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AdagradOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace torch diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AdagradParamState.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AdagradParamState.java index 359ee78b6c3..17b1c8f7d2b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AdagradParamState.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AdagradParamState.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Adam.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Adam.java index 9b3ddfe5273..04e03f32b03 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Adam.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Adam.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AdamOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AdamOptions.java index a466026a740..0117a4e490c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AdamOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AdamOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static 
org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace torch diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AdamParamState.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AdamParamState.java index d944ca047fe..985da3871c8 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AdamParamState.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AdamParamState.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AdamW.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AdamW.java index 44de72027df..bf27f6b2d6a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AdamW.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AdamW.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AdamWOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AdamWOptions.java index e80fb9128bd..85331179736 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AdamWOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AdamWOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace torch diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AdamWParamState.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AdamWParamState.java index 16efed25070..efdbb70c953 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AdamWParamState.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AdamWParamState.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static 
org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool1dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool1dImpl.java index f8c1bb8caf7..445378866ce 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool1dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool1dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~ AdaptiveAvgPool1d ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies adaptive avgpool over a 1-D input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.AdaptiveAvgPool1d to + * See https://pytorch.org/docs/main/nn.html#torch.nn.AdaptiveAvgPool1d to * learn about the exact behavior of this module. * * See the documentation for {@code torch::nn::AdaptiveAvgPool1dOptions} class to diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool1dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool1dImplBase.java index 40b08a1ce43..127191932c6 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool1dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool1dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool1dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool1dImplCloneable.java index 48dfd2fbd0c..6cd0d41375f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool1dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool1dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class AdaptiveAvgPool1dImplCloneable extends Module { * and 
submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool1dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool1dOptions.java index d5c58a579ad..bb74f77f391 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool1dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool1dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool2dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool2dImpl.java index d810f6bb144..d6d32e09206 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool2dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool2dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~ AdaptiveAvgPool2d ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies adaptive avgpool over a 2-D input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.AdaptiveAvgPool2d to + * See https://pytorch.org/docs/main/nn.html#torch.nn.AdaptiveAvgPool2d to * learn about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::AdaptiveAvgPool2dOptions} class to diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool2dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool2dImplBase.java index 276f3a01156..c9a9c2155e1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool2dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool2dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool2dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool2dImplCloneable.java index 7ddff12187a..014f7184f1b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool2dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool2dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class AdaptiveAvgPool2dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool2dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool2dOptions.java index e51d5d8c1fc..e3d8cefb60d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool2dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool2dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool3dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool3dImpl.java index 0b4b3af9720..bf7938f53d2 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool3dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool3dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~ AdaptiveAvgPool3d ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies adaptive avgpool over a 3-D input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.AdaptiveAvgPool3d to + * See https://pytorch.org/docs/main/nn.html#torch.nn.AdaptiveAvgPool3d to * learn about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::AdaptiveAvgPool3dOptions} class to diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool3dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool3dImplBase.java index 5f1a5b4e6e3..057e7e6e1ad 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool3dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool3dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool3dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool3dImplCloneable.java index 0133810ada4..680a8c29567 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool3dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool3dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class AdaptiveAvgPool3dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool3dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool3dOptions.java index 71020055604..76b2760febd 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool3dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool3dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveLogSoftmaxWithLossImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveLogSoftmaxWithLossImpl.java index eda1d3583fc..9231dc376db 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveLogSoftmaxWithLossImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveLogSoftmaxWithLossImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -25,7 +26,7 @@ * {@code Efficient softmax approximation for GPUs}_ by Edouard Grave, Armand Joulin, * Moustapha Cissé, David Grangier, and Hervé Jégou. * See - * https://pytorch.org/docs/master/nn.html#torch.nn.AdaptiveLogSoftmaxWithLoss + * https://pytorch.org/docs/main/nn.html#torch.nn.AdaptiveLogSoftmaxWithLoss * to learn about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::AdaptiveLogSoftmaxWithLossOptions} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveLogSoftmaxWithLossImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveLogSoftmaxWithLossImplCloneable.java index af51f4e06e1..a1dfcbd3a2d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveLogSoftmaxWithLossImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveLogSoftmaxWithLossImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class AdaptiveLogSoftmaxWithLossImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveLogSoftmaxWithLossOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveLogSoftmaxWithLossOptions.java index ca1256eafa8..29188bd2c2c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveLogSoftmaxWithLossOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveLogSoftmaxWithLossOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool1dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool1dImpl.java index 81010dd00cc..ab941e60976 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool1dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool1dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~ AdaptiveMaxPool1d ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies adaptive maxpool over a 1-D 
input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.AdaptiveMaxPool1d to + * See https://pytorch.org/docs/main/nn.html#torch.nn.AdaptiveMaxPool1d to * learn about the exact behavior of this module. * * See the documentation for {@code torch::nn::AdaptiveMaxPool1dOptions} class to diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool1dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool1dImplBase.java index 495fa7d06c0..2b04569bc1a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool1dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool1dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool1dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool1dImplCloneable.java index 418391fb82f..7182a820e06 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool1dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool1dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class AdaptiveMaxPool1dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool1dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool1dOptions.java index 0b41b14b6ee..20cbb8d016a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool1dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool1dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool2dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool2dImpl.java index 1843987daea..dbf4b012431 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool2dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool2dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ AdaptiveMaxPool2d ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies adaptive maxpool over a 2-D input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.AdaptiveMaxPool2d to + * See https://pytorch.org/docs/main/nn.html#torch.nn.AdaptiveMaxPool2d to * learn about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::AdaptiveMaxPool2dOptions} class to diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool2dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool2dImplBase.java index 273ea117a34..068bd073015 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool2dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool2dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool2dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool2dImplCloneable.java index c524e3d8712..1145fd54eb9 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool2dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool2dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class AdaptiveMaxPool2dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool2dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool2dOptions.java index 2b9b8f5b633..97ffbd0f81f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool2dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool2dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool3dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool3dImpl.java index ef3cec1183d..b7848ec8740 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool3dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool3dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ AdaptiveMaxPool3d ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies adaptive maxpool over a 3-D input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.AdaptiveMaxPool3d to + * See https://pytorch.org/docs/main/nn.html#torch.nn.AdaptiveMaxPool3d to * learn about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::AdaptiveMaxPool3dOptions} class to diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool3dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool3dImplBase.java index 85d7563ae1e..9282af1448f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool3dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool3dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool3dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool3dImplCloneable.java index 0815e37d3c7..c73ce111401 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool3dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool3dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class AdaptiveMaxPool3dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool3dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool3dOptions.java index d52a037fca0..14d5e25ba4c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool3dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool3dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AliasDb.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AliasDb.java index 8595ece7434..ee525ef48bc 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AliasDb.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AliasDb.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace utils diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AliasInfo.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AliasInfo.java index f8c3f056ccc..0646d0a1f3c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AliasInfo.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AliasInfo.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AliasInfoOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AliasInfoOptional.java index c91646faf20..a9346b6a9e4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AliasInfoOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AliasInfoOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import 
static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class AliasInfoOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AliasTypeSetOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AliasTypeSetOptional.java index 0e80109be7f..506d71476c0 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AliasTypeSetOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AliasTypeSetOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class AliasTypeSetOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AllToAllOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AllToAllOptions.java new file mode 100644 index 00000000000..fdba5130684 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AllToAllOptions.java @@ -0,0 +1,41 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +@Namespace("c10d") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class AllToAllOptions extends Pointer { + static { Loader.load(); } + /** Default native constructor. */ + public AllToAllOptions() { super((Pointer)null); allocate(); } + /** Native array allocator. Access with {@link Pointer#position(long)}. */ + public AllToAllOptions(long size) { super((Pointer)null); allocateArray(size); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ + public AllToAllOptions(Pointer p) { super(p); } + private native void allocate(); + private native void allocateArray(long size); + @Override public AllToAllOptions position(long position) { + return (AllToAllOptions)super.position(position); + } + @Override public AllToAllOptions getPointer(long i) { + return new AllToAllOptions((Pointer)this).offsetAddress(i); + } + + public native @ByRef Milliseconds timeout(); public native AllToAllOptions timeout(Milliseconds setter); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AllgatherOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AllgatherOptions.java new file mode 100644 index 00000000000..f980224301f --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AllgatherOptions.java @@ -0,0 +1,42 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +@Namespace("c10d") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class AllgatherOptions extends Pointer { + static { Loader.load(); } + /** Default native constructor. */ + public AllgatherOptions() { super((Pointer)null); allocate(); } + /** Native array allocator. Access with {@link Pointer#position(long)}. */ + public AllgatherOptions(long size) { super((Pointer)null); allocateArray(size); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ + public AllgatherOptions(Pointer p) { super(p); } + private native void allocate(); + private native void allocateArray(long size); + @Override public AllgatherOptions position(long position) { + return (AllgatherOptions)super.position(position); + } + @Override public AllgatherOptions getPointer(long i) { + return new AllgatherOptions((Pointer)this).offsetAddress(i); + } + + public native @ByRef Milliseconds timeout(); public native AllgatherOptions timeout(Milliseconds setter); + public native @Cast("bool") boolean asyncOp(); public native AllgatherOptions asyncOp(boolean setter); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Allocator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Allocator.java index f6dc7870f9d..a9ae2b30865 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Allocator.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Allocator.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DontIncreaseRefcount.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AllreduceCoalescedOptions.java similarity index 66% rename from pytorch/src/gen/java/org/bytedeco/pytorch/DontIncreaseRefcount.java rename to pytorch/src/gen/java/org/bytedeco/pytorch/AllreduceCoalescedOptions.java index ea5a7e645bd..3d8e2f2dc43 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DontIncreaseRefcount.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AllreduceCoalescedOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,15 +13,16 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -// constructor tag used by intrusive_ptr constructors -@Namespace("c10::raw") @Opaque @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class DontIncreaseRefcount extends Pointer { +@Namespace("c10d") @Opaque @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class AllreduceCoalescedOptions extends AllreduceOptions { /** Empty constructor. Calls {@code super((Pointer)null)}. */ - public DontIncreaseRefcount() { super((Pointer)null); } + public AllreduceCoalescedOptions() { super((Pointer)null); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ - public DontIncreaseRefcount(Pointer p) { super(p); } + public AllreduceCoalescedOptions(Pointer p) { super(p); } } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AllreduceOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AllreduceOptions.java new file mode 100644 index 00000000000..f44646cc98f --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AllreduceOptions.java @@ -0,0 +1,43 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +@Namespace("c10d") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class AllreduceOptions extends Pointer { + static { Loader.load(); } + /** Default native constructor. */ + public AllreduceOptions() { super((Pointer)null); allocate(); } + /** Native array allocator. Access with {@link Pointer#position(long)}. */ + public AllreduceOptions(long size) { super((Pointer)null); allocateArray(size); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public AllreduceOptions(Pointer p) { super(p); } + private native void allocate(); + private native void allocateArray(long size); + @Override public AllreduceOptions position(long position) { + return (AllreduceOptions)super.position(position); + } + @Override public AllreduceOptions getPointer(long i) { + return new AllreduceOptions((Pointer)this).offsetAddress(i); + } + + public native @ByRef @NoOffset ReduceOp reduceOp(); public native AllreduceOptions reduceOp(ReduceOp setter); + public native @ByRef @NoOffset Milliseconds timeout(); public native AllreduceOptions timeout(Milliseconds setter); + public native @ByRef @NoOffset TensorOptional sparseIndices(); public native AllreduceOptions sparseIndices(TensorOptional setter); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AlphaDropoutFuncOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AlphaDropoutFuncOptions.java index 13a6855649f..45001967ae0 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AlphaDropoutFuncOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AlphaDropoutFuncOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AlphaDropoutImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AlphaDropoutImpl.java index 8aa6f8c4e9b..9f91172110c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AlphaDropoutImpl.java +++ 
b/pytorch/src/gen/java/org/bytedeco/pytorch/AlphaDropoutImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ AlphaDropout ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies Alpha Dropout over the input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.AlphaDropout to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.AlphaDropout to learn * about the exact behavior of this module. * * See the documentation for {@code torch::nn::AlphaDropoutOptions} class to learn diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AlphaDropoutImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AlphaDropoutImplBase.java index 18fab886e05..7ef9562d602 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AlphaDropoutImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AlphaDropoutImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AlphaDropoutImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AlphaDropoutImplCloneable.java index 859ea089357..db38d7884c4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AlphaDropoutImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AlphaDropoutImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class AlphaDropoutImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AnomalyMetadata.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AnomalyMetadata.java index 78e2a22497e..c2884e536c8 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AnomalyMetadata.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AnomalyMetadata.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AnomalyMode.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AnomalyMode.java index c5574cb4df9..858438ff56b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AnomalyMode.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AnomalyMode.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AnyClassType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AnyClassType.java index 31dc3152958..c8e8cf4c960 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AnyClassType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AnyClassType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AnyClassTypePtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AnyClassTypePtr.java index 16a3faca35f..42c6165e3f7 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AnyClassTypePtr.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AnyClassTypePtr.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static 
org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AnyEnumType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AnyEnumType.java index d3312cd32db..38cd010e51b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AnyEnumType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AnyEnumType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AnyEnumTypePtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AnyEnumTypePtr.java index 7ff7901fdf4..f7eee44556e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AnyEnumTypePtr.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AnyEnumTypePtr.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AnyListType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AnyListType.java index a039e827418..b99bb58cc8c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AnyListType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AnyListType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AnyListTypePtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AnyListTypePtr.java index f25a6d33ee6..fc0fb12f3dd 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AnyListTypePtr.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AnyListTypePtr.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import 
static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AnyModule.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AnyModule.java index 3f97cc39eec..3991d3564c1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AnyModule.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AnyModule.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -390,7 +391,7 @@ public class AnyModule extends Pointer { /** Creates a deep copy of an {@code AnyModule} if it contains a module, else an * empty {@code AnyModule} if it is empty. */ - public native @ByVal AnyModule clone(@ByVal(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + public native @ByVal AnyModule clone(@ByVal(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @ByVal AnyModule clone(); /** Assigns a module to the {@code AnyModule} (to circumvent the explicit @@ -406,9 +407,9 @@ public class AnyModule extends Pointer { public native @ByVal AnyValue any_forward(@Const @ByRef Tensor input1, @Const @ByRef Tensor input2, @Const @ByRef Tensor input3, @Const @ByRef Tensor input4); public native @ByVal AnyValue any_forward(@Const @ByRef Tensor input1, @Const @ByRef Tensor input2, @Const @ByRef Tensor input3, @Const @ByRef Tensor input4, @Const @ByRef Tensor input5, @Const @ByRef Tensor input6); public native @ByVal AnyValue any_forward(@Const @ByRef Tensor input1, @Const @ByRef Tensor input2, @Const @ByRef Tensor input3, @Const @ByRef Tensor input4, @Const @ByRef Tensor input5, @Const @ByRef Tensor input6, @Const @ByRef Tensor input7, @Const @ByRef Tensor input8); - public native @ByVal AnyValue any_forward(@Const @ByRef Tensor input, @ByRef(nullValue = "c10::optional(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... output_size); - public native @ByVal AnyValue any_forward(@Const @ByRef Tensor input, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") LongArrayRefOptional output_size); - public native @ByVal AnyValue any_forward(@Const @ByRef Tensor input, @Const @ByRef Tensor indices, @Const @ByRef(nullValue = "c10::optional >(c10::nullopt)") LongVectorOptional output_size); + public native @ByVal AnyValue any_forward(@Const @ByRef Tensor input, @ByRef(nullValue = "std::optional(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... 
output_size); + public native @ByVal AnyValue any_forward(@Const @ByRef Tensor input, @Const @ByRef(nullValue = "std::optional(c10::nullopt)") LongArrayRefOptional output_size); + public native @ByVal AnyValue any_forward(@Const @ByRef Tensor input, @Const @ByRef Tensor indices, @Const @ByRef(nullValue = "std::optional >(c10::nullopt)") LongVectorOptional output_size); public native @ByVal AnyValue any_forward(@Const @ByRef Tensor input, @ByVal(nullValue = "torch::optional >{}") T_TensorTensor_TOptional hx_opt); public native @ByVal AnyValue any_forward(@Const @ByRef Tensor query, @Const @ByRef Tensor key, @Const @ByRef Tensor value, @Const @ByRef(nullValue = "torch::Tensor{}") Tensor key_padding_mask, @Cast("bool") boolean need_weights/*=true*/, @Const @ByRef(nullValue = "torch::Tensor{}") Tensor attn_mask, @Cast("bool") boolean average_attn_weights/*=true*/); @@ -421,9 +422,9 @@ public class AnyModule extends Pointer { public native @ByVal Tensor forward(@Const @ByRef Tensor input1, @Const @ByRef Tensor input2, @Const @ByRef Tensor input3, @Const @ByRef Tensor input4); public native @ByVal Tensor forward(@Const @ByRef Tensor input1, @Const @ByRef Tensor input2, @Const @ByRef Tensor input3, @Const @ByRef Tensor input4, @Const @ByRef Tensor input5, @Const @ByRef Tensor input6); public native @ByVal Tensor forward(@Const @ByRef Tensor input1, @Const @ByRef Tensor input2, @Const @ByRef Tensor input3, @Const @ByRef Tensor input4, @Const @ByRef Tensor input5, @Const @ByRef Tensor input6, @Const @ByRef Tensor input7, @Const @ByRef Tensor input8); - public native @ByVal Tensor forward(@Const @ByRef Tensor input, @ByRef(nullValue = "c10::optional(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... output_size); - public native @ByVal Tensor forward(@Const @ByRef Tensor input, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") LongArrayRefOptional output_size); - public native @ByVal Tensor forward(@Const @ByRef Tensor input, @Const @ByRef Tensor indices, @Const @ByRef(nullValue = "c10::optional >(c10::nullopt)") LongVectorOptional output_size); + public native @ByVal Tensor forward(@Const @ByRef Tensor input, @ByRef(nullValue = "std::optional(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... 
output_size); + public native @ByVal Tensor forward(@Const @ByRef Tensor input, @Const @ByRef(nullValue = "std::optional(c10::nullopt)") LongArrayRefOptional output_size); + public native @ByVal Tensor forward(@Const @ByRef Tensor input, @Const @ByRef Tensor indices, @Const @ByRef(nullValue = "std::optional >(c10::nullopt)") LongVectorOptional output_size); public native @ByVal @Name("forward>>") T_TensorT_TensorTensor_T_T forwardT_TensorT_TensorTensor_T_T(@Const @ByRef Tensor input); public native @ByVal @Name("forward>>") T_TensorT_TensorTensor_T_T forwardT_TensorT_TensorTensor_T_T(@Const @ByRef Tensor input, @ByVal(nullValue = "torch::optional >{}") T_TensorTensor_TOptional hx_opt); public native @ByVal @Name("forward>") T_TensorTensor_T forwardT_TensorTensor_T(@Const @ByRef Tensor input); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AnyModuleVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AnyModuleVector.java index a386153cdd3..f4106958483 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AnyModuleVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AnyModuleVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AnyTupleType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AnyTupleType.java index 65d89929367..0e6cf7df191 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AnyTupleType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AnyTupleType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AnyTupleTypePtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AnyTupleTypePtr.java index 66ff484d421..c3542add493 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AnyTupleTypePtr.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AnyTupleTypePtr.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AnyType.java 
b/pytorch/src/gen/java/org/bytedeco/pytorch/AnyType.java index 19a4af193fa..56793933e85 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AnyType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AnyType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AnyTypePtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AnyTypePtr.java index da7ffc8590e..ae897e5d258 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AnyTypePtr.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AnyTypePtr.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AnyValue.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AnyValue.java index 7af1d36c38f..ddd85274423 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AnyValue.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AnyValue.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Apply.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Apply.java index d24b10f427d..03fdf06401d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Apply.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Apply.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,8 +25,8 @@ public class Apply extends Expr { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public Apply(Pointer p) { super(p); } - public Apply(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public Apply(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @ByVal Expr callee(); public native @ByVal ExprList inputs(); public native @ByVal AttributeList attributes(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/ApproximateClockToUnixTimeConverter.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ApproximateClockToUnixTimeConverter.java similarity index 86% rename from pytorch/src/gen/java/org/bytedeco/pytorch/cuda/ApproximateClockToUnixTimeConverter.java rename to pytorch/src/gen/java/org/bytedeco/pytorch/ApproximateClockToUnixTimeConverter.java index af3bd4b9195..6162e8b2a4b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/ApproximateClockToUnixTimeConverter.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ApproximateClockToUnixTimeConverter.java @@ -1,14 +1,11 @@ // Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE -package org.bytedeco.pytorch.cuda; +package org.bytedeco.pytorch; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; import org.bytedeco.javacpp.*; import org.bytedeco.javacpp.annotation.*; @@ -16,14 +13,14 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; -import org.bytedeco.pytorch.*; -import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; -import static org.bytedeco.pytorch.global.torch_cuda.*; +import static org.bytedeco.pytorch.global.torch.*; // Convert `getCount` results to Nanoseconds since unix epoch. -@Namespace("c10") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch_cuda.class) +@Namespace("c10") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class ApproximateClockToUnixTimeConverter extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Argument.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Argument.java index 063e9d0b520..9a807c22489 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Argument.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Argument.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -36,50 +37,50 @@ public class Argument extends Pointer { public Argument( @StdString BytePointer name/*=""*/, @Const @ByRef(nullValue = "c10::TypePtr(nullptr)") Type.TypePtr type, - @ByVal(nullValue = "c10::optional(c10::nullopt)") IntOptional N, - @ByVal(nullValue = "c10::optional(c10::nullopt)") IValueOptional default_value, + @ByVal(nullValue = "std::optional(c10::nullopt)") IntOptional N, + @ByVal(nullValue = "std::optional(c10::nullopt)") IValueOptional default_value, @Cast("bool") boolean kwarg_only/*=false*/, - @ByVal(nullValue = "c10::optional(c10::nullopt)") AliasInfoOptional alias_info) { super((Pointer)null); allocate(name, type, N, default_value, kwarg_only, alias_info); } + @ByVal(nullValue = "std::optional(c10::nullopt)") AliasInfoOptional alias_info) { super((Pointer)null); allocate(name, type, N, default_value, kwarg_only, alias_info); } private native void allocate( @StdString BytePointer name/*=""*/, @Const @ByRef(nullValue = "c10::TypePtr(nullptr)") Type.TypePtr type, - @ByVal(nullValue = "c10::optional(c10::nullopt)") IntOptional N, - @ByVal(nullValue = "c10::optional(c10::nullopt)") IValueOptional default_value, + @ByVal(nullValue = "std::optional(c10::nullopt)") IntOptional N, + @ByVal(nullValue = "std::optional(c10::nullopt)") IValueOptional default_value, @Cast("bool") boolean kwarg_only/*=false*/, - @ByVal(nullValue = "c10::optional(c10::nullopt)") AliasInfoOptional alias_info); + @ByVal(nullValue = "std::optional(c10::nullopt)") AliasInfoOptional alias_info); public Argument() { super((Pointer)null); allocate(); } private native void allocate(); public Argument( @StdString String name/*=""*/, @Const @ByRef(nullValue = "c10::TypePtr(nullptr)") Type.TypePtr type, - @ByVal(nullValue = "c10::optional(c10::nullopt)") IntOptional N, - @ByVal(nullValue = "c10::optional(c10::nullopt)") IValueOptional default_value, + @ByVal(nullValue = "std::optional(c10::nullopt)") IntOptional N, + @ByVal(nullValue = "std::optional(c10::nullopt)") IValueOptional default_value, @Cast("bool") boolean kwarg_only/*=false*/, - @ByVal(nullValue = "c10::optional(c10::nullopt)") AliasInfoOptional alias_info) { super((Pointer)null); allocate(name, type, N, default_value, kwarg_only, alias_info); } + @ByVal(nullValue = "std::optional(c10::nullopt)") AliasInfoOptional alias_info) { super((Pointer)null); allocate(name, type, N, default_value, kwarg_only, alias_info); } private native void allocate( @StdString String name/*=""*/, @Const @ByRef(nullValue = "c10::TypePtr(nullptr)") Type.TypePtr type, - @ByVal(nullValue = "c10::optional(c10::nullopt)") IntOptional N, - @ByVal(nullValue = "c10::optional(c10::nullopt)") IValueOptional default_value, + 
@ByVal(nullValue = "std::optional(c10::nullopt)") IntOptional N, + @ByVal(nullValue = "std::optional(c10::nullopt)") IValueOptional default_value, @Cast("bool") boolean kwarg_only/*=false*/, - @ByVal(nullValue = "c10::optional(c10::nullopt)") AliasInfoOptional alias_info); + @ByVal(nullValue = "std::optional(c10::nullopt)") AliasInfoOptional alias_info); public Argument( @StdString BytePointer name, @ByVal Type.TypePtr fake_type, @ByVal Type.TypePtr real_type, - @ByVal(nullValue = "c10::optional(c10::nullopt)") IntOptional N, - @ByVal(nullValue = "c10::optional(c10::nullopt)") IValueOptional default_value, + @ByVal(nullValue = "std::optional(c10::nullopt)") IntOptional N, + @ByVal(nullValue = "std::optional(c10::nullopt)") IValueOptional default_value, @Cast("bool") boolean kwarg_only/*=false*/, - @ByVal(nullValue = "c10::optional(c10::nullopt)") AliasInfoOptional alias_info) { super((Pointer)null); allocate(name, fake_type, real_type, N, default_value, kwarg_only, alias_info); } + @ByVal(nullValue = "std::optional(c10::nullopt)") AliasInfoOptional alias_info) { super((Pointer)null); allocate(name, fake_type, real_type, N, default_value, kwarg_only, alias_info); } private native void allocate( @StdString BytePointer name, @ByVal Type.TypePtr fake_type, @ByVal Type.TypePtr real_type, - @ByVal(nullValue = "c10::optional(c10::nullopt)") IntOptional N, - @ByVal(nullValue = "c10::optional(c10::nullopt)") IValueOptional default_value, + @ByVal(nullValue = "std::optional(c10::nullopt)") IntOptional N, + @ByVal(nullValue = "std::optional(c10::nullopt)") IValueOptional default_value, @Cast("bool") boolean kwarg_only/*=false*/, - @ByVal(nullValue = "c10::optional(c10::nullopt)") AliasInfoOptional alias_info); + @ByVal(nullValue = "std::optional(c10::nullopt)") AliasInfoOptional alias_info); public Argument( @StdString BytePointer name, @ByVal Type.TypePtr fake_type, @@ -92,18 +93,18 @@ public Argument( @StdString String name, @ByVal Type.TypePtr fake_type, @ByVal Type.TypePtr real_type, - @ByVal(nullValue = "c10::optional(c10::nullopt)") IntOptional N, - @ByVal(nullValue = "c10::optional(c10::nullopt)") IValueOptional default_value, + @ByVal(nullValue = "std::optional(c10::nullopt)") IntOptional N, + @ByVal(nullValue = "std::optional(c10::nullopt)") IValueOptional default_value, @Cast("bool") boolean kwarg_only/*=false*/, - @ByVal(nullValue = "c10::optional(c10::nullopt)") AliasInfoOptional alias_info) { super((Pointer)null); allocate(name, fake_type, real_type, N, default_value, kwarg_only, alias_info); } + @ByVal(nullValue = "std::optional(c10::nullopt)") AliasInfoOptional alias_info) { super((Pointer)null); allocate(name, fake_type, real_type, N, default_value, kwarg_only, alias_info); } private native void allocate( @StdString String name, @ByVal Type.TypePtr fake_type, @ByVal Type.TypePtr real_type, - @ByVal(nullValue = "c10::optional(c10::nullopt)") IntOptional N, - @ByVal(nullValue = "c10::optional(c10::nullopt)") IValueOptional default_value, + @ByVal(nullValue = "std::optional(c10::nullopt)") IntOptional N, + @ByVal(nullValue = "std::optional(c10::nullopt)") IValueOptional default_value, @Cast("bool") boolean kwarg_only/*=false*/, - @ByVal(nullValue = "c10::optional(c10::nullopt)") AliasInfoOptional alias_info); + @ByVal(nullValue = "std::optional(c10::nullopt)") AliasInfoOptional alias_info); public Argument( @StdString String name, @ByVal Type.TypePtr fake_type, @@ -136,7 +137,7 @@ private native void allocate( public native @StdString BytePointer formatTypeMismatchMsg(@StdString 
BytePointer actual_type); public native @StdString String formatTypeMismatchMsg(@StdString String actual_type); - public native @ByVal Argument cloneWithType(@ByVal Type.TypePtr new_type); + public native @ByVal Argument cloneWithType(@Const @ByRef Type.TypePtr new_type); // this function checks whether this Argument is backward compatible with // the old one. we consider the following cases are backward compatible: diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ArgumentArrayRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ArgumentArrayRef.java index ace579e04af..ccf782c4694 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ArgumentArrayRef.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ArgumentArrayRef.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ArgumentDef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ArgumentDef.java index 91235fdfa60..bece5c5c64c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ArgumentDef.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ArgumentDef.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ArgumentDefArrayRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ArgumentDefArrayRef.java index 7d08eb14fa8..e4d0c7d4800 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ArgumentDefArrayRef.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ArgumentDefArrayRef.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ArgumentInfo.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ArgumentInfo.java index 7596de57fbe..4161ba4506b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ArgumentInfo.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ArgumentInfo.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import 
org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ArgumentSpec.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ArgumentSpec.java index 51f1009575c..c6a2e2c372d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ArgumentSpec.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ArgumentSpec.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ArgumentSpecCreator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ArgumentSpecCreator.java index 04e02c3e52b..ea468cd8848 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ArgumentSpecCreator.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ArgumentSpecCreator.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ArgumentSpecExecutionPlanMap.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ArgumentSpecExecutionPlanMap.java index c08603e1f7c..0e417c965aa 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ArgumentSpecExecutionPlanMap.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ArgumentSpecExecutionPlanMap.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Assert.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Assert.java index abd978952b7..0c5a9ee876e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Assert.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Assert.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import 
org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,8 +25,8 @@ public class Assert extends Stmt { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ public Assert(Pointer p) { super(p); } - public Assert(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public Assert(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @ByVal Expr test(); public native @ByVal ExprMaybe msg(); public static native @ByVal Assert create( diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Assign.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Assign.java index b708332cc3c..77b54b7538c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Assign.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Assign.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,8 +25,8 @@ public class Assign extends Stmt { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public Assign(Pointer p) { super(p); } - public Assign(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public Assign(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public static native @ByVal Assign create( @Const @ByRef SourceRange range, @Const @ByRef ExprList lhs, diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AssignList.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AssignList.java index 68282742cdf..a031d5c9a2c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AssignList.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AssignList.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -25,8 +26,8 @@ public class AssignList extends TreeView { public AssignList(Pointer p) { super(p); } - public AssignList(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public AssignList(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @ByVal @Cast("torch::jit::List::iterator*") AssignListIterator begin(); public native @ByVal @Cast("torch::jit::List::iterator*") AssignListIterator end(); public native @Cast("bool") boolean empty(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AssignListIterator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AssignListIterator.java index ad6633bba4b..2c568a7c7f2 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AssignListIterator.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AssignListIterator.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,8 +25,8 @@ public class AssignListIterator extends Pointer { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public AssignListIterator(Pointer p) { super(p); } - public AssignListIterator(@ByVal @Cast("torch::jit::TreeList::const_iterator*") TreeRef it) { super((Pointer)null); allocate(it); } - private native void allocate(@ByVal @Cast("torch::jit::TreeList::const_iterator*") TreeRef it); + public AssignListIterator(@ByVal @Cast("torch::jit::TreeList::const_iterator*") Tree it) { super((Pointer)null); allocate(it); } + private native void allocate(@ByVal @Cast("torch::jit::TreeList::const_iterator*") Tree it); public native @Cast("bool") @Name("operator !=") boolean notEquals(@Const @ByRef AssignListIterator rhs); public native @Cast("bool") @Name("operator ==") boolean equals(@Const @ByRef AssignListIterator rhs); public native @ByVal @Name("operator *") Assign multiply(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AssignListMaybe.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AssignListMaybe.java index 1387fb99e9e..7a054439ee6 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AssignListMaybe.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AssignListMaybe.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,8 +25,8 @@ public class AssignListMaybe extends TreeView { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ public AssignListMaybe(Pointer p) { super(p); } - public AssignListMaybe(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public AssignListMaybe(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); /* implicit */ public AssignListMaybe(@Const @ByRef AssignList tree) { super((Pointer)null); allocate(tree); } private native void allocate(@Const @ByRef AssignList tree); public native @Cast("bool") boolean present(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Attribute.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Attribute.java index 27ce7cc23e9..492655c2920 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Attribute.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Attribute.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -28,12 +29,12 @@ public class Attribute extends TreeView { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public Attribute(Pointer p) { super(p); } - public Attribute(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public Attribute(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @ByVal Ident name(); public native @ByVal Expr value(); public static native @ByVal Attribute create( @Const @ByRef SourceRange range, @Const @ByRef Ident name, - @Const @ByRef TreeRef value); + @IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree value); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AttributeList.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AttributeList.java index 7e44a5c75ba..3a9630d7692 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AttributeList.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AttributeList.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -25,8 +26,8 @@ public class AttributeList extends TreeView { public AttributeList(Pointer p) { super(p); } - public AttributeList(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public AttributeList(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @ByVal @Cast("torch::jit::List::iterator*") AttributeListIterator begin(); public native @ByVal @Cast("torch::jit::List::iterator*") AttributeListIterator end(); public native @Cast("bool") boolean empty(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AttributeListIterator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AttributeListIterator.java index 86f3a2a863f..e111e3a1626 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AttributeListIterator.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AttributeListIterator.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,8 +25,8 @@ public class AttributeListIterator extends Pointer { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public AttributeListIterator(Pointer p) { super(p); } - public AttributeListIterator(@ByVal @Cast("torch::jit::TreeList::const_iterator*") TreeRef it) { super((Pointer)null); allocate(it); } - private native void allocate(@ByVal @Cast("torch::jit::TreeList::const_iterator*") TreeRef it); + public AttributeListIterator(@ByVal @Cast("torch::jit::TreeList::const_iterator*") Tree it) { super((Pointer)null); allocate(it); } + private native void allocate(@ByVal @Cast("torch::jit::TreeList::const_iterator*") Tree it); public native @Cast("bool") @Name("operator !=") boolean notEquals(@Const @ByRef AttributeListIterator rhs); public native @Cast("bool") @Name("operator ==") boolean equals(@Const @ByRef AttributeListIterator rhs); public native @ByVal @Name("operator *") Attribute multiply(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AttributePolicy.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AttributePolicy.java index 318e6081a01..a8094cf11ea 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AttributePolicy.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AttributePolicy.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AttributeValue.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AttributeValue.java index 353b3fbda30..5593ee1d929 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AttributeValue.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AttributeValue.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AugAssign.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AugAssign.java index ce606b315ce..3672a778f4c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AugAssign.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AugAssign.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -25,8 +26,8 @@ public class AugAssign extends Stmt { /** Pointer cast constructor. 
Invokes {@link Pointer#Pointer(Pointer)}. */ public AugAssign(Pointer p) { super(p); } - public AugAssign(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public AugAssign(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public static native @ByVal AugAssign create( @Const @ByRef SourceRange range, @Const @ByRef Expr lhs, diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AugAssignKind.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AugAssignKind.java index 31fb6cdf990..bccfae87888 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AugAssignKind.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AugAssignKind.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,6 +25,6 @@ public class AugAssignKind extends TreeView { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ public AugAssignKind(Pointer p) { super(p); } - public AugAssignKind(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public AugAssignKind(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AutoDispatchBelowADInplaceOrView.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AutoDispatchBelowADInplaceOrView.java index 5aeb3816687..dd1141a377e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AutoDispatchBelowADInplaceOrView.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AutoDispatchBelowADInplaceOrView.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AutoDispatchBelowAutograd.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AutoDispatchBelowAutograd.java index b7e076813aa..ac8dba407c3 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AutoDispatchBelowAutograd.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AutoDispatchBelowAutograd.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; 
import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AutoDispatchSkipFunctionalize.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AutoDispatchSkipFunctionalize.java index 4b1c2d77330..a3180ce4ed2 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AutoDispatchSkipFunctionalize.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AutoDispatchSkipFunctionalize.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AutoFwGradMode.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AutoFwGradMode.java index a18572b6266..2271db19fa2 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AutoFwGradMode.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AutoFwGradMode.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AutoGradMode.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AutoGradMode.java index ceb13776019..a12573fb230 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AutoGradMode.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AutoGradMode.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AutoNonVariableTypeMode.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AutoNonVariableTypeMode.java index 114e70da000..f2ab6ffeb5b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AutoNonVariableTypeMode.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AutoNonVariableTypeMode.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import 
org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AutogradCompilerCall.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AutogradCompilerCall.java new file mode 100644 index 00000000000..6a5e475e9f5 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AutogradCompilerCall.java @@ -0,0 +1,50 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +@Namespace("torch::dynamo::autograd") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class AutogradCompilerCall extends Pointer { + static { Loader.load(); } + /** Default native constructor. */ + public AutogradCompilerCall() { super((Pointer)null); allocate(); } + /** Native array allocator. Access with {@link Pointer#position(long)}. */ + public AutogradCompilerCall(long size) { super((Pointer)null); allocateArray(size); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ + public AutogradCompilerCall(Pointer p) { super(p); } + private native void allocate(); + private native void allocateArray(long size); + @Override public AutogradCompilerCall position(long position) { + return (AutogradCompilerCall)super.position(position); + } + @Override public AutogradCompilerCall getPointer(long i) { + return new AutogradCompilerCall((Pointer)this).offsetAddress(i); + } + + public native void add_size_input(@Const @ByRef SymInt s); + + public native @Cast("size_t") long emplace_hook(@ByRef(true) SafePyObject fn); + + public native @ByRef @NoOffset TensorArgs tensor_args(); public native AutogradCompilerCall tensor_args(TensorArgs setter); + public native @StdVector @NoOffset SizeInput all_size_inputs(); public native AutogradCompilerCall all_size_inputs(SizeInput setter); + public native @ByRef @Cast("std::vector*") @NoOffset LongVector dyn_size_inputs(); public native AutogradCompilerCall dyn_size_inputs(LongVector setter); + + public native @ByRef @NoOffset NodeCalls node_calls(); public native AutogradCompilerCall node_calls(NodeCalls setter); + public native @NoOffset SizeInput.DynType default_dyn_type(); public native AutogradCompilerCall default_dyn_type(SizeInput.DynType setter); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AutogradContext.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AutogradContext.java index d7cc6f65b8c..18f6e7a5b8d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AutogradContext.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AutogradContext.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AutogradMetaFactory.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AutogradMetaFactory.java index 3581dfb6461..a5ae51a0a4e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AutogradMetaFactory.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AutogradMetaFactory.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AutogradMetaFactoryRegisterer.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AutogradMetaFactoryRegisterer.java index 74e14258664..f0979bd6e8f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AutogradMetaFactoryRegisterer.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AutogradMetaFactoryRegisterer.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import 
org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AutogradMetaInterface.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AutogradMetaInterface.java index 368bb234eab..e6cc1db93ac 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AutogradMetaInterface.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AutogradMetaInterface.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AutogradState.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AutogradState.java index de586a29db0..3d1a22d889c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AutogradState.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AutogradState.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool1dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool1dImpl.java index 1937194bbdf..cd9abc598e5 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool1dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool1dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ AvgPool1d ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies avgpool over a 1-D input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.AvgPool1d to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.AvgPool1d to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::AvgPool1dOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool1dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool1dImplBase.java index 38ce73924f6..97dc53dedfc 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool1dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool1dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool1dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool1dImplCloneable.java index 2c37a94eef8..7dae207aa06 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool1dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool1dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class AvgPool1dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool1dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool1dOptions.java index e816c99a187..da00ecd275f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool1dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool1dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool2dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool2dImpl.java index af2351a0914..cc55427c2d9 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool2dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool2dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ AvgPool2d ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies avgpool over a 2-D input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.AvgPool2d to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.AvgPool2d to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::AvgPool2dOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool2dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool2dImplBase.java index b5c44b68be7..0081df083fe 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool2dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool2dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool2dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool2dImplCloneable.java index eeddd68607b..9426de9839a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool2dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool2dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class AvgPool2dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool2dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool2dOptions.java index 88055b7067e..d94d57461bd 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool2dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool2dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool3dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool3dImpl.java index 3d88a6167c8..ffe9ef787cd 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool3dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool3dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ AvgPool3d ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies avgpool over a 3-D input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.AvgPool3d to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.AvgPool3d to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::AvgPool3dOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool3dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool3dImplBase.java index e1221202c68..47ebb774792 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool3dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool3dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool3dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool3dImplCloneable.java index b21dd9618ea..49b36b2077f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool3dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool3dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class AvgPool3dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool3dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool3dOptions.java index 42d125c2084..e6755f12e64 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool3dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool3dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Await.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Await.java index 9b8f7335da5..40c7173b29b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Await.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Await.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AwaitPtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AwaitPtr.java deleted file mode 100644 index 23152f289e3..00000000000 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AwaitPtr.java +++ /dev/null @@ -1,154 +0,0 @@ -// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE - -package org.bytedeco.pytorch; - -import org.bytedeco.pytorch.Allocator; -import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; -import org.bytedeco.pytorch.Module; -import org.bytedeco.javacpp.annotation.Cast; -import java.nio.*; -import org.bytedeco.javacpp.*; -import org.bytedeco.javacpp.annotation.*; - -import static org.bytedeco.javacpp.presets.javacpp.*; -import static org.bytedeco.openblas.global.openblas_nolapack.*; -import static org.bytedeco.openblas.global.openblas.*; - -import static org.bytedeco.pytorch.global.torch.*; - - -@Name("c10::intrusive_ptr") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class AwaitPtr extends Pointer { - static { Loader.load(); } - /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ - public AwaitPtr(Pointer p) { super(p); } - /** Native array allocator. Access with {@link Pointer#position(long)}. 
*/ - public AwaitPtr(long size) { super((Pointer)null); allocateArray(size); } - private native void allocateArray(long size); - @Override public AwaitPtr position(long position) { - return (AwaitPtr)super.position(position); - } - @Override public AwaitPtr getPointer(long i) { - return new AwaitPtr((Pointer)this).offsetAddress(i); - } - - - public AwaitPtr() { super((Pointer)null); allocate(); } - @NoException(true) private native void allocate(); - - public AwaitPtr(@ByVal @Cast("std::nullptr_t*") PointerPointer arg0) { super((Pointer)null); allocate(arg0); } - @NoException(true) private native void allocate(@ByVal @Cast("std::nullptr_t*") PointerPointer arg0); - - // This constructor will not increase the ref counter for you. - // We use the tagged dispatch mechanism to explicitly mark this constructor - // to not increase the refcount - public AwaitPtr(Await target, @ByVal DontIncreaseRefcount arg1) { super((Pointer)null); allocate(target, arg1); } - @NoException(true) private native void allocate(Await target, @ByVal DontIncreaseRefcount arg1); - - - - public AwaitPtr(@ByRef(true) AwaitPtr rhs) { super((Pointer)null); allocate(rhs); } - @NoException(true) private native void allocate(@ByRef(true) AwaitPtr rhs); - - public native @ByRef @Name("operator =") @NoException(true) AwaitPtr put(@ByRef(true) AwaitPtr rhs); - - // Assignment is implemented using copy and swap. That's safe for self - // assignment. - // NOLINTNEXTLINE(bugprone-unhandled-self-assignment) - - public native @NoException(true) Await get(); - - public native @ByRef @Name("operator *") @NoException(true) Await multiply(); - - public native @Name("operator ->") @NoException(true) Await access(); - - public native @Cast("bool") @Name("operator bool") @NoException(true) boolean asBoolean(); - - public native @NoException(true) void reset(); - - public native @NoException(true) void swap(@ByRef AwaitPtr rhs); - - // We do a lot of null-pointer checks in our code, good to have this be cheap. - public native @Cast("bool") @NoException(true) boolean defined(); - - public native @Cast("uint32_t") @NoException(true) int use_count(); - - public native @Cast("uint32_t") @NoException(true) int weak_use_count(); - - public native @Cast("bool") @NoException(true) boolean unique(); - - /** - * Returns an owning (!) pointer to the underlying object and makes the - * intrusive_ptr instance invalid. That means the refcount is not decreased. - * You *must* put the returned pointer back into a intrusive_ptr using - * intrusive_ptr::reclaim(ptr) to properly destruct it. - * This is helpful for C APIs. - */ - public native @NoException(true) Await release(); - - /** - * Takes an owning pointer to TTarget* and creates an intrusive_ptr that takes - * over ownership. That means the refcount is not increased. - * This is the counter-part to intrusive_ptr::release() and the pointer - * passed in *must* have been created using intrusive_ptr::release(). - */ - public static native @ByVal AwaitPtr reclaim(Await owning_ptr); - - /** - * Takes an owning pointer to TTarget* and creates an intrusive_ptr - * representing a new reference, i.e. the raw pointer retains - * ownership. - */ - public static native @ByVal AwaitPtr reclaim_copy(Await owning_ptr); - - /** - * Allocate a heap object with args and wrap it inside a intrusive_ptr and - * incref. This is a helper function to let make_intrusive() access private - * intrusive_ptr constructors. - */ - - /** - * Turn a new instance of TTarget (e.g., literally allocated - * using new TTarget(...) 
into an intrusive_ptr. If possible, - * use intrusive_ptr::make instead which statically guarantees - * that the allocation was done properly. - * - * At the moment, the only reason this method exists is because - * pybind11 holder types expect to be able to allocate in - * this way (because pybind11 handles the new allocation itself). - */ - public static native @ByVal AwaitPtr unsafe_steal_from_new(Await raw_ptr); - - /** - * Turn an instance of TTarget that should not be reference counted - * (e.g., allocated into an arena with placement new) into an - * intrusive_ptr. This is gratuitously unsafe and should only be - * used if you can guarantee that the pointer will not escape and be - * refcounted as normal. - * - * {@code expected_decrefs} is a debugging parameter: it indicates the - * number of strong owners the intrusive_ptr_target in question is - * expected to get. In most use cases, this will likely be 1. - * - * The reason this method exists is for manually sharing - * StorageImpls across Tensors in the static runtime. It needs - * access to private intrusive_ptr members so that the refcounts can - * be initialized to custom values. - */ - public static native @ByVal AwaitPtr unsafe_adapt_non_heap_allocated( - Await raw_ptr, - @Cast("uint32_t") int expected_decrefs); - - /** - * Turn a **non-owning raw pointer** to an intrusive_ptr. It is - * the moral equivalent of enable_shared_from_this on a shared pointer. - * - * This method is only valid for objects that are already live. If - * you are looking for the moral equivalent of unique_ptr(T*) - * constructor, see steal_from_new. - * - * TODO: https://github.com/pytorch/pytorch/issues/56482 - */ - public static native @ByVal AwaitPtr unsafe_reclaim_from_nonowning(Await raw_ptr); -} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AwaitSingleElementType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AwaitSingleElementType.java index f39c07a2a75..f93b2487d9a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AwaitSingleElementType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AwaitSingleElementType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/AwaitType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/AwaitType.java index 702535a3092..a1cd22d31ec 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/AwaitType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/AwaitType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff 
--git a/pytorch/src/gen/java/org/bytedeco/pytorch/BCELossImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BCELossImpl.java index 302df9666a5..a18cb0fb6bb 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BCELossImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BCELossImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -22,7 +23,7 @@ /** Creates a criterion that measures the Binary Cross Entropy * between the target and the output. - * See https://pytorch.org/docs/master/nn.html#torch.nn.BCELoss to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.BCELoss to learn * about the exact behavior of this module. * * See the documentation for {@code torch::nn::BCELossOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BCELossImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BCELossImplCloneable.java index 6ec0eb00023..6c4b5a0eb53 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BCELossImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BCELossImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class BCELossImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BCELossOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BCELossOptions.java index 5a32a0cb0a3..a8f8f0659a4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BCELossOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BCELossOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BCEWithLogitsLossImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BCEWithLogitsLossImpl.java index b4feec62624..a990d43355c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BCEWithLogitsLossImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BCEWithLogitsLossImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -25,7 +26,7 @@ * class. This version is more numerically stable than using a plain {@code Sigmoid} * followed by a {@code BCELoss} as, by combining the operations into one layer, * we take advantage of the log-sum-exp trick for numerical stability. - * See https://pytorch.org/docs/master/nn.html#torch.nn.BCEWithLogitsLoss to + * See https://pytorch.org/docs/main/nn.html#torch.nn.BCEWithLogitsLoss to * learn about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::BCEWithLogitsLossOptions} class to diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BCEWithLogitsLossImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BCEWithLogitsLossImplCloneable.java index b4cea865ca5..b1b8dcbec0d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BCEWithLogitsLossImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BCEWithLogitsLossImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class BCEWithLogitsLossImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BCEWithLogitsLossOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BCEWithLogitsLossOptions.java index 27b96ac34f0..000d31327d5 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BCEWithLogitsLossOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BCEWithLogitsLossOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BFloat16.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BFloat16.java index f7e51e10ccc..4a93c34caac 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BFloat16.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BFloat16.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace detail diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BFloat16ArrayRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BFloat16ArrayRef.java index a97a23d3891..301469b4a0f 100644 --- 
a/pytorch/src/gen/java/org/bytedeco/pytorch/BFloat16ArrayRef.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BFloat16ArrayRef.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BackendMeta.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BackendMeta.java index f7a8ea37cf4..84709cae4ac 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BackendMeta.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BackendMeta.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -31,19 +32,10 @@ public class BackendMeta extends Pointer { static { Loader.load(); } /** Default native constructor. */ public BackendMeta() { super((Pointer)null); allocate(); } - /** Native array allocator. Access with {@link Pointer#position(long)}. */ - public BackendMeta(long size) { super((Pointer)null); allocateArray(size); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public BackendMeta(Pointer p) { super(p); } - private native void allocate(); - private native void allocateArray(long size); - @Override public BackendMeta position(long position) { - return (BackendMeta)super.position(position); - } - @Override public BackendMeta getPointer(long i) { - return new BackendMeta((Pointer)this).offsetAddress(i); - } - - public native @ByVal BackendMetaRef clone( - @Const @ByRef BackendMetaRef ptr); + @IntrusivePtr @Name("c10::make_intrusive") private native void allocate(); + + public native @IntrusivePtr("c10::BackendMeta") @Cast({"", "c10::intrusive_ptr&"}) BackendMeta clone( + @IntrusivePtr("c10::BackendMeta") @Cast({"", "c10::intrusive_ptr&"}) BackendMeta ptr); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BackendMetaRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BackendMetaRef.java deleted file mode 100644 index 25a75ec8ca8..00000000000 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BackendMetaRef.java +++ /dev/null @@ -1,154 +0,0 @@ -// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE - -package org.bytedeco.pytorch; - -import org.bytedeco.pytorch.Allocator; -import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; -import org.bytedeco.pytorch.Module; -import org.bytedeco.javacpp.annotation.Cast; -import java.nio.*; -import org.bytedeco.javacpp.*; -import org.bytedeco.javacpp.annotation.*; - -import static org.bytedeco.javacpp.presets.javacpp.*; -import static org.bytedeco.openblas.global.openblas_nolapack.*; -import static org.bytedeco.openblas.global.openblas.*; - -import static org.bytedeco.pytorch.global.torch.*; - - -@Name("c10::intrusive_ptr") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class BackendMetaRef extends Pointer { - static { Loader.load(); } - /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ - public BackendMetaRef(Pointer p) { super(p); } - /** Native array allocator. Access with {@link Pointer#position(long)}. */ - public BackendMetaRef(long size) { super((Pointer)null); allocateArray(size); } - private native void allocateArray(long size); - @Override public BackendMetaRef position(long position) { - return (BackendMetaRef)super.position(position); - } - @Override public BackendMetaRef getPointer(long i) { - return new BackendMetaRef((Pointer)this).offsetAddress(i); - } - - - public BackendMetaRef() { super((Pointer)null); allocate(); } - @NoException(true) private native void allocate(); - - public BackendMetaRef(@ByVal @Cast("std::nullptr_t*") PointerPointer arg0) { super((Pointer)null); allocate(arg0); } - @NoException(true) private native void allocate(@ByVal @Cast("std::nullptr_t*") PointerPointer arg0); - - // This constructor will not increase the ref counter for you. - // We use the tagged dispatch mechanism to explicitly mark this constructor - // to not increase the refcount - public BackendMetaRef(BackendMeta target, @ByVal DontIncreaseRefcount arg1) { super((Pointer)null); allocate(target, arg1); } - @NoException(true) private native void allocate(BackendMeta target, @ByVal DontIncreaseRefcount arg1); - - - - public BackendMetaRef(@ByRef(true) BackendMetaRef rhs) { super((Pointer)null); allocate(rhs); } - @NoException(true) private native void allocate(@ByRef(true) BackendMetaRef rhs); - - public native @ByRef @Name("operator =") @NoException(true) BackendMetaRef put(@ByRef(true) BackendMetaRef rhs); - - // Assignment is implemented using copy and swap. That's safe for self - // assignment. 
- // NOLINTNEXTLINE(bugprone-unhandled-self-assignment) - - public native @NoException(true) BackendMeta get(); - - public native @ByRef @Name("operator *") @NoException(true) BackendMeta multiply(); - - public native @Name("operator ->") @NoException(true) BackendMeta access(); - - public native @Cast("bool") @Name("operator bool") @NoException(true) boolean asBoolean(); - - public native @NoException(true) void reset(); - - public native @NoException(true) void swap(@ByRef BackendMetaRef rhs); - - // We do a lot of null-pointer checks in our code, good to have this be cheap. - public native @Cast("bool") @NoException(true) boolean defined(); - - public native @Cast("uint32_t") @NoException(true) int use_count(); - - public native @Cast("uint32_t") @NoException(true) int weak_use_count(); - - public native @Cast("bool") @NoException(true) boolean unique(); - - /** - * Returns an owning (!) pointer to the underlying object and makes the - * intrusive_ptr instance invalid. That means the refcount is not decreased. - * You *must* put the returned pointer back into a intrusive_ptr using - * intrusive_ptr::reclaim(ptr) to properly destruct it. - * This is helpful for C APIs. - */ - public native @NoException(true) BackendMeta release(); - - /** - * Takes an owning pointer to TTarget* and creates an intrusive_ptr that takes - * over ownership. That means the refcount is not increased. - * This is the counter-part to intrusive_ptr::release() and the pointer - * passed in *must* have been created using intrusive_ptr::release(). - */ - public static native @ByVal BackendMetaRef reclaim(BackendMeta owning_ptr); - - /** - * Takes an owning pointer to TTarget* and creates an intrusive_ptr - * representing a new reference, i.e. the raw pointer retains - * ownership. - */ - public static native @ByVal BackendMetaRef reclaim_copy(BackendMeta owning_ptr); - - /** - * Allocate a heap object with args and wrap it inside a intrusive_ptr and - * incref. This is a helper function to let make_intrusive() access private - * intrusive_ptr constructors. - */ - - /** - * Turn a new instance of TTarget (e.g., literally allocated - * using new TTarget(...) into an intrusive_ptr. If possible, - * use intrusive_ptr::make instead which statically guarantees - * that the allocation was done properly. - * - * At the moment, the only reason this method exists is because - * pybind11 holder types expect to be able to allocate in - * this way (because pybind11 handles the new allocation itself). - */ - public static native @ByVal BackendMetaRef unsafe_steal_from_new(BackendMeta raw_ptr); - - /** - * Turn an instance of TTarget that should not be reference counted - * (e.g., allocated into an arena with placement new) into an - * intrusive_ptr. This is gratuitously unsafe and should only be - * used if you can guarantee that the pointer will not escape and be - * refcounted as normal. - * - * {@code expected_decrefs} is a debugging parameter: it indicates the - * number of strong owners the intrusive_ptr_target in question is - * expected to get. In most use cases, this will likely be 1. - * - * The reason this method exists is for manually sharing - * StorageImpls across Tensors in the static runtime. It needs - * access to private intrusive_ptr members so that the refcounts can - * be initialized to custom values. 
- */ - public static native @ByVal BackendMetaRef unsafe_adapt_non_heap_allocated( - BackendMeta raw_ptr, - @Cast("uint32_t") int expected_decrefs); - - /** - * Turn a **non-owning raw pointer** to an intrusive_ptr. It is - * the moral equivalent of enable_shared_from_this on a shared pointer. - * - * This method is only valid for objects that are already live. If - * you are looking for the moral equivalent of unique_ptr(T*) - * constructor, see steal_from_new. - * - * TODO: https://github.com/pytorch/pytorch/issues/56482 - */ - public static native @ByVal BackendMetaRef unsafe_reclaim_from_nonowning(BackendMeta raw_ptr); -} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DistBackendError.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Backtrace.java similarity index 63% rename from pytorch/src/gen/java/org/bytedeco/pytorch/DistBackendError.java rename to pytorch/src/gen/java/org/bytedeco/pytorch/Backtrace.java index 99321bfa0d8..aa665fde989 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DistBackendError.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Backtrace.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,16 +13,21 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -// Used for collective communication library errors from the distributed module. -// These turn into DistBackendError when they cross into Python. -@Namespace("c10") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class DistBackendError extends DistError { +/** + * Interface for a value that is computed on first access. + */ +@Name("c10::LazyValue") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class Backtrace extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ - public DistBackendError(Pointer p) { super(p); } + public Backtrace(Pointer p) { super(p); } + + public native @StdString BytePointer get(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BarrierOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BarrierOptions.java new file mode 100644 index 00000000000..fcff36ddb20 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BarrierOptions.java @@ -0,0 +1,43 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +@Namespace("c10d") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class BarrierOptions extends Pointer { + static { Loader.load(); } + /** Default native constructor. */ + public BarrierOptions() { super((Pointer)null); allocate(); } + /** Native array allocator. Access with {@link Pointer#position(long)}. */ + public BarrierOptions(long size) { super((Pointer)null); allocateArray(size); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public BarrierOptions(Pointer p) { super(p); } + private native void allocate(); + private native void allocateArray(long size); + @Override public BarrierOptions position(long position) { + return (BarrierOptions)super.position(position); + } + @Override public BarrierOptions getPointer(long i) { + return new BarrierOptions((Pointer)this).offsetAddress(i); + } + + public native @ByRef @Cast("std::vector*") LongVector device_ids(); public native BarrierOptions device_ids(LongVector setter); + public native @ByRef Milliseconds timeout(); public native BarrierOptions timeout(Milliseconds setter); + public native @ByRef DeviceOptional device(); public native BarrierOptions device(DeviceOptional setter); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm1dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm1dImpl.java index ef67cd0b4aa..16d4e54b089 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm1dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm1dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -22,7 +23,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies the BatchNorm1d function. - * See https://pytorch.org/docs/master/nn.html#torch.nn.BatchNorm1d to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.BatchNorm1d to learn * about the exact behavior of this module. 
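The c10d::BarrierOptions binding added above stores its timeout through the new org.bytedeco.javacpp.chrono imports; a hedged sketch of filling it, assuming the chrono preset's Milliseconds wrapper exposes a long-valued constructor and a count() accessor:

    import org.bytedeco.javacpp.chrono.Milliseconds;
    import org.bytedeco.pytorch.BarrierOptions;

    public class BarrierOptionsSketch {
        public static void main(String[] args) {
            BarrierOptions opts = new BarrierOptions();
            opts.timeout(new Milliseconds(30_000));      // 30 s, stored as std::chrono::milliseconds
            System.out.println(opts.timeout().count());  // prints 30000 if the duration round-trips
        }
    }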
* * See the documentation for {@code torch::nn::BatchNorm1dOptions} class to learn diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm1dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm1dImplBase.java index bdc37ef9e6e..d75c5d00853 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm1dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm1dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm1dImplBaseBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm1dImplBaseBase.java index b4ae722bcce..0fa03b8821d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm1dImplBaseBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm1dImplBaseBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm1dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm1dImplCloneable.java index 3abc8ed6f4f..a487c9c2454 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm1dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm1dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class BatchNorm1dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm2dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm2dImpl.java index dca3bbd4e46..f14ed975382 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm2dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm2dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -22,7 +23,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies the BatchNorm2d function. - * See https://pytorch.org/docs/master/nn.html#torch.nn.BatchNorm2d to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.BatchNorm2d to learn * about the exact behavior of this module. * * See the documentation for {@code torch::nn::BatchNorm2dOptions} class to learn diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm2dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm2dImplBase.java index 7cee0bbb592..cf2d9738174 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm2dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm2dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm2dImplBaseBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm2dImplBaseBase.java index 098f0c726bc..aedc46fefd1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm2dImplBaseBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm2dImplBaseBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm2dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm2dImplCloneable.java index 
5b4486f119c..e07f2e7eb04 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm2dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm2dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class BatchNorm2dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm3dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm3dImpl.java index 4c53a2502d6..57f2905105e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm3dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm3dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -22,7 +23,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies the BatchNorm3d function. - * See https://pytorch.org/docs/master/nn.html#torch.nn.BatchNorm3d to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.BatchNorm3d to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::BatchNorm3dOptions} class to learn diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm3dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm3dImplBase.java index aa430c11fbd..f4e87004a98 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm3dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm3dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm3dImplBaseBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm3dImplBaseBase.java index 51b0783d20e..994c2b224bd 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm3dImplBaseBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm3dImplBaseBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm3dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm3dImplCloneable.java index 1ffac3653fa..0e0c66b1df1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm3dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNorm3dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class BatchNorm3dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNormFuncOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNormFuncOptions.java index 82a33aee581..2166af8925b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNormFuncOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNormFuncOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNormOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNormOptions.java index d2ebc2c0b58..e1012b0546b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNormOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BatchNormOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BatchSize.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BatchSize.java index b0b0957520b..03b7ba6eb65 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BatchSize.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BatchSize.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BatchSizeOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BatchSizeOptional.java index 526bfd696df..2bae44b8924 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BatchSizeOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BatchSizeOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import 
static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class BatchSizeOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BatchSizeSampler.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BatchSizeSampler.java index a87bb33cff1..0a4b821dfcf 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BatchSizeSampler.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BatchSizeSampler.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BilinearImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BilinearImpl.java index 2ff1d54f60c..2922fa50ae2 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BilinearImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BilinearImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Bilinear ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies a billinear transformation with optional bias. - * See https://pytorch.org/docs/master/generated/torch.nn.Bilinear.html to + * See https://pytorch.org/docs/main/generated/torch.nn.Bilinear.html to * learn about the exact behavior of this module. 
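The optional wrappers renamed above (BatchSizeOptional, BoolOptional, and the rest) keep their Java-side API; only the native @Name mapping moves from c10::optional to std::optional. A sketch under the assumption that these generated classes still expose the usual value constructor, has_value(), and get() accessors:

    import org.bytedeco.pytorch.LongOptional;

    public class OptionalSketch {
        public static void main(String[] args) {
            LongOptional version = new LongOptional(3);  // now maps to std::optional<int64_t>
            if (version.has_value()) {
                System.out.println(version.get());
            }
        }
    }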
* * See the documentation for {@code torch::nn::BilinearOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BilinearImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BilinearImplCloneable.java index 20f2c3784bf..7bda99227c6 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BilinearImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BilinearImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class BilinearImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BilinearOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BilinearOptions.java index d438dfc0b6d..2004b2917aa 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BilinearOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BilinearOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BinOp.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BinOp.java index 9dd22d383c1..87faabf3beb 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BinOp.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BinOp.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -28,8 +29,8 @@ public class BinOp extends Expr { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public BinOp(Pointer p) { super(p); } - public BinOp(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public BinOp(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @ByVal Expr lhs(); public native @ByVal Expr rhs(); public static native @ByVal BinOp create( diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Blob.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Blob.java index 7e38b95d478..2c8827ff6b4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Blob.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Blob.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -111,5 +112,5 @@ public class Blob extends Pointer { /** * \brief Swaps the underlying storage of two blobs. */ - public native void swap(@ByRef Blob rhs); + public native @NoException(true) void swap(@ByRef Blob rhs); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Block.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Block.java index 6528e789332..426e6e02565 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Block.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Block.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BlockArrayRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BlockArrayRef.java index 08f07737a0f..b262fac0d36 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BlockArrayRef.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BlockArrayRef.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BlockWrap.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BlockWrap.java index 42eabf3ad02..17e7d549b46 100644 --- 
a/pytorch/src/gen/java/org/bytedeco/pytorch/BlockWrap.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BlockWrap.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BoolArrayRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BoolArrayRef.java index 9830abb5b28..7c77931ea39 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BoolArrayRef.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BoolArrayRef.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BoolOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BoolOptional.java index 59675b1de16..3c8f93b5e49 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BoolOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BoolOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class BoolOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BoolType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BoolType.java index 1f3e6816cbd..849eb63ce57 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BoolType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BoolType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BoolTypePtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BoolTypePtr.java index 6cdbdc83f8f..a0a9b3a840b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BoolTypePtr.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BoolTypePtr.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BoolVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BoolVector.java index 1b9db184bef..d7c4d9d711d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BoolVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BoolVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BoolVectorOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BoolVectorOptional.java index 5c0ae0ddcc1..1c77676bc75 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BoolVectorOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BoolVectorOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional >") @Properties(inherit = 
org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class BoolVectorOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BooleanElementReference.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BooleanElementReference.java index 024c78a2095..af89c7f720f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BooleanElementReference.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BooleanElementReference.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,7 +25,7 @@ public class BooleanElementReference extends Pointer { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ public BooleanElementReference(Pointer p) { super(p); } - public native @Name("operator std::conditional_t::type>::value,const bool&,bool>") boolean getBoolean(); + public native @Name("operator std::conditional_t::type>,const bool&,bool>") boolean getBoolean(); @@ -35,7 +36,7 @@ public class BooleanElementReference extends Pointer { public native @Const @ByRef IValue get(); - private static native @Namespace void swap(@ByRef(true) BooleanElementReference lhs, @ByRef(true) BooleanElementReference rhs); + private static native @Namespace @NoException(true) void swap(@ByRef(true) BooleanElementReference lhs, @ByRef(true) BooleanElementReference rhs); public void swap(BooleanElementReference rhs) { swap(this, rhs); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BooleanList.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BooleanList.java index 64488385c2c..6b45f706ea3 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BooleanList.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BooleanList.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@Name("c10::List") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@Name("c10::List") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class BooleanList extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BooleanListIterator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BooleanListIterator.java index 8aac364c51c..cbba0d4f792 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BooleanListIterator.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BooleanListIterator.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Break.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Break.java index 311a86ff665..809c06179d5 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Break.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Break.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,7 +25,7 @@ public class Break extends Stmt { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public Break(Pointer p) { super(p); } - public Break(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public Break(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public static native @ByVal Break create(@Const @ByRef SourceRange range); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BroadcastOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BroadcastOptions.java new file mode 100644 index 00000000000..1d27413a439 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BroadcastOptions.java @@ -0,0 +1,44 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +@Namespace("c10d") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class BroadcastOptions extends Pointer { + static { Loader.load(); } + /** Default native constructor. */ + public BroadcastOptions() { super((Pointer)null); allocate(); } + /** Native array allocator. Access with {@link Pointer#position(long)}. */ + public BroadcastOptions(long size) { super((Pointer)null); allocateArray(size); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ + public BroadcastOptions(Pointer p) { super(p); } + private native void allocate(); + private native void allocateArray(long size); + @Override public BroadcastOptions position(long position) { + return (BroadcastOptions)super.position(position); + } + @Override public BroadcastOptions getPointer(long i) { + return new BroadcastOptions((Pointer)this).offsetAddress(i); + } + + public native @Cast("int64_t") long rootRank(); public native BroadcastOptions rootRank(long setter); + public native @Cast("int64_t") long rootTensor(); public native BroadcastOptions rootTensor(long setter); + public native @ByRef Milliseconds timeout(); public native BroadcastOptions timeout(Milliseconds setter); + public native @Cast("bool") boolean asyncOp(); public native BroadcastOptions asyncOp(boolean setter); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BucketAccumulator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BucketAccumulator.java new file mode 100644 index 00000000000..98f4304e71c --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BucketAccumulator.java @@ -0,0 +1,44 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +// Local accumulator type for a single bucket. +@Namespace("c10d") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class BucketAccumulator extends Pointer { + static { Loader.load(); } + /** Default native constructor. */ + public BucketAccumulator() { super((Pointer)null); allocate(); } + /** Native array allocator. Access with {@link Pointer#position(long)}. */ + public BucketAccumulator(long size) { super((Pointer)null); allocateArray(size); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ + public BucketAccumulator(Pointer p) { super(p); } + private native void allocate(); + private native void allocateArray(long size); + @Override public BucketAccumulator position(long position) { + return (BucketAccumulator)super.position(position); + } + @Override public BucketAccumulator getPointer(long i) { + return new BucketAccumulator((Pointer)this).offsetAddress(i); + } + + public native @ByRef @Cast("std::vector*") SizeTVector indices(); public native BucketAccumulator indices(SizeTVector setter); + public native @Cast("size_t") long size(); public native BucketAccumulator size(long setter); + public native @Cast("size_t") long size_limit(); public native BucketAccumulator size_limit(long setter); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BufferPolicy.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BufferPolicy.java index d7eacdbc360..ccf8cdb69e4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BufferPolicy.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BufferPolicy.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BuiltinFunction.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BuiltinFunction.java index 5f9bf340946..e405a3a8d2a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BuiltinFunction.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BuiltinFunction.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BuiltinModule.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BuiltinModule.java index 097921b400e..bb6afe3ba45 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BuiltinModule.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BuiltinModule.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,12 +25,12 @@ public class BuiltinModule extends SugaredValue { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
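BroadcastOptions and BucketAccumulator, also new above, are plain field holders for the distributed package; a sketch of configuring a broadcast from rank 0, under the same chrono assumption as the barrier example:

    import org.bytedeco.javacpp.chrono.Milliseconds;
    import org.bytedeco.pytorch.BroadcastOptions;

    public class BroadcastOptionsSketch {
        public static void main(String[] args) {
            BroadcastOptions opts = new BroadcastOptions();
            opts.rootRank(0);       // rank that holds the source tensor
            opts.rootTensor(0);     // index of the tensor to broadcast
            opts.asyncOp(false);    // wait for the collective to finish
            opts.timeout(new Milliseconds(60_000));
        }
    }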
*/ public BuiltinModule(Pointer p) { super(p); } - public BuiltinModule(@StdString BytePointer name, @ByVal(nullValue = "c10::optional(at::nullopt)") LongOptional version) { super((Pointer)null); allocate(name, version); } - private native void allocate(@StdString BytePointer name, @ByVal(nullValue = "c10::optional(at::nullopt)") LongOptional version); + public BuiltinModule(@StdString BytePointer name, @ByVal(nullValue = "std::optional(at::nullopt)") LongOptional version) { super((Pointer)null); allocate(name, version); } + private native void allocate(@StdString BytePointer name, @ByVal(nullValue = "std::optional(at::nullopt)") LongOptional version); public BuiltinModule(@StdString BytePointer name) { super((Pointer)null); allocate(name); } private native void allocate(@StdString BytePointer name); - public BuiltinModule(@StdString String name, @ByVal(nullValue = "c10::optional(at::nullopt)") LongOptional version) { super((Pointer)null); allocate(name, version); } - private native void allocate(@StdString String name, @ByVal(nullValue = "c10::optional(at::nullopt)") LongOptional version); + public BuiltinModule(@StdString String name, @ByVal(nullValue = "std::optional(at::nullopt)") LongOptional version) { super((Pointer)null); allocate(name, version); } + private native void allocate(@StdString String name, @ByVal(nullValue = "std::optional(at::nullopt)") LongOptional version); public BuiltinModule(@StdString String name) { super((Pointer)null); allocate(name); } private native void allocate(@StdString String name); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ByteArrayRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ByteArrayRef.java index 49d4a670633..d8c26caea26 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ByteArrayRef.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ByteArrayRef.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ByteOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ByteOptional.java index 829928771c5..0b3912ccbb4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ByteOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ByteOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class ByteOptional extends Pointer { static { 
Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BytePointerPair.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BytePointerPair.java index ec56895cfd0..6efd0f0d546 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BytePointerPair.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BytePointerPair.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BytePointerPairOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BytePointerPairOptional.java index 6d24b3b1f00..3e0d74da818 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BytePointerPairOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BytePointerPairOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class BytePointerPairOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/BytePointerVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/BytePointerVector.java index 5127b9f713c..e11ee212322 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/BytePointerVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/BytePointerVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ByteVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ByteVector.java new file mode 100644 index 00000000000..2012416254e --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ByteVector.java @@ -0,0 +1,91 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + +@Name("std::vector") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class ByteVector extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public ByteVector(Pointer p) { super(p); } + public ByteVector(byte value) { this(1); put(0, value); } + public ByteVector(byte ... 
array) { this(array.length); put(array); } + public ByteVector() { allocate(); } + public ByteVector(long n) { allocate(n); } + private native void allocate(); + private native void allocate(@Cast("size_t") long n); + public native @Name("operator =") @ByRef ByteVector put(@ByRef ByteVector x); + + public boolean empty() { return size() == 0; } + public native long size(); + public void clear() { resize(0); } + public native void resize(@Cast("size_t") long n); + + public byte front() { return get(0); } + public byte back() { return get(size() - 1); } + @Index(function = "at") public native @Cast("uint8_t") byte get(@Cast("size_t") long i); + public native ByteVector put(@Cast("size_t") long i, byte value); + + public native @ByVal Iterator insert(@ByVal Iterator pos, @Cast("uint8_t") byte value); + public native @ByVal Iterator erase(@ByVal Iterator pos); + public native @ByVal Iterator begin(); + public native @ByVal Iterator end(); + @NoOffset @Name("iterator") public static class Iterator extends Pointer { + public Iterator(Pointer p) { super(p); } + public Iterator() { } + + public native @Name("operator ++") @ByRef Iterator increment(); + public native @Name("operator ==") boolean equals(@ByRef Iterator it); + public native @Name("operator *") @Cast("uint8_t") byte get(); + } + + public byte[] get() { + byte[] array = new byte[size() < Integer.MAX_VALUE ? (int)size() : Integer.MAX_VALUE]; + for (int i = 0; i < array.length; i++) { + array[i] = get(i); + } + return array; + } + @Override public String toString() { + return java.util.Arrays.toString(get()); + } + + public byte pop_back() { + long size = size(); + byte value = get(size - 1); + resize(size - 1); + return value; + } + public ByteVector push_back(byte value) { + long size = size(); + resize(size + 1); + return put(size, value); + } + public ByteVector put(byte value) { + if (size() != 1) { resize(1); } + return put(0, value); + } + public ByteVector put(byte ... 
array) { + if (size() != array.length) { resize(array.length); } + for (int i = 0; i < array.length; i++) { + put(i, array[i]); + } + return this; + } +} + diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/C10FlagParser.java b/pytorch/src/gen/java/org/bytedeco/pytorch/C10FlagParser.java index 0175cf91c61..6f02e1ac17d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/C10FlagParser.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/C10FlagParser.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/C10dLogger.java b/pytorch/src/gen/java/org/bytedeco/pytorch/C10dLogger.java new file mode 100644 index 00000000000..c2e20f8344c --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/C10dLogger.java @@ -0,0 +1,36 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +@Namespace("c10d") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class C10dLogger extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ + public C10dLogger(Pointer p) { super(p); } + + public C10dLogger(@Const @ByRef C10dLogger arg0) { super((Pointer)null); allocate(arg0); } + private native void allocate(@Const @ByRef C10dLogger arg0); + + public native @ByRef @Name("operator =") C10dLogger put(@Const @ByRef C10dLogger arg0); + + public native void log(@Const @ByRef C10dLoggingData data); + public static native C10dLogger getLogger(); + public static native void registerLogger(@UniquePtr C10dLogger arg0); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/C10dLoggingData.java b/pytorch/src/gen/java/org/bytedeco/pytorch/C10dLoggingData.java new file mode 100644 index 00000000000..774b3f77f76 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/C10dLoggingData.java @@ -0,0 +1,47 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +// a generic logging data struct that holds different types of logging data. +// starting with key value pairs of strings and integers, +// It can be extended to more types as needed. +@Namespace("c10d") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class C10dLoggingData extends Pointer { + static { Loader.load(); } + /** Default native constructor. */ + public C10dLoggingData() { super((Pointer)null); allocate(); } + /** Native array allocator. Access with {@link Pointer#position(long)}. */ + public C10dLoggingData(long size) { super((Pointer)null); allocateArray(size); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public C10dLoggingData(Pointer p) { super(p); } + private native void allocate(); + private native void allocateArray(long size); + @Override public C10dLoggingData position(long position) { + return (C10dLoggingData)super.position(position); + } + @Override public C10dLoggingData getPointer(long i) { + return new C10dLoggingData((Pointer)this).offsetAddress(i); + } + + // logging fields that are string types. + public native @ByRef @NoOffset StringStringMap strings(); public native C10dLoggingData strings(StringStringMap setter); + // logging fields that are int64_t types. 
+ public native @ByRef @NoOffset StringLongMap integers(); public native C10dLoggingData integers(StringLongMap setter); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/CELUImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/CELUImpl.java index 09ceda6e127..0fcdac0fa95 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/CELUImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/CELUImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ CELU ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies celu over a given input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.CELU to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.CELU to learn * about the exact behavior of this module. * * See the documentation for {@code torch::nn::CELUOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/CELUImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/CELUImplCloneable.java index ca1bf9c15fd..c5755d11b03 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/CELUImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/CELUImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class CELUImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/CELUOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/CELUOptions.java index 433be6070af..3a8f7d874ee 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/CELUOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/CELUOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/CPUGeneratorImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/CPUGeneratorImpl.java index be3c5e63022..36feaf29019 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/CPUGeneratorImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/CPUGeneratorImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -38,7 +39,7 @@ public class CPUGeneratorImpl extends GeneratorImpl { public native @Cast("uint64_t") long current_seed(); public native @Cast("uint64_t") long seed(); public native void set_state(@Const @ByRef TensorImpl new_state); - public native @ByVal TensorImplPtr get_state(); + public native @IntrusivePtr("c10::TensorImpl") @Cast({"", "c10::intrusive_ptr&"}) TensorImpl get_state(); public static native DeviceType device_type(); public native @Cast("uint32_t") int random(); public native @Cast("uint64_t") long random64(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/CTCLossImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/CTCLossImpl.java index 72bfd641ff3..e46a1328190 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/CTCLossImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/CTCLossImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 
CTCLoss ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** The Connectionist Temporal Classification loss. - * See https://pytorch.org/docs/master/nn.html#torch.nn.CTCLoss to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.CTCLoss to learn * about the exact behavior of this module. * * See the documentation for {@code torch::nn::CTCLossOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/CTCLossImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/CTCLossImplCloneable.java index c15d39e20c2..a10426829c4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/CTCLossImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/CTCLossImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class CTCLossImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/CTCLossOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/CTCLossOptions.java index 910b3ff8d97..e27d982ac77 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/CTCLossOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/CTCLossOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/CUDAHooksArgs.java b/pytorch/src/gen/java/org/bytedeco/pytorch/CUDAHooksArgs.java index bda89287dc0..ff3aa24379c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/CUDAHooksArgs.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/CUDAHooksArgs.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git 
a/pytorch/src/gen/java/org/bytedeco/pytorch/CUDAHooksInterface.java b/pytorch/src/gen/java/org/bytedeco/pytorch/CUDAHooksInterface.java index 3663700ec5d..2ce28032170 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/CUDAHooksInterface.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/CUDAHooksInterface.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -76,6 +77,8 @@ public class CUDAHooksInterface extends AcceleratorHooksInterface { public native @Cast("bool") boolean hasCuSOLVER(); + public native @Cast("bool") boolean hasCuBLASLt(); + public native @Cast("bool") boolean hasROCM(); public native @Cast("const at::cuda::NVRTC*") @ByRef Pointer nvrtc(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/CacheKey.java b/pytorch/src/gen/java/org/bytedeco/pytorch/CacheKey.java new file mode 100644 index 00000000000..77dcaec0bcd --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/CacheKey.java @@ -0,0 +1,46 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +@Namespace("torch::dynamo::autograd") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class CacheKey extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public CacheKey(Pointer p) { super(p); } + + // Key to find the next node in the shadow graph. We use C++ RTTI for the + // type of the node (ntype), then a key generated with a visitor pattern. 
+ public CacheKey(@ByRef @Cast("std::type_index*") Pointer ntype, @Cast("const uint8_t*") BytePointer key, @Cast("uint16_t") short len) { super((Pointer)null); allocate(ntype, key, len); } + private native void allocate(@ByRef @Cast("std::type_index*") Pointer ntype, @Cast("const uint8_t*") BytePointer key, @Cast("uint16_t") short len); + public CacheKey(@ByRef @Cast("std::type_index*") Pointer ntype, @Cast("const uint8_t*") ByteBuffer key, @Cast("uint16_t") short len) { super((Pointer)null); allocate(ntype, key, len); } + private native void allocate(@ByRef @Cast("std::type_index*") Pointer ntype, @Cast("const uint8_t*") ByteBuffer key, @Cast("uint16_t") short len); + public CacheKey(@ByRef @Cast("std::type_index*") Pointer ntype, @Cast("const uint8_t*") byte[] key, @Cast("uint16_t") short len) { super((Pointer)null); allocate(ntype, key, len); } + private native void allocate(@ByRef @Cast("std::type_index*") Pointer ntype, @Cast("const uint8_t*") byte[] key, @Cast("uint16_t") short len); + + public native @Cast("bool") @Name("operator <") boolean lessThan(@Const @ByRef CacheKey other); + + public native @Cast("bool") @Name("operator ==") boolean equals(@Const @ByRef CacheKey other); + + public native @Cast("size_t") long hash(); + + public native @ByRef @Cast("std::type_index*") Pointer node_type(); public native CacheKey node_type(Pointer setter); + public native @Cast("uint16_t") short key_size(); public native CacheKey key_size(short setter); + public native @Cast("const uint8_t*") BytePointer key(); public native CacheKey key(BytePointer setter); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/CacheKeyBuffer.java b/pytorch/src/gen/java/org/bytedeco/pytorch/CacheKeyBuffer.java new file mode 100644 index 00000000000..1f75d286956 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/CacheKeyBuffer.java @@ -0,0 +1,35 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +@Namespace("torch::dynamo::autograd") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class CacheKeyBuffer extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ + public CacheKeyBuffer(Pointer p) { super(p); } + + public CacheKeyBuffer(@Cast("const uint8_t*") BytePointer key, @Cast("uint16_t") short len) { super((Pointer)null); allocate(key, len); } + private native void allocate(@Cast("const uint8_t*") BytePointer key, @Cast("uint16_t") short len); + public CacheKeyBuffer(@Cast("const uint8_t*") ByteBuffer key, @Cast("uint16_t") short len) { super((Pointer)null); allocate(key, len); } + private native void allocate(@Cast("const uint8_t*") ByteBuffer key, @Cast("uint16_t") short len); + public CacheKeyBuffer(@Cast("const uint8_t*") byte[] key, @Cast("uint16_t") short len) { super((Pointer)null); allocate(key, len); } + private native void allocate(@Cast("const uint8_t*") byte[] key, @Cast("uint16_t") short len); + public native @Cast("const uint8_t*") BytePointer get(); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Call.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Call.java index 4b0f13710ad..5d7f1eb2d34 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Call.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Call.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/CapsuleType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/CapsuleType.java index 6cdc616619f..e3bd8091f70 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/CapsuleType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/CapsuleType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/CapsuleTypePtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/CapsuleTypePtr.java index 02c266c4cce..072b05c45f4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/CapsuleTypePtr.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/CapsuleTypePtr.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/CastValue.java b/pytorch/src/gen/java/org/bytedeco/pytorch/CastValue.java 
index 485082ee80a..4164b986f9c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/CastValue.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/CastValue.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkBatchDataset.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkBatchDataset.java index 4bd639f1d2a..3756cba2e7b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkBatchDataset.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkBatchDataset.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@Name("torch::data::datasets::BatchDataset,c10::optional,size_t>") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@Name("torch::data::datasets::BatchDataset,std::optional,size_t>") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class ChunkBatchDataset extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkBatchSharedBatchDataset.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkBatchSharedBatchDataset.java index 372fe258744..225630df0ab 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkBatchSharedBatchDataset.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkBatchSharedBatchDataset.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@Name("torch::data::datasets::BatchDataset >,c10::optional,size_t>") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@Name("torch::data::datasets::BatchDataset >,std::optional,size_t>") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class ChunkBatchSharedBatchDataset extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkBatchSharedTensorBatchDataset.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkBatchSharedTensorBatchDataset.java index ceed73e0d0f..25f7ef669d2 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkBatchSharedTensorBatchDataset.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkBatchSharedTensorBatchDataset.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@Name("torch::data::datasets::BatchDataset >,c10::optional,size_t>") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@Name("torch::data::datasets::BatchDataset >,std::optional,size_t>") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class ChunkBatchSharedTensorBatchDataset extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkDataReader.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkDataReader.java index a95b3f7fbf2..0dbed3a8e6e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkDataReader.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkDataReader.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkDataset.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkDataset.java index 25de5e1c011..3b20daf8f56 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkDataset.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkDataset.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkDatasetOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkDatasetOptions.java index c7c7ea60241..b0c4ba70fc2 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkDatasetOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkDatasetOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; 
import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace detail diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkMapBatchDataset.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkMapBatchDataset.java index d90c8378d8b..3a8b2770d2f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkMapBatchDataset.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkMapBatchDataset.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkMapDataset.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkMapDataset.java index 76e2b21ce91..c0940c290dd 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkMapDataset.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkMapDataset.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkMapTensorBatchDataset.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkMapTensorBatchDataset.java index 191ed20c628..b0e5062563b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkMapTensorBatchDataset.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkMapTensorBatchDataset.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkMapTensorDataset.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkMapTensorDataset.java index 11471071c9c..ca3f190f057 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkMapTensorDataset.java +++ 
b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkMapTensorDataset.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkRandomDataLoader.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkRandomDataLoader.java index eff35875117..d157ad453ba 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkRandomDataLoader.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkRandomDataLoader.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkRandomDataLoaderBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkRandomDataLoaderBase.java index d2b3dd7d98e..0727473c78b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkRandomDataLoaderBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkRandomDataLoaderBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkRandomTensorDataLoader.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkRandomTensorDataLoader.java index 316fa2a369e..d584a2731a6 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkRandomTensorDataLoader.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkRandomTensorDataLoader.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkRandomTensorDataLoaderBase.java 
b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkRandomTensorDataLoaderBase.java index 771c1076a83..17b5fe6767c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkRandomTensorDataLoaderBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkRandomTensorDataLoaderBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkRecordIterator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkRecordIterator.java index aa22654d8d7..33833dead5f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkRecordIterator.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkRecordIterator.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkSharedBatchDataset.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkSharedBatchDataset.java index 098e8c2c591..d3875745ed8 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkSharedBatchDataset.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkSharedBatchDataset.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkSharedTensorBatchDataset.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkSharedTensorBatchDataset.java index 1daed38af62..ca2d5b80381 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkSharedTensorBatchDataset.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkSharedTensorBatchDataset.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static 
org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkStatefulDataset.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkStatefulDataset.java index 48649299a16..52e656ff8fd 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkStatefulDataset.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkStatefulDataset.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkStatefulTensorDataset.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkStatefulTensorDataset.java index d40d1dc2c25..3473d534e49 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkStatefulTensorDataset.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkStatefulTensorDataset.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkTensorBatchDataset.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkTensorBatchDataset.java index c9c16904e1b..81fdaf27545 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkTensorBatchDataset.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkTensorBatchDataset.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@Name("torch::data::datasets::BatchDataset,c10::optional,size_t>") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@Name("torch::data::datasets::BatchDataset,std::optional,size_t>") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class ChunkTensorBatchDataset extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkTensorDataReader.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkTensorDataReader.java index 3f165de5493..c354fa645db 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkTensorDataReader.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkTensorDataReader.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkTensorDataset.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkTensorDataset.java index bb02ac9f39a..668f11d5253 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkTensorDataset.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ChunkTensorDataset.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ClassAttribute.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ClassAttribute.java index 245e1e947ff..859180c2f12 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ClassAttribute.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ClassAttribute.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ClassDef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ClassDef.java index 4f6e30671a4..8deafc600b6 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ClassDef.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ClassDef.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ 
-24,8 +25,8 @@ public class ClassDef extends TreeView { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ public ClassDef(Pointer p) { super(p); } - public ClassDef(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public ClassDef(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @ByVal ClassDef withName(@StdString BytePointer new_name); public native @ByVal ClassDef withName(@StdString String new_name); public native @ByVal Ident name(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ClassType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ClassType.java index 8a19ca8e0ee..4118b5e9e37 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ClassType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ClassType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -50,6 +51,21 @@ public static class Property extends Pointer { } // Create a class type with name `name` and its methods stored in `cu`. + public static native @SharedPtr("c10::ClassType") @ByVal ClassType create( + @ByVal QualifiedNameOptional qualifiedName, + @WeakPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit cu, + @Cast("bool") boolean is_module/*=false*/, + @StdString BytePointer doc_string/*=""*/, + @ByVal(nullValue = "std::vector{}") StringVector unresolved_class_attributes); + public static native @SharedPtr("c10::ClassType") @ByVal ClassType create( + @ByVal QualifiedNameOptional qualifiedName, + @WeakPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit cu); + public static native @SharedPtr("c10::ClassType") @ByVal ClassType create( + @ByVal QualifiedNameOptional qualifiedName, + @WeakPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit cu, + @Cast("bool") boolean is_module/*=false*/, + @StdString String doc_string/*=""*/, + @ByVal(nullValue = "std::vector{}") StringVector unresolved_class_attributes); public native @Cast("bool") boolean equals(@Const @ByRef Type rhs); @@ -243,7 +259,7 @@ public native void checkForwardHookSchema( public native void unsafeRemoveMethod(@StdString BytePointer name); public native void unsafeRemoveMethod(@StdString String name); - public native @SharedPtr CompilationUnit compilation_unit(); + public native @SharedPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit compilation_unit(); // generate a refined version of this class. 
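As a rough illustration of the new ClassType.create() overloads mapped above, the sketch below builds a named class type against a fresh CompilationUnit. The QualifiedName and QualifiedNameOptional constructors, and passing a Java-allocated CompilationUnit where the native API expects a weak_ptr, are assumptions about the generated wrappers rather than anything shown in this diff.

import org.bytedeco.pytorch.*;

public class ClassTypeCreateSketch {
    public static void main(String[] args) {
        // A fresh compilation unit to hold the methods of the new class type.
        CompilationUnit cu = new CompilationUnit();
        // Assumed wrapper constructors: QualifiedName(String) and
        // QualifiedNameOptional(QualifiedName); the binding itself converts
        // `cu` into the weak_ptr parameter of the native create().
        ClassType cls = ClassType.create(
                new QualifiedNameOptional(new QualifiedName("mymodule.MyClass")),
                cu);
        System.out.println(cls.isNull() ? "no class type" : "class type created");
    }
}
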
// It has the same name but the slot Types are subtypes of diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ClassTypePropertyOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ClassTypePropertyOptional.java index 0f6ee68e172..7223c08dae0 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ClassTypePropertyOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ClassTypePropertyOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class ClassTypePropertyOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ClassValue.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ClassValue.java index b740cf551de..c006d9cc409 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ClassValue.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ClassValue.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ClosureValue.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ClosureValue.java index 8a2c916eebd..2025e194e4e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ClosureValue.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ClosureValue.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Code.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Code.java index b779f88056b..2fbec6db4b2 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Code.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Code.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import 
org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/CodeImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/CodeImpl.java index 56fe902a48c..da45fe4515b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/CodeImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/CodeImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/CommHookInterface.java b/pytorch/src/gen/java/org/bytedeco/pytorch/CommHookInterface.java new file mode 100644 index 00000000000..cea0004a55d --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/CommHookInterface.java @@ -0,0 +1,43 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +// Base class of both `PythonCommHook` and `CppCommHook`. +// Requires implementing 1) `runHook` method that communicates gradients +// asynchronously, and 2) `parseHookResult` method that converts the hook +// result into a tensor. +@Namespace("c10d") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class CommHookInterface extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public CommHookInterface(Pointer p) { super(p); } + + + // Passes the input grad bucket to the registered communication hook. + // Once the tensor in the bucket are ready, kicks off the hook asynchronously + // and returns a future that holds the communication results. + public native @IntrusivePtr("c10::ivalue::Future") @Cast({"", "c10::intrusive_ptr&"}) Future runHook( + @ByRef GradBucket bucket); + + // Returns the resulting tensor once the communication hook result is + // ready. The resulting tensor will then be copied to the grads of + // individual parameters. 
+ public native @ByVal Tensor parseHookResult(@Const @ByRef IValue result); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/CompilationUnit.java b/pytorch/src/gen/java/org/bytedeco/pytorch/CompilationUnit.java index 80bfa8cea8e..954bb5cc311 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/CompilationUnit.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/CompilationUnit.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -71,7 +72,7 @@ public enum FunctionType { Method(0), Hook(1), PreHook(2); @Const @ByRef ResolverVector defResolvers, @Const Self self, @Cast("bool") boolean shouldMangle/*=false*/, - @ByVal(nullValue = "c10::optional(c10::nullopt)") SizeTOptional operator_set_version); + @ByVal(nullValue = "std::optional(c10::nullopt)") SizeTOptional operator_set_version); public native @ByVal FunctionVector define( @Const @ByRef QualifiedNameOptional prefix, @Const @ByRef PropertyVector properties, diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/CompileTimeEmptyString.java b/pytorch/src/gen/java/org/bytedeco/pytorch/CompileTimeEmptyString.java index 3b493e6d8cb..3f07592738f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/CompileTimeEmptyString.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/CompileTimeEmptyString.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/CompiledNodeArgs.java b/pytorch/src/gen/java/org/bytedeco/pytorch/CompiledNodeArgs.java index 778fdee0f15..1e6ede7a4dd 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/CompiledNodeArgs.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/CompiledNodeArgs.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,13 +13,77 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@Namespace("torch::dynamo::autograd") @Opaque @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) + +@Namespace("torch::dynamo::autograd") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class CompiledNodeArgs extends Pointer { - /** Empty 
constructor. Calls {@code super((Pointer)null)}. */ - public CompiledNodeArgs() { super((Pointer)null); } + static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ public CompiledNodeArgs(Pointer p) { super(p); } + + public native void collect(@Const @ByRef DynamoTensorArg t); + + public native void collect(@Const @ByRef Tensor t); + public native void collect(@Const @ByRef SymInt t); + public native void collect(@Const @ByRef IValue iv); + public native void collect(@Const @ByRef Scalar t); + public native void collect(@Const @ByRef TensorOptions t); + public native void collect(@Const @ByRef TensorGeometry t); + public native void collect(@Const @ByRef Device t); + public native void collect(@StdString BytePointer t); + public native void collect(@StdString String t); + public native void collect(@Const @ByRef TypeMeta t); + public native void collect(@Cast({"", "const std::shared_ptr"}) @SharedPtr Node t); + public native void collect(@Const @ByRef NodeCall t); + public native void collect(@Const @ByRef Edge t); + public native void collect(@Const @ByRef VariableInfo t); + public native @Cast("bool") boolean cond(@Cast("bool") boolean cond); + +// #define COLLECT_AS_BYTES(T) +// void collect(T t) { +// specialize_on_bytes(t); +// } + public native void collect(ScalarType t); + public native void collect(DeviceType t); + public native void collect(@Cast("c10::DeviceType") byte t); + public native void collect(Layout t); + public native void collect(MemoryFormat t); + public native void collect(short t); + public native void collect(int t); + public native void collect(@Cast("int64_t") long t); + public native void collect(@Cast("bool") boolean t); + public native void collect(float t); + public native void collect(double t); +// #undef COLLECT_AS_BYTES + + public native void collect_hooks_from(Node fn); + + public native @ByVal CacheKey key(); + + public native @Cast("size_t") long add_backward(@ByRef(true) SafePyObject obj); + + public native @Cast("size_t") long add_backward_state(@ByRef(true) SafePyObject obj); + + public native void add_tensor_pre_hook(@ByRef(true) SafePyObject obj, int index); + + public native void add_pre_hook(@ByRef(true) SafePyObject obj); + + public native void add_post_hook(@ByRef(true) SafePyObject obj); + + public native void add_post_acc_grad_hook(@ByRef(true) SafePyObject obj); + + // Need to template the size_t to silence internal 32-bit build errors due to + // a mix of -Werror, -Wtautological-type-limit-compare and + // -Wunknown-pragmas + + public native SizeInput.DynType set_default_dyn_type(SizeInput.DynType default_dyn_type); + public native @Cast("torch::dynamo::autograd::SizeInput::DynType") byte set_default_dyn_type(@Cast("torch::dynamo::autograd::SizeInput::DynType") byte default_dyn_type); + + public CompiledNodeArgs(@ByRef AutogradCompilerCall compiler, @ByRef NodeCall node_call) { super((Pointer)null); allocate(compiler, node_call); } + private native void allocate(@ByRef AutogradCompilerCall compiler, @ByRef NodeCall node_call); + } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ComplexType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ComplexType.java index 2b4228b06da..ef122442d94 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ComplexType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ComplexType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; 
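Stepping back from the generated hunks for a moment: the new CommHookInterface binding above exposes the two-step DDP communication-hook contract, runHook() followed by parseHookResult(). A minimal sketch of how the two calls fit together is below; the hook and bucket are taken as parameters because constructing them is out of scope here, and the Future.value() accessor is assumed to mirror c10::ivalue::Future.

import org.bytedeco.pytorch.*;

public class CommHookSketch {
    // Runs an already-registered communication hook on one gradient bucket
    // and converts the hook result back into a tensor.
    static Tensor reduceBucket(CommHookInterface hook, GradBucket bucket) {
        // Kicks off the asynchronous communication for this bucket.
        Future result = hook.runHook(bucket);
        // Once the future has completed (waiting is elided in this sketch),
        // parseHookResult() turns the IValue payload back into a tensor that
        // DDP copies into the individual parameter grads.
        return hook.parseHookResult(result.value());
    }
}
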
import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ComplexTypePtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ComplexTypePtr.java index 40343406921..a8f92c42f3f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ComplexTypePtr.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ComplexTypePtr.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Compound.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Compound.java index 097c118f750..84cedd936d1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Compound.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Compound.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -29,7 +30,7 @@ public class Compound extends Tree { public Compound(int kind, @Const @ByRef SourceRange range_, @Cast("torch::jit::TreeList*") @ByRef(true) SymDimVector trees_) { super((Pointer)null); allocate(kind, range_, trees_); } private native void allocate(int kind, @Const @ByRef SourceRange range_, @Cast("torch::jit::TreeList*") @ByRef(true) SymDimVector trees_); public native @Cast("const torch::jit::TreeList*") @ByRef SymDimVector trees(); - public static native @ByVal TreeRef create( + public static native @IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree create( int kind, @Const @ByRef SourceRange range_, @Cast("torch::jit::TreeList*") @ByRef(true) SymDimVector trees_); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ConstExpr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ConstExpr.java index 9f15c3265af..13ec4184bca 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ConstExpr.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ConstExpr.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static 
org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,8 +25,8 @@ public class ConstExpr extends Expr { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ public ConstExpr(Pointer p) { super(p); } - public ConstExpr(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public ConstExpr(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @Cast("bool") boolean isFloatingPoint(); public native @Cast("bool") boolean isIntegral(); public native @Cast("bool") boolean isComplex(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad1dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad1dImpl.java index c756a5be903..aedd9ad6744 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad1dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad1dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ConstantPad1d ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies ConstantPad over a 1-D input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.ConstantPad1d to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.ConstantPad1d to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::ConstantPad1dOptions} class to learn diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad1dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad1dImplBase.java index ba3b49901cd..9cf6889f720 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad1dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad1dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad1dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad1dImplCloneable.java index eb4a8ed8c7b..6879bed85d7 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad1dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad1dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class ConstantPad1dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad1dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad1dOptions.java index df804685b72..bd6ff48021f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad1dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad1dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad2dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad2dImpl.java index 120ff47601e..c7335c30896 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad2dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad2dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ConstantPad2d ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies ConstantPad over a 2-D input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.ConstantPad2d to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.ConstantPad2d to learn * about the exact behavior of this module. 
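The only user-visible part of the c10::optional to std::optional switch in these clone() hunks is the native default-value string; the Java overloads themselves are unchanged. A small sketch, with the LinearImpl, Device and DeviceOptional constructors assumed rather than taken from this diff:

import org.bytedeco.pytorch.*;

public class CloneSketch {
    public static void main(String[] args) {
        // Any concrete module works; Linear is used only to have something to
        // copy (assumed constructor: in_features, out_features).
        LinearImpl net = new LinearImpl(4, 2);
        // Deep copy with no target device (the std::optional default applies).
        Module copy = net.clone();
        // Deep copy onto an explicit device; DeviceOptional(Device) and
        // Device(String) are assumed wrapper constructors.
        Module cpuCopy = net.clone(new DeviceOptional(new Device("cpu")));
    }
}
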
* * See the documentation for {@code torch::nn::ConstantPad2dOptions} class to learn diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad2dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad2dImplBase.java index 84517d5fcb2..45781ef4327 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad2dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad2dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad2dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad2dImplCloneable.java index 81323b5e5c0..34d9fde521d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad2dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad2dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class ConstantPad2dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad2dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad2dOptions.java index e6c3d2da67e..32db42c471c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad2dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad2dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad3dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad3dImpl.java index 34fa4bda9f6..d9b6ff395c6 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad3dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad3dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ConstantPad3d ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies ConstantPad over a 3-D input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.ConstantPad3d to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.ConstantPad3d to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::ConstantPad3dOptions} class to learn diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad3dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad3dImplBase.java index df7449f8b7d..4fff865b8b0 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad3dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad3dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad3dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad3dImplCloneable.java index 18409adf29e..35b21d86868 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad3dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad3dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class ConstantPad3dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad3dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad3dOptions.java index cadeea7485d..f17b82fcfaa 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad3dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantPad3dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantString.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantString.java index 74e14d15f19..1e35d725874 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantString.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantString.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -26,11 +27,11 @@ public class ConstantString extends Pointer { public ConstantString(Pointer p) { super(p); } public ConstantString(@StdString BytePointer str) { super((Pointer)null); allocate(str); } - private native void allocate(@StdString BytePointer str); + @IntrusivePtr @Name("c10::make_intrusive") private native void allocate(@StdString BytePointer str); public ConstantString(@StdString String str) { super((Pointer)null); allocate(str); } - private native void allocate(@StdString String str); - public static native @ByVal ConstantStringPtr create(@StdString BytePointer str_); - public static native @ByVal ConstantStringPtr create(@StdString String str_); + @IntrusivePtr @Name("c10::make_intrusive") private native void allocate(@StdString String str); + public static native @IntrusivePtr("c10::ivalue::ConstantString") @Cast({"", "c10::intrusive_ptr&"}) ConstantString create(@StdString BytePointer str_); + public static native @IntrusivePtr("c10::ivalue::ConstantString") @Cast({"", "c10::intrusive_ptr&"}) ConstantString create(@StdString String str_); public native @StdString BytePointer string(); public native @StringView BytePointer string_view(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantStringPtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantStringPtr.java deleted file mode 100644 index 5953da42afc..00000000000 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ConstantStringPtr.java +++ 
/dev/null @@ -1,154 +0,0 @@ -// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE - -package org.bytedeco.pytorch; - -import org.bytedeco.pytorch.Allocator; -import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; -import org.bytedeco.pytorch.Module; -import org.bytedeco.javacpp.annotation.Cast; -import java.nio.*; -import org.bytedeco.javacpp.*; -import org.bytedeco.javacpp.annotation.*; - -import static org.bytedeco.javacpp.presets.javacpp.*; -import static org.bytedeco.openblas.global.openblas_nolapack.*; -import static org.bytedeco.openblas.global.openblas.*; - -import static org.bytedeco.pytorch.global.torch.*; - - -@Name("c10::intrusive_ptr") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class ConstantStringPtr extends Pointer { - static { Loader.load(); } - /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ - public ConstantStringPtr(Pointer p) { super(p); } - /** Native array allocator. Access with {@link Pointer#position(long)}. */ - public ConstantStringPtr(long size) { super((Pointer)null); allocateArray(size); } - private native void allocateArray(long size); - @Override public ConstantStringPtr position(long position) { - return (ConstantStringPtr)super.position(position); - } - @Override public ConstantStringPtr getPointer(long i) { - return new ConstantStringPtr((Pointer)this).offsetAddress(i); - } - - - public ConstantStringPtr() { super((Pointer)null); allocate(); } - @NoException(true) private native void allocate(); - - public ConstantStringPtr(@ByVal @Cast("std::nullptr_t*") PointerPointer arg0) { super((Pointer)null); allocate(arg0); } - @NoException(true) private native void allocate(@ByVal @Cast("std::nullptr_t*") PointerPointer arg0); - - // This constructor will not increase the ref counter for you. - // We use the tagged dispatch mechanism to explicitly mark this constructor - // to not increase the refcount - public ConstantStringPtr(ConstantString target, @ByVal DontIncreaseRefcount arg1) { super((Pointer)null); allocate(target, arg1); } - @NoException(true) private native void allocate(ConstantString target, @ByVal DontIncreaseRefcount arg1); - - - - public ConstantStringPtr(@ByRef(true) ConstantStringPtr rhs) { super((Pointer)null); allocate(rhs); } - @NoException(true) private native void allocate(@ByRef(true) ConstantStringPtr rhs); - - public native @ByRef @Name("operator =") @NoException(true) ConstantStringPtr put(@ByRef(true) ConstantStringPtr rhs); - - // Assignment is implemented using copy and swap. That's safe for self - // assignment. - // NOLINTNEXTLINE(bugprone-unhandled-self-assignment) - - public native @NoException(true) ConstantString get(); - - public native @ByRef @Name("operator *") @NoException(true) ConstantString multiply(); - - public native @Name("operator ->") @NoException(true) ConstantString access(); - - public native @Cast("bool") @Name("operator bool") @NoException(true) boolean asBoolean(); - - public native @NoException(true) void reset(); - - public native @NoException(true) void swap(@ByRef ConstantStringPtr rhs); - - // We do a lot of null-pointer checks in our code, good to have this be cheap. - public native @Cast("bool") @NoException(true) boolean defined(); - - public native @Cast("uint32_t") @NoException(true) int use_count(); - - public native @Cast("uint32_t") @NoException(true) int weak_use_count(); - - public native @Cast("bool") @NoException(true) boolean unique(); - - /** - * Returns an owning (!) 
pointer to the underlying object and makes the - * intrusive_ptr instance invalid. That means the refcount is not decreased. - * You *must* put the returned pointer back into a intrusive_ptr using - * intrusive_ptr::reclaim(ptr) to properly destruct it. - * This is helpful for C APIs. - */ - public native @NoException(true) ConstantString release(); - - /** - * Takes an owning pointer to TTarget* and creates an intrusive_ptr that takes - * over ownership. That means the refcount is not increased. - * This is the counter-part to intrusive_ptr::release() and the pointer - * passed in *must* have been created using intrusive_ptr::release(). - */ - public static native @ByVal ConstantStringPtr reclaim(ConstantString owning_ptr); - - /** - * Takes an owning pointer to TTarget* and creates an intrusive_ptr - * representing a new reference, i.e. the raw pointer retains - * ownership. - */ - public static native @ByVal ConstantStringPtr reclaim_copy(ConstantString owning_ptr); - - /** - * Allocate a heap object with args and wrap it inside a intrusive_ptr and - * incref. This is a helper function to let make_intrusive() access private - * intrusive_ptr constructors. - */ - - /** - * Turn a new instance of TTarget (e.g., literally allocated - * using new TTarget(...) into an intrusive_ptr. If possible, - * use intrusive_ptr::make instead which statically guarantees - * that the allocation was done properly. - * - * At the moment, the only reason this method exists is because - * pybind11 holder types expect to be able to allocate in - * this way (because pybind11 handles the new allocation itself). - */ - public static native @ByVal ConstantStringPtr unsafe_steal_from_new(ConstantString raw_ptr); - - /** - * Turn an instance of TTarget that should not be reference counted - * (e.g., allocated into an arena with placement new) into an - * intrusive_ptr. This is gratuitously unsafe and should only be - * used if you can guarantee that the pointer will not escape and be - * refcounted as normal. - * - * {@code expected_decrefs} is a debugging parameter: it indicates the - * number of strong owners the intrusive_ptr_target in question is - * expected to get. In most use cases, this will likely be 1. - * - * The reason this method exists is for manually sharing - * StorageImpls across Tensors in the static runtime. It needs - * access to private intrusive_ptr members so that the refcounts can - * be initialized to custom values. - */ - public static native @ByVal ConstantStringPtr unsafe_adapt_non_heap_allocated( - ConstantString raw_ptr, - @Cast("uint32_t") int expected_decrefs); - - /** - * Turn a **non-owning raw pointer** to an intrusive_ptr. It is - * the moral equivalent of enable_shared_from_this on a shared pointer. - * - * This method is only valid for objects that are already live. If - * you are looking for the moral equivalent of unique_ptr(T*) - * constructor, see steal_from_new. 
- * - * TODO: https://github.com/pytorch/pytorch/issues/56482 - */ - public static native @ByVal ConstantStringPtr unsafe_reclaim_from_nonowning(ConstantString raw_ptr); -} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Context.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Context.java index 0a261ba302a..148e6fd68be 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Context.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Context.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -38,7 +39,7 @@ public class Context extends Pointer { public native @Const @ByRef Generator defaultGenerator(@ByVal Device device); public native @Const @ByRef AcceleratorHooksInterface getAcceleratorHooksInterface( - @ByVal(nullValue = "c10::optional(c10::nullopt)") DeviceTypeOptional opt_device_type); + @ByVal(nullValue = "std::optional(c10::nullopt)") DeviceTypeOptional opt_device_type); public native @Const @ByRef AcceleratorHooksInterface getAcceleratorHooksInterface(); public native @ByVal Device getDeviceFromPtr(Pointer data, DeviceType device_type); public native @ByVal Device getDeviceFromPtr(Pointer data, @Cast("c10::DeviceType") byte device_type); @@ -55,18 +56,20 @@ public class Context extends Pointer { public static native @Cast("bool") boolean hasCuDNN(); public static native long versionCuDNN(); public static native @Cast("bool") boolean hasCuSOLVER(); + public static native @Cast("bool") boolean hasCuBLASLt(); public static native @Cast("bool") boolean hasHIP(); public static native @Cast("bool") boolean hasMPS(); public static native @Cast("bool") boolean hasIPU(); public static native @Cast("bool") boolean hasXLA(); public static native @Cast("bool") boolean hasXPU(); public static native @Cast("bool") boolean hasLazy(); - public static native @Cast("bool") boolean hasORT(); + public static native @Cast("bool") boolean hasMAIA(); // defined in header so that getNonVariableType has ability to inline // call_once check. 
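With the ConstantStringPtr wrapper deleted above, the intrusive_ptr plumbing it documented is no longer spelled out on the Java side: ConstantString.create() now returns the ConstantString itself under the @IntrusivePtr annotation, as the ConstantString hunk earlier shows. A short sketch of the resulting usage:

import org.bytedeco.javacpp.BytePointer;
import org.bytedeco.pytorch.ConstantString;

public class ConstantStringSketch {
    public static void main(String[] args) {
        // create() hands back the intrusive_ptr-managed object directly;
        // no separate ConstantStringPtr holder is involved anymore.
        ConstantString s = ConstantString.create("hello");
        BytePointer text = s.string();  // std::string accessor from the binding
        System.out.println(text.getString());
    }
}
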
getNonVariableType is called fairly frequently public native void lazyInitCUDA(); public native void lazyInitHIP(); public native void lazyInitXPU(); + public native void lazyInitMTIA(); public native void lazyInitPrivateUse1(); public static native @Cast("const at::cuda::NVRTC*") @ByRef Pointer getNVRTC(); @@ -116,6 +119,10 @@ public class Context extends Pointer { public native void setLinalgPreferredBackend(LinalgBackend arg0); public native void setLinalgPreferredBackend(@Cast("at::LinalgBackend") byte arg0); + public native BlasBackend blasPreferredBackend(); + public native void setBlasPreferredBackend(BlasBackend arg0); + public native void setBlasPreferredBackend(@Cast("at::BlasBackend") byte arg0); + // Note [Enabling Deterministic Operations] // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ // Operations in PyTorch that normally act nondeterministically, but have an diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Continue.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Continue.java index 0a47ef3e3db..c3482512ca1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Continue.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Continue.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,7 +25,7 @@ public class Continue extends Stmt { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
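The Context hunk above adds a few capability probes (hasCuBLASLt, and hasMAIA replacing hasORT) plus the BLAS preferred-backend accessors next to the existing CUDA/cuDNN queries. Since the probes are static natives, they can be called directly; a minimal sketch:

import org.bytedeco.pytorch.Context;

public class ContextProbeSketch {
    public static void main(String[] args) {
        // New in this mapping: cuBLASLt availability and the MAIA backend
        // (which replaces the old hasORT() query).
        System.out.println("cuBLASLt: " + Context.hasCuBLASLt());
        System.out.println("MAIA:     " + Context.hasMAIA());
        // Existing probes still work the same way; keep version queries
        // behind the availability check.
        if (Context.hasCuDNN()) {
            System.out.println("cuDNN version: " + Context.versionCuDNN());
        }
    }
}
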
*/ public Continue(Pointer p) { super(p); } - public Continue(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public Continue(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public static native @ByVal Continue create(@Const @ByRef SourceRange range); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Conv1dFuncOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Conv1dFuncOptions.java index f0ab894ff77..1c1b422df7c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Conv1dFuncOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Conv1dFuncOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Conv1dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Conv1dImpl.java index 9598c469633..5706051a968 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Conv1dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Conv1dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Conv1d ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies convolution over a 1-D input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.Conv1d to learn about + * See https://pytorch.org/docs/main/nn.html#torch.nn.Conv1d to learn about * the exact behavior of this module. 
* * See the documentation for {@code torch::nn::Conv1dOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Conv1dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Conv1dImplBase.java index f41914c9da3..995da7cfd6a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Conv1dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Conv1dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Conv1dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Conv1dImplCloneable.java index 33989a694a0..c074380c569 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Conv1dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Conv1dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class Conv1dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Conv1dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Conv1dOptions.java index c33496b60be..0c0d40a649d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Conv1dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Conv1dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace detail diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Conv1dPadding.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Conv1dPadding.java index 4b7fec2f254..a53e34a8299 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Conv1dPadding.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Conv1dPadding.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Conv2dFuncOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Conv2dFuncOptions.java index e584973c787..2aee8d5876f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Conv2dFuncOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Conv2dFuncOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Conv2dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Conv2dImpl.java index bcbba04019d..b33153d3e40 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Conv2dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Conv2dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static 
org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Conv2d ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies convolution over a 2-D input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.Conv2d to learn about + * See https://pytorch.org/docs/main/nn.html#torch.nn.Conv2d to learn about * the exact behavior of this module. * * See the documentation for {@code torch::nn::Conv2dOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Conv2dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Conv2dImplBase.java index 69e7a14c0b7..ef8fae66a6b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Conv2dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Conv2dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Conv2dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Conv2dImplCloneable.java index fa81cc3a2c5..fddb42dce8a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Conv2dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Conv2dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class Conv2dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Conv2dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Conv2dOptions.java index ba32723ac78..3957baec392 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Conv2dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Conv2dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Conv2dPadding.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Conv2dPadding.java index 61c4261f4b2..64929a64365 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Conv2dPadding.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Conv2dPadding.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Conv3dFuncOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Conv3dFuncOptions.java index a2d7a206d87..1d5ba6e1e66 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Conv3dFuncOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Conv3dFuncOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Conv3dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Conv3dImpl.java index 43ec04d34ab..0990a9fd25d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Conv3dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Conv3dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static 
org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Conv3d ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies convolution over a 3-D input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.Conv3d to learn about + * See https://pytorch.org/docs/main/nn.html#torch.nn.Conv3d to learn about * the exact behavior of this module. * * See the documentation for {@code torch::nn::Conv3dOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Conv3dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Conv3dImplBase.java index 466040b9962..e47b595e7d3 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Conv3dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Conv3dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Conv3dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Conv3dImplCloneable.java index a67ab190ce9..81aa2a34c37 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Conv3dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Conv3dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class Conv3dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Conv3dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Conv3dOptions.java index e7dd8351238..19014e17648 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Conv3dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Conv3dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Conv3dPadding.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Conv3dPadding.java index 86affc897c2..5cebe7a38fc 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Conv3dPadding.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Conv3dPadding.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvPaddingMode.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvPaddingMode.java index 6454ff02e98..bb99d013380 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvPaddingMode.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvPaddingMode.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose1dFuncOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose1dFuncOptions.java index 1dee0a27b19..05f4d981c2c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose1dFuncOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose1dFuncOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 
+13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose1dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose1dImpl.java index 70c2af64001..0d180745418 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose1dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose1dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -22,7 +23,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies the ConvTranspose1d function. - * See https://pytorch.org/docs/master/nn.html#torch.nn.ConvTranspose1d to + * See https://pytorch.org/docs/main/nn.html#torch.nn.ConvTranspose1d to * learn about the exact behavior of this module. * * See the documentation for {@code torch::nn::ConvTranspose1dOptions} class to learn @@ -51,10 +52,10 @@ public ConvTranspose1dImpl( @SharedPtr @Name("std::make_shared") private native void allocate(@ByVal ConvTranspose1dOptions options_); public native @ByVal Tensor forward( @Const @ByRef Tensor input, - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") LongArrayRefOptional output_size); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") LongArrayRefOptional output_size); public native @ByVal Tensor forward( @Const @ByRef Tensor input); public native @ByVal Tensor forward( @Const @ByRef Tensor input, - @ByRef(nullValue = "c10::optional(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... output_size); + @ByRef(nullValue = "std::optional(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... 
output_size); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose1dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose1dImplBase.java index 486f8e6de8c..903bd4f22eb 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose1dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose1dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose1dImplBaseBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose1dImplBaseBase.java index 7fc4690f041..8ad86c01ff2 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose1dImplBaseBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose1dImplBaseBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose1dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose1dImplCloneable.java index ca6eb38739d..fab0daed646 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose1dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose1dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class ConvTranspose1dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose1dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose1dOptions.java index f8cf246f305..392da4398ec 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose1dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose1dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose2dFuncOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose2dFuncOptions.java index 336ac8f6073..21ac7f0b7c0 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose2dFuncOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose2dFuncOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose2dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose2dImpl.java index f1ac7203ac1..e00009621c0 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose2dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose2dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -22,7 +23,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies the ConvTranspose2d function. - * See https://pytorch.org/docs/master/nn.html#torch.nn.ConvTranspose2d to + * See https://pytorch.org/docs/main/nn.html#torch.nn.ConvTranspose2d to * learn about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::ConvTranspose2dOptions} class to learn @@ -51,10 +52,10 @@ public ConvTranspose2dImpl( @SharedPtr @Name("std::make_shared") private native void allocate(@ByVal ConvTranspose2dOptions options_); public native @ByVal Tensor forward( @Const @ByRef Tensor input, - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") LongArrayRefOptional output_size); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") LongArrayRefOptional output_size); public native @ByVal Tensor forward( @Const @ByRef Tensor input); public native @ByVal Tensor forward( @Const @ByRef Tensor input, - @ByRef(nullValue = "c10::optional(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... output_size); + @ByRef(nullValue = "std::optional(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... output_size); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose2dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose2dImplBase.java index 2b6747cda3e..b34d97a1da6 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose2dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose2dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose2dImplBaseBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose2dImplBaseBase.java index 2c6a0855d50..3fb46498c6e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose2dImplBaseBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose2dImplBaseBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose2dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose2dImplCloneable.java index dd57a5ddc90..47ec0dc5f2a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose2dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose2dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; 
+import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class ConvTranspose2dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose2dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose2dOptions.java index ca3021ac5bd..e7955329632 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose2dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose2dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose3dFuncOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose3dFuncOptions.java index 86b32722e3f..e5700dfd23d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose3dFuncOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose3dFuncOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose3dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose3dImpl.java index 6762020b959..6eb214538c3 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose3dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose3dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -22,7 +23,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies the ConvTranspose3d function. 
- * See https://pytorch.org/docs/master/nn.html#torch.nn.ConvTranspose3d to + * See https://pytorch.org/docs/main/nn.html#torch.nn.ConvTranspose3d to * learn about the exact behavior of this module. * * See the documentation for {@code torch::nn::ConvTranspose3dOptions} class to learn @@ -51,10 +52,10 @@ public ConvTranspose3dImpl( @SharedPtr @Name("std::make_shared") private native void allocate(@ByVal ConvTranspose3dOptions options_); public native @ByVal Tensor forward( @Const @ByRef Tensor input, - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") LongArrayRefOptional output_size); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") LongArrayRefOptional output_size); public native @ByVal Tensor forward( @Const @ByRef Tensor input); public native @ByVal Tensor forward( @Const @ByRef Tensor input, - @ByRef(nullValue = "c10::optional(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... output_size); + @ByRef(nullValue = "std::optional(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... output_size); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose3dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose3dImplBase.java index ed5d75ef5ae..69544be67fb 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose3dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose3dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose3dImplBaseBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose3dImplBaseBase.java index d15f5df1cb6..67fcf364127 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose3dImplBaseBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose3dImplBaseBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose3dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose3dImplCloneable.java index 65b59c547c4..15f840cae2e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose3dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose3dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import 
java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class ConvTranspose3dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose3dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose3dOptions.java index 29a0fe486a2..31492feba48 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose3dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ConvTranspose3dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/CosineEmbeddingLossImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/CosineEmbeddingLossImpl.java index 3a2da58d4a3..aaca0e581d1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/CosineEmbeddingLossImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/CosineEmbeddingLossImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -26,7 +27,7 @@ * -1. This is used for measuring whether two inputs are similar or * dissimilar, using the cosine distance, and is typically used for learning * nonlinear embeddings or semi-supervised learning. - * See https://pytorch.org/docs/master/nn.html#torch.nn.CosineEmbeddingLoss to + * See https://pytorch.org/docs/main/nn.html#torch.nn.CosineEmbeddingLoss to * learn about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::CosineEmbeddingLossOptions} class to diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/CosineEmbeddingLossImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/CosineEmbeddingLossImplCloneable.java index 3b8c3e0440e..51c7b9676c1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/CosineEmbeddingLossImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/CosineEmbeddingLossImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class CosineEmbeddingLossImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/CosineEmbeddingLossOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/CosineEmbeddingLossOptions.java index 8275ab12f30..6905a5ef1fe 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/CosineEmbeddingLossOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/CosineEmbeddingLossOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/CosineSimilarityImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/CosineSimilarityImpl.java index 827f15cb262..0ea7023d8d4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/CosineSimilarityImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/CosineSimilarityImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,13 +13,15 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; /** Returns the cosine similarity between :math:{@code x_1} and :math:{@code x_2}, computed * along {@code dim}. 
- * See https://pytorch.org/docs/master/nn.html#torch.nn.CosineSimilarity to + * See https://pytorch.org/docs/main/nn.html#torch.nn.CosineSimilarity to * learn about the exact behavior of this module. * * See the documentation for {@code torch::nn::CosineSimilarityOptions} class to diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/CosineSimilarityImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/CosineSimilarityImplCloneable.java index 8a650a3a79b..a4ca54aa408 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/CosineSimilarityImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/CosineSimilarityImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class CosineSimilarityImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/CosineSimilarityOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/CosineSimilarityOptions.java index b208d407929..20327854a2a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/CosineSimilarityOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/CosineSimilarityOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/CppFunction.java b/pytorch/src/gen/java/org/bytedeco/pytorch/CppFunction.java index b89554d9d1e..7dfc2e29728 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/CppFunction.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/CppFunction.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git 
a/pytorch/src/gen/java/org/bytedeco/pytorch/CppSignature.java b/pytorch/src/gen/java/org/bytedeco/pytorch/CppSignature.java index a2385feeac3..7478aefeb33 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/CppSignature.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/CppSignature.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/CppSignatureOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/CppSignatureOptional.java index cae6a80365f..1b344328df1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/CppSignatureOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/CppSignatureOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class CppSignatureOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/CrossEntropyLossImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/CrossEntropyLossImpl.java index 1172451ffa8..a3eb6dc0fea 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/CrossEntropyLossImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/CrossEntropyLossImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -23,7 +24,7 @@ /** Creates a criterion that computes cross entropy loss between input and * target. See - * https://pytorch.org/docs/master/nn.html#torch.nn.CrossEntropyLoss to learn + * https://pytorch.org/docs/main/nn.html#torch.nn.CrossEntropyLoss to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::CrossEntropyLossOptions} class to diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/CrossEntropyLossImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/CrossEntropyLossImplCloneable.java index 4842962650d..5681332dba9 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/CrossEntropyLossImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/CrossEntropyLossImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class CrossEntropyLossImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/CrossEntropyLossOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/CrossEntropyLossOptions.java index db83588ab31..fb806c1a77e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/CrossEntropyLossOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/CrossEntropyLossOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/CrossMapLRN2dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/CrossMapLRN2dImpl.java index 6859e3b05c2..feb47c6c5d9 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/CrossMapLRN2dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/CrossMapLRN2dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/CrossMapLRN2dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/CrossMapLRN2dImplCloneable.java index 407fadcba12..6142160122d 100644 --- 
a/pytorch/src/gen/java/org/bytedeco/pytorch/CrossMapLRN2dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/CrossMapLRN2dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class CrossMapLRN2dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/CrossMapLRN2dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/CrossMapLRN2dOptions.java index 5f133a4d68b..e3b8e39a34f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/CrossMapLRN2dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/CrossMapLRN2dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/CustomBatchRequest.java b/pytorch/src/gen/java/org/bytedeco/pytorch/CustomBatchRequest.java index 479428dbcc1..069e6deb6d4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/CustomBatchRequest.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/CustomBatchRequest.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/CustomClassHolder.java b/pytorch/src/gen/java/org/bytedeco/pytorch/CustomClassHolder.java index 31e913b9cc5..3c221bb2056 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/CustomClassHolder.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/CustomClassHolder.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import 
org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DDPLoggingData.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DDPLoggingData.java index 4eab385c552..7539abb13b3 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DDPLoggingData.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DDPLoggingData.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DataLoaderOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DataLoaderOptions.java index 58c29b4d1a4..6b9ea7821e0 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DataLoaderOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DataLoaderOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,7 +33,7 @@ public class DataLoaderOptions extends Pointer { public native @Cast("size_t*") @ByRef @NoException(true) SizeTPointer batch_size(); public native @Cast("size_t*") @ByRef @NoException(true) SizeTPointer workers(); public native @ByRef @NoException(true) SizeTOptional max_jobs(); - public native @Cast("c10::optional*") @ByRef @NoException(true) Pointer timeout(); + public native @Optional @NoException(true) Milliseconds timeout(); public native @Cast("bool*") @ByRef @NoException(true) BoolPointer enforce_ordering(); public native @Cast("bool*") @ByRef @NoException(true) BoolPointer drop_last(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DataPtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DataPtr.java index 01a4199309d..089d6c278c8 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DataPtr.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DataPtr.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static 
org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DataPtrVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DataPtrVector.java index cd4bddf0dd1..30e4263fda4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DataPtrVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DataPtrVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DebugInfoBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DebugInfoBase.java index 9aeb5ca081e..aa803b1d42f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DebugInfoBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DebugInfoBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DebugInfoGuard.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DebugInfoGuard.java index f0e50ba3b9c..5b69bcb1754 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DebugInfoGuard.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DebugInfoGuard.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Decl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Decl.java index 5282d287598..4df609ce9a1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Decl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Decl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static 
org.bytedeco.pytorch.global.torch.*; @@ -28,8 +29,8 @@ public class Decl extends TreeView { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ public Decl(Pointer p) { super(p); } - public Decl(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public Decl(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @ByVal ParamList params(); public native @ByVal ExprMaybe return_type(); public static native @ByVal Decl create( diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Def.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Def.java index e81b9338315..f9485eb81d6 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Def.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Def.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,8 +25,8 @@ public class Def extends TreeView { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ public Def(Pointer p) { super(p); } - public Def(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public Def(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @ByVal Def withName(@StdString BytePointer new_name); public native @ByVal Def withName(@StdString String new_name); public native @ByVal Def withDecl(@Const @ByRef Decl decl); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DefMaybe.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DefMaybe.java index e1d98af8c93..f4d70bb0b06 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DefMaybe.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DefMaybe.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,8 +25,8 @@ public class DefMaybe extends TreeView { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public DefMaybe(Pointer p) { super(p); } - public DefMaybe(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public DefMaybe(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); /* implicit */ public DefMaybe(@Const @ByRef Def tree) { super((Pointer)null); allocate(tree); } private native void allocate(@Const @ByRef Def tree); public native @Cast("bool") boolean present(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DefVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DefVector.java index a50f96b8a82..f28a350bb50 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DefVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DefVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Delete.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Delete.java index 39b34b7f541..f3bd4acb8c4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Delete.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Delete.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,8 +25,8 @@ public class Delete extends Stmt { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public Delete(Pointer p) { super(p); } - public Delete(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public Delete(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @ByVal ExprList targets(); public static native @ByVal Delete create(@Const @ByRef SourceRange range, @Const @ByRef ExprList targets); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DeserializationStorageContext.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DeserializationStorageContext.java index 92500bc5cc3..331ad17bfa6 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DeserializationStorageContext.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DeserializationStorageContext.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DetailConv1dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DetailConv1dOptions.java index 5266008478c..b463ceffdc0 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DetailConv1dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DetailConv1dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DetailConv2dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DetailConv2dOptions.java index 129ea188245..652c544b2a1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DetailConv2dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DetailConv2dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DetailConv3dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DetailConv3dOptions.java index 0c81c19afe2..ba0942ba9c5 100644 --- 
a/pytorch/src/gen/java/org/bytedeco/pytorch/DetailConv3dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DetailConv3dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DetectAnomalyGuard.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DetectAnomalyGuard.java index e7450aad17d..cf7eb092fa8 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DetectAnomalyGuard.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DetectAnomalyGuard.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Device.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Device.java index 2bb6ffd6c06..125f4d2aaea 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Device.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Device.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -115,8 +116,8 @@ public class Device extends Pointer { /** Return true if the device is of Metal type. */ public native @Cast("bool") @NoException(true) boolean is_metal(); - /** Return true if the device is of ORT type. */ - public native @Cast("bool") @NoException(true) boolean is_ort(); + /** Return true if the device is of MAIA type. */ + public native @Cast("bool") @NoException(true) boolean is_maia(); /** Return true if the device is of META type. 
*/ public native @Cast("bool") @NoException(true) boolean is_meta(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DeviceGuardImplInterface.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DeviceGuardImplInterface.java index 30ba04ce7ae..3ad07ca0bbb 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DeviceGuardImplInterface.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DeviceGuardImplInterface.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -97,6 +98,14 @@ public class DeviceGuardImplInterface extends Pointer { public native @ByVal Stream getStreamFromGlobalPool(@ByVal Device arg0, @Cast("bool") boolean isHighPriority/*=false*/); public native @ByVal Stream getStreamFromGlobalPool(@ByVal Device arg0); + /** + * Return a new stream for a given device and priority. The stream will be + * copied and shared around, device backend should be able to correctly handle + * the lifetime of the stream. + */ + public native @ByVal Stream getNewStream(@ByVal Device arg0, int priority/*=0*/); + public native @ByVal Stream getNewStream(@ByVal Device arg0); + /** * Set a stream to be the thread local current stream for its device. * Return the previous stream for that device. You are NOT required @@ -168,6 +177,12 @@ public native void record( */ public native void synchronizeStream(@Const @ByRef Stream arg0); + /** + * Wait (by blocking the calling thread) until all the work previously + * recorded on the event has completed running on the device. + */ + public native void synchronizeEvent(Pointer arg0); + /** * Ensure the caching allocator (if any) is aware that the given DataPtr is * being used on the given stream, and that it should thus avoid recycling the @@ -175,6 +190,14 @@ public native void record( */ public native void recordDataPtrOnStream(@StdMove DataPtr arg0, @Const @ByRef Stream arg1); + /** + * Fetch the elapsed time between two recorded events. + */ + public native double elapsedTime( + Pointer arg0, + Pointer arg1, + @Cast("const c10::DeviceIndex") byte arg2); + /** * Intended use of this class is to leak the DeviceGuardImpl at program end. * So you better not call the destructor, buster! 
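Aside (not part of the generated patch): the hunk above adds `getNewStream`, `synchronizeEvent`, and `elapsedTime` to the `DeviceGuardImplInterface` bindings. Below is a minimal, hedged Java sketch of how the newly mapped stream methods might be called. The helper name `runOnFreshStream` is hypothetical, and how a concrete `DeviceGuardImplInterface` instance is obtained is backend-specific and left to the caller; only the method signatures shown in the hunk are relied on.

```java
// Illustrative sketch only: exercises methods newly mapped in the 2.4.x bindings above.
import org.bytedeco.pytorch.Device;
import org.bytedeco.pytorch.DeviceGuardImplInterface;
import org.bytedeco.pytorch.Stream;

public class StreamSketch {
    /** Hypothetical helper: create a fresh stream for a device, then wait for it to drain. */
    static void runOnFreshStream(DeviceGuardImplInterface impl, Device device) {
        // getNewStream(Device, int priority) is one of the methods added in this hunk.
        Stream stream = impl.getNewStream(device, /* priority= */ 0);
        // ... enqueue work on `stream` here (backend-specific, not shown) ...
        // Block the calling thread until everything queued on the stream has finished.
        impl.synchronizeStream(stream);
    }
}
```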
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DeviceGuardImplRegistrar.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DeviceGuardImplRegistrar.java index 8b6612ce82d..3771529a6a1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DeviceGuardImplRegistrar.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DeviceGuardImplRegistrar.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DeviceObjType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DeviceObjType.java index a34c7694173..48d80783cff 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DeviceObjType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DeviceObjType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DeviceObjTypePtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DeviceObjTypePtr.java index 850ca373b48..6e2f11167a1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DeviceObjTypePtr.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DeviceObjTypePtr.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DeviceOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DeviceOptional.java index 2c0e86d6f8c..d90daa052a1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DeviceOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DeviceOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static 
org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class DeviceOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DeviceTypeOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DeviceTypeOptional.java index fe724392c22..b64b1bfcaa6 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DeviceTypeOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DeviceTypeOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class DeviceTypeOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DeviceTypeSet.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DeviceTypeSet.java index c01b8ab4756..5a8f6189339 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DeviceTypeSet.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DeviceTypeSet.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DictComp.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DictComp.java index a80f1616258..f251e2e0546 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DictComp.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DictComp.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -25,8 +26,8 @@ public class DictComp extends Expr { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public DictComp(Pointer p) { super(p); } - public DictComp(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public DictComp(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @ByVal Expr key(); public native @ByVal Expr value(); public native @ByVal Expr target(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DictLiteral.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DictLiteral.java index ff431466229..33ece80004e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DictLiteral.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DictLiteral.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,8 +25,8 @@ public class DictLiteral extends Expr { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ public DictLiteral(Pointer p) { super(p); } - public DictLiteral(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public DictLiteral(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @ByVal ExprList key_inputs(); public native @ByVal ExprList value_inputs(); public static native @ByVal DictLiteral create( diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DictType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DictType.java index 0f0e920b918..4252122baed 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DictType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DictType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DimVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DimVector.java index 8b7a4992184..65eab5710a5 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DimVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DimVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import 
static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DimVectorInferExpandGeometryResult.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DimVectorInferExpandGeometryResult.java index 6ae1dc877f1..24bfa033c5d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DimVectorInferExpandGeometryResult.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DimVectorInferExpandGeometryResult.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DimVectorOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DimVectorOptional.java index ba737dc1e7e..5a58e3f68e2 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DimVectorOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DimVectorOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class DimVectorOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Dimname.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Dimname.java index d1aeb489911..f66aa23ff02 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Dimname.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Dimname.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DimnameArrayRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DimnameArrayRef.java index 040428171f0..a3999fbd22d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DimnameArrayRef.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DimnameArrayRef.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DimnameListOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DimnameListOptional.java index 88cb97f7484..39781755be9 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DimnameListOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DimnameListOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class DimnameListOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DimnameOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DimnameOptional.java index 979cb729d39..d0b3cf4bbda 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DimnameOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DimnameOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class DimnameOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DimnameVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DimnameVector.java index 2cc858d7c97..0646108b80e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DimnameVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DimnameVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DisablePythonDispatcher.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DisablePythonDispatcher.java index c700ecc78fc..8567d35b7e1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DisablePythonDispatcher.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DisablePythonDispatcher.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DisableRecordFunctionGuard.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DisableRecordFunctionGuard.java index d17ca88d144..dc445aba045 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DisableRecordFunctionGuard.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DisableRecordFunctionGuard.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import 
org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DisabledStr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DisabledStr.java index a8d294cb134..9a4d3f2b84e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DisabledStr.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DisabledStr.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DispatchKeyExtractor.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DispatchKeyExtractor.java index 4829954eb86..e9b7fc9af37 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DispatchKeyExtractor.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DispatchKeyExtractor.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DispatchKeyOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DispatchKeyOptional.java index 6aa7c050bd0..ab977833a1b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DispatchKeyOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DispatchKeyOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class DispatchKeyOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DispatchKeySet.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DispatchKeySet.java index 9c6dc5d6605..fcc2916b215 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DispatchKeySet.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DispatchKeySet.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Dispatcher.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Dispatcher.java index 02be66397b7..a0453e2d25c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Dispatcher.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Dispatcher.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -129,18 +130,18 @@ public class Dispatcher extends Pointer { public native @ByVal RegistrationHandleRAII registerImpl(@ByVal OperatorName op_name, @ByVal DispatchKeyOptional dispatch_key, @ByVal KernelFunction kernel, @ByVal CppSignatureOptional cpp_signature, @UniquePtr @ByVal FunctionSchema inferred_function_schema, @StdString String debug); /** - * Given an operator, tells the Dispatcher that we have implemented an abstract impl + * Given an operator, tells the Dispatcher that we have implemented a fake impl * for this op in the given Python module. Call this a "pystub". */ - public native @ByVal RegistrationHandleRAII registerAbstractImplPyStub(@Const @ByRef OperatorName op_name, @Cast("const char*") BytePointer pymodule, @Cast("const char*") BytePointer context); - public native @ByVal RegistrationHandleRAII registerAbstractImplPyStub(@Const @ByRef OperatorName op_name, String pymodule, String context); + public native @ByVal RegistrationHandleRAII registerPythonModule(@Const @ByRef OperatorName op_name, @Cast("const char*") BytePointer pymodule, @Cast("const char*") BytePointer context); + public native @ByVal RegistrationHandleRAII registerPythonModule(@Const @ByRef OperatorName op_name, String pymodule, String context); /** - * Given an operator, throws if we have an abstract impl pystub. + * Given an operator, throws if we have a pystub. */ - public native void throwIfHasAbstractImplPyStub(@ByVal OperatorName op_name); + public native void throwIfHasPythonModule(@ByVal OperatorName op_name); - public native @ByVal BytePointerPairOptional getAbstractImplPyStub(@ByVal OperatorName op_name); + public native @ByVal BytePointerPairOptional getPyStub(@ByVal OperatorName op_name); /** * Register a new operator by name. 
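Aside (not part of the generated patch): the `Dispatcher` hunk above tracks the upstream PyTorch 2.4 rename of the "abstract impl" pystub API, so `registerAbstractImplPyStub` becomes `registerPythonModule` and `throwIfHasAbstractImplPyStub` becomes `throwIfHasPythonModule`. A hedged call-site sketch follows; the helper name `registerFakeImplModule` and the `"my_module"` / `"my_context"` strings are placeholders, and obtaining the `Dispatcher` and `OperatorName` instances is left to the caller. Only the signatures visible in the hunk are used.

```java
// Illustrative sketch only: shows the call-site rename for code that previously
// used registerAbstractImplPyStub() in the 2.3.x bindings.
import org.bytedeco.pytorch.Dispatcher;
import org.bytedeco.pytorch.OperatorName;
import org.bytedeco.pytorch.RegistrationHandleRAII;

public class PyStubSketch {
    /** Hypothetical helper: register the Python module providing a fake impl for `op`. */
    static RegistrationHandleRAII registerFakeImplModule(Dispatcher dispatcher, OperatorName op) {
        // 2.3.x bindings: dispatcher.registerAbstractImplPyStub(op, "my_module", "my_context");
        // 2.4.x bindings expose the renamed method with the same shape:
        return dispatcher.registerPythonModule(op, "my_module", "my_context");
    }
}
```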
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DistStoreError.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DistStoreError.java deleted file mode 100644 index 9966e069871..00000000000 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DistStoreError.java +++ /dev/null @@ -1,29 +0,0 @@ -// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE - -package org.bytedeco.pytorch; - -import org.bytedeco.pytorch.Allocator; -import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; -import org.bytedeco.pytorch.Module; -import org.bytedeco.javacpp.annotation.Cast; -import java.nio.*; -import org.bytedeco.javacpp.*; -import org.bytedeco.javacpp.annotation.*; - -import static org.bytedeco.javacpp.presets.javacpp.*; -import static org.bytedeco.openblas.global.openblas_nolapack.*; -import static org.bytedeco.openblas.global.openblas.*; - -import static org.bytedeco.pytorch.global.torch.*; - - -// Used for errors originating from the store. -// These turn into DistStoreError when they cross into Python. -@Namespace("c10") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class DistStoreError extends DistError { - static { Loader.load(); } - /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ - public DistStoreError(Pointer p) { super(p); } - -} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DistributedBackend.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DistributedBackend.java new file mode 100644 index 00000000000..2087980d594 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DistributedBackend.java @@ -0,0 +1,280 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +@Name("c10d::Backend") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class DistributedBackend extends CustomClassHolder { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public DistributedBackend(Pointer p) { super(p); } + + // Backend Options is a base struct that defines the basic options + // when constructing a Backend. Each Backend subclass should + // extend this struct and define its options if it wants to provide more + // config options (beyond basic ones defined here) to end user. + @NoOffset public static class Options extends CustomClassHolder { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ + public Options(Pointer p) { super(p); } + + public Options( + @StdString BytePointer backend, + @ByVal(nullValue = "std::chrono::milliseconds(kBackendDefaultTimeout)") Milliseconds timeout) { super((Pointer)null); allocate(backend, timeout); } + private native void allocate( + @StdString BytePointer backend, + @ByVal(nullValue = "std::chrono::milliseconds(kBackendDefaultTimeout)") Milliseconds timeout); + public Options( + @StdString BytePointer backend) { super((Pointer)null); allocate(backend); } + private native void allocate( + @StdString BytePointer backend); + public Options( + @StdString String backend, + @ByVal(nullValue = "std::chrono::milliseconds(kBackendDefaultTimeout)") Milliseconds timeout) { super((Pointer)null); allocate(backend, timeout); } + private native void allocate( + @StdString String backend, + @ByVal(nullValue = "std::chrono::milliseconds(kBackendDefaultTimeout)") Milliseconds timeout); + public Options( + @StdString String backend) { super((Pointer)null); allocate(backend); } + private native void allocate( + @StdString String backend); + + public native @ByRef Milliseconds timeout(); public native Options timeout(Milliseconds setter); + + // backend name + // NOLINTNEXTLINE(cppcoreguidelines-avoid-const-or-ref-data-members) + @MemberGetter public native @StdString BytePointer backend(); + } + + public native int getRank(); + + public native int getSize(); + + // Returns an unique opaque ID of this backend that can be used to correlate + // with its collectives. + public native @Cast("int64_t") long getID(); + + public native @Cast("bool") boolean supportsSplitting(); + + public native void startCoalescing(); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work endCoalescing(); + + // Subclasses must override this method to return the backend name + public native @StdString BytePointer getBackendName(); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work broadcast( + @ByRef TensorVector arg0, + @Const @ByRef(nullValue = "c10d::BroadcastOptions()") BroadcastOptions arg1); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work broadcast( + @ByRef TensorVector arg0); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work allreduce( + @ByRef TensorVector arg0, + @Const @ByRef(nullValue = "c10d::AllreduceOptions()") AllreduceOptions arg1); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work allreduce( + @ByRef TensorVector arg0); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work allreduce_sparse( + @ByRef TensorVector arg0, + @Const @ByRef(nullValue = "c10d::AllreduceOptions()") AllreduceOptions arg1); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work allreduce_sparse( + @ByRef TensorVector arg0); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work allreduce_coalesced( + @ByRef TensorVector arg0, + @Const @ByRef(nullValue = "c10d::AllreduceCoalescedOptions()") AllreduceCoalescedOptions arg1); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work allreduce_coalesced( + @ByRef TensorVector arg0); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work reduce( + @ByRef TensorVector arg0, + @Const @ByRef(nullValue = "c10d::ReduceOptions()") ReduceOptions arg1); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work 
reduce( + @ByRef TensorVector arg0); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work allgather( + @StdVector TensorVector arg0, + @ByRef TensorVector arg1, + @Const @ByRef(nullValue = "c10d::AllgatherOptions()") AllgatherOptions arg2); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work allgather( + @StdVector TensorVector arg0, + @ByRef TensorVector arg1); + + // Gathers a single tensor inputBuffer into a single buffer outputBuffer that + // is interpreted as a contiguous collection of size inputBuffer * WORLD_SIZE. + // For implementers of ProcessGroup API and advanced users only. + // Note: this function will be deprecated in near future. + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work _allgather_base( + @ByRef Tensor arg0, + @ByRef Tensor arg1, + @Const @ByRef(nullValue = "c10d::AllgatherOptions()") AllgatherOptions arg2); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work _allgather_base( + @ByRef Tensor arg0, + @ByRef Tensor arg1); + + // This function is deprecated and will be moved out of Backend to comms: + // * do not add dependencies on this function, + // * do not implement it in your Backend, implement _allgather_base + // instead. + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work allgather_coalesced( + @StdVector TensorVector arg0, + @ByRef TensorVector arg1, + @Const @ByRef(nullValue = "c10d::AllgatherOptions()") AllgatherOptions arg2); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work allgather_coalesced( + @StdVector TensorVector arg0, + @ByRef TensorVector arg1); + + // This function is a coalesced version of `allgather_into_tensor` (currently + // still named as `_allgather_base`). Each tensor in the vector corresponds to + // an input/output of one `allgather_into_tensor` operation. 
+ public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work allgather_into_tensor_coalesced( + @ByRef TensorVector arg0, + @ByRef TensorVector arg1, + @Const @ByRef(nullValue = "c10d::AllgatherOptions()") AllgatherOptions arg2); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work allgather_into_tensor_coalesced( + @ByRef TensorVector arg0, + @ByRef TensorVector arg1); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work gather( + @StdVector TensorVector arg0, + @ByRef TensorVector arg1, + @Const @ByRef(nullValue = "c10d::GatherOptions()") GatherOptions arg2); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work gather( + @StdVector TensorVector arg0, + @ByRef TensorVector arg1); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work scatter( + @ByRef TensorVector arg0, + @StdVector TensorVector arg1, + @Const @ByRef(nullValue = "c10d::ScatterOptions()") ScatterOptions arg2); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work scatter( + @ByRef TensorVector arg0, + @StdVector TensorVector arg1); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work reduce_scatter( + @ByRef TensorVector arg0, + @StdVector TensorVector arg1, + @Const @ByRef(nullValue = "c10d::ReduceScatterOptions()") ReduceScatterOptions arg2); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work reduce_scatter( + @ByRef TensorVector arg0, + @StdVector TensorVector arg1); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work _reduce_scatter_base( + @ByRef Tensor arg0, + @ByRef Tensor arg1, + @Const @ByRef(nullValue = "c10d::ReduceScatterOptions()") ReduceScatterOptions arg2); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work _reduce_scatter_base( + @ByRef Tensor arg0, + @ByRef Tensor arg1); + + // This function is a coalesced version of `reduce_scatter_tensor` (currently + // still named as `_reduce_scatter_base`). Each tensor in the vector + // corresponds to an input/output of one `reduce_scatter_tensor` operation. 
+ public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work reduce_scatter_tensor_coalesced( + @ByRef TensorVector arg0, + @ByRef TensorVector arg1, + @Const @ByRef(nullValue = "c10d::ReduceScatterOptions()") ReduceScatterOptions arg2); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work reduce_scatter_tensor_coalesced( + @ByRef TensorVector arg0, + @ByRef TensorVector arg1); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work alltoall_base( + @ByRef Tensor arg0, + @ByRef Tensor arg1, + @Cast("std::vector*") @ByRef LongVector arg2, + @Cast("std::vector*") @ByRef LongVector arg3, + @Const @ByRef(nullValue = "c10d::AllToAllOptions()") AllToAllOptions arg4); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work alltoall_base( + @ByRef Tensor arg0, + @ByRef Tensor arg1, + @Cast("std::vector*") @ByRef LongVector arg2, + @Cast("std::vector*") @ByRef LongVector arg3); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work alltoall( + @ByRef TensorVector arg0, + @ByRef TensorVector arg1, + @Const @ByRef(nullValue = "c10d::AllToAllOptions()") AllToAllOptions opts); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work alltoall( + @ByRef TensorVector arg0, + @ByRef TensorVector arg1); + + public native void monitoredBarrier( + @Const @ByRef BarrierOptions arg0, + @Cast("bool") boolean arg1/*=false*/); + public native void monitoredBarrier( + @Const @ByRef BarrierOptions arg0); + + // Agrees on an initial sequence number for the whole group by having rank 0 + // create it and broadcast it to other ranks using the store. Only implemented + // for GLOO and NCCL backends currently. + public native void setSequenceNumberForGroup(); + + // Retrieves the current sequence number for the whole group, which should be + // in sync. If the returned number is not consistent across the group, it + // may indicate that there is some sort of collective desynchronization. + public native @Cast("uint64_t") long getSequenceNumberForGroup(); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work send( + @ByRef TensorVector arg0, + int arg1, + int arg2); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work recv( + @ByRef TensorVector arg0, + int arg1, + int arg2); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work recvAnysource( + @ByRef TensorVector arg0, + int arg1); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work barrier( + @Const @ByRef(nullValue = "c10d::BarrierOptions()") BarrierOptions arg0); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work barrier(); + + public native void registerOnCompletionHook( + @ByRef(true) WorkInfoConsumer hook); + + public native void waitForPendingWorks(); + + public native void enableCollectivesTiming(); + + public native @Cast("bool") boolean hasHooks(); + + // Do not call this directly, use ProcessGroup::setGroupName instead. 
+ public native void setGroupName(@StdString BytePointer name); + public native void setGroupName(@StdString String name); + + public native @StdString BytePointer getGroupName(); + + public native void setGroupDesc(@StdString BytePointer desc); + public native void setGroupDesc(@StdString String desc); + + public native @StdString BytePointer getGroupDesc(); + + // See similar functions in ProcessGroup.hpp for context. + public native @ByVal DeviceOptional getBoundDeviceId(); + + // Perform an eager connect to the specified device if the backend supports + // it. + public native void eagerConnectSingleDevice(@ByVal Device device); + + public native void setBoundDeviceId(@ByVal DeviceOptional device); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DistributedBackendOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DistributedBackendOptional.java new file mode 100644 index 00000000000..66bac114fee --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DistributedBackendOptional.java @@ -0,0 +1,36 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + +@NoOffset @Name("std::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class DistributedBackendOptional extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ + public DistributedBackendOptional(Pointer p) { super(p); } + public DistributedBackendOptional(@Cast({"", "c10::intrusive_ptr&"}) DistributedBackend value) { this(); put(value); } + public DistributedBackendOptional() { allocate(); } + private native void allocate(); + public native @Name("operator =") @ByRef DistributedBackendOptional put(@ByRef DistributedBackendOptional x); + + public native boolean has_value(); + public native void reset(); + public native @Name("value") @IntrusivePtr("c10d::Backend") @Cast({"", "c10::intrusive_ptr&"}) DistributedBackend get(); + @ValueSetter public native DistributedBackendOptional put(@IntrusivePtr("c10d::Backend") @Cast({"", "c10::intrusive_ptr&"}) DistributedBackend value); +} + diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DistributedBackendOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DistributedBackendOptions.java new file mode 100644 index 00000000000..c295d6aff96 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DistributedBackendOptions.java @@ -0,0 +1,46 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +@Namespace("c10d") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class DistributedBackendOptions extends Pointer { + static { Loader.load(); } + /** Default native constructor. */ + public DistributedBackendOptions() { super((Pointer)null); allocate(); } + /** Native array allocator. Access with {@link Pointer#position(long)}. */ + public DistributedBackendOptions(long size) { super((Pointer)null); allocateArray(size); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ + public DistributedBackendOptions(Pointer p) { super(p); } + private native void allocate(); + private native void allocateArray(long size); + @Override public DistributedBackendOptions position(long position) { + return (DistributedBackendOptions)super.position(position); + } + @Override public DistributedBackendOptions getPointer(long i) { + return new DistributedBackendOptions((Pointer)this).offsetAddress(i); + } + + public native @IntrusivePtr("c10d::Store") @Cast({"", "c10::intrusive_ptr&"}) Store store(); public native DistributedBackendOptions store(Store setter); + public native int group_rank(); public native DistributedBackendOptions group_rank(int setter); + public native int group_size(); public native DistributedBackendOptions group_size(int setter); + public native @ByRef SecondsFloat timeout(); public native DistributedBackendOptions timeout(SecondsFloat setter); + public native @StdString BytePointer group_id(); public native DistributedBackendOptions group_id(BytePointer setter); + public native @ByRef @Cast("std::vector*") LongVector global_ranks_in_group(); public native DistributedBackendOptions global_ranks_in_group(LongVector setter); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DistributedRandomSampler.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DistributedRandomSampler.java index 28a34a9d70e..d20c7d8db7a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DistributedRandomSampler.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DistributedRandomSampler.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -42,7 +43,7 @@ private native void allocate( @Cast("size_t") long size); /** Resets the {@code DistributedRandomSampler} to a new set of indices. */ - public native void reset(@ByVal(nullValue = "c10::optional(c10::nullopt)") SizeTOptional new_size); + public native void reset(@ByVal(nullValue = "std::optional(c10::nullopt)") SizeTOptional new_size); public native void reset(); /** Returns the next batch of indices. 
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DistributedSampler.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DistributedSampler.java index ecdd52edd98..f4ce63357a5 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DistributedSampler.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DistributedSampler.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DistributedSequentialSampler.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DistributedSequentialSampler.java index 07ebbc7f716..0c175158825 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DistributedSequentialSampler.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DistributedSequentialSampler.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -41,7 +42,7 @@ private native void allocate( @Cast("size_t") long size); /** Resets the {@code DistributedSequentialSampler} to a new set of indices. */ - public native void reset(@ByVal(nullValue = "c10::optional(c10::nullopt)") SizeTOptional new_size); + public native void reset(@ByVal(nullValue = "std::optional(c10::nullopt)") SizeTOptional new_size); public native void reset(); /** Returns the next batch of indices. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Dots.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Dots.java index 8e3cadb236e..17ae7dcfdab 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Dots.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Dots.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,7 +25,7 @@ public class Dots extends Expr { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public Dots(Pointer p) { super(p); } - public Dots(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public Dots(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public static native @ByVal Dots create(@Const @ByRef SourceRange range); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleArrayRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleArrayRef.java index 7ca9b4d0570..cdbe69c1bf3 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleArrayRef.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleArrayRef.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleArrayRefOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleArrayRefOptional.java index e057fbcf6b1..1ed8b92e687 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleArrayRefOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleArrayRefOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class DoubleArrayRefOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleComplex.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleComplex.java index 626e27952f3..90b0abdd395 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleComplex.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleComplex.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleComplexArrayRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleComplexArrayRef.java index ba8803704de..da0389cd1b3 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleComplexArrayRef.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleComplexArrayRef.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleComplexElementReference.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleComplexElementReference.java index e310dff0489..500e9fe4dbf 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleComplexElementReference.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleComplexElementReference.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,7 +25,7 @@ public class DoubleComplexElementReference extends Pointer { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public DoubleComplexElementReference(Pointer p) { super(p); } - public native @Name("operator std::conditional_t >::type>::value,const c10::complex&,c10::complex >") @ByVal DoubleComplex getDoubleComplex(); + public native @Name("operator std::conditional_t >::type>,const c10::complex&,c10::complex >") @ByVal DoubleComplex getDoubleComplex(); @@ -35,7 +36,7 @@ public class DoubleComplexElementReference extends Pointer { public native @Const @ByRef IValue get(); - private static native @Namespace void swap(@ByRef(true) DoubleComplexElementReference lhs, @ByRef(true) DoubleComplexElementReference rhs); + private static native @Namespace @NoException(true) void swap(@ByRef(true) DoubleComplexElementReference lhs, @ByRef(true) DoubleComplexElementReference rhs); public void swap(DoubleComplexElementReference rhs) { swap(this, rhs); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleComplexList.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleComplexList.java index 976c6baa857..83222e14d4e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleComplexList.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleComplexList.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -34,7 +35,7 @@ * and switch out the underlying list implementation without * breaking backwards compatibility for the kernel API. */ -@Name("c10::List >") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@Name("c10::List >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class DoubleComplexList extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleComplexListIterator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleComplexListIterator.java index 097ff3b1fb8..fb303e4bf10 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleComplexListIterator.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleComplexListIterator.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleElementReference.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleElementReference.java index b50ba7021be..215fc9635e5 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleElementReference.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleElementReference.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,7 +25,7 @@ public class DoubleElementReference extends Pointer { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public DoubleElementReference(Pointer p) { super(p); } - public native @Name("operator std::conditional_t::type>::value,const double&,double>") double getDouble(); + public native @Name("operator std::conditional_t::type>,const double&,double>") double getDouble(); @@ -35,7 +36,7 @@ public class DoubleElementReference extends Pointer { public native @Const @ByRef IValue get(); - private static native @Namespace void swap(@ByRef(true) DoubleElementReference lhs, @ByRef(true) DoubleElementReference rhs); + private static native @Namespace @NoException(true) void swap(@ByRef(true) DoubleElementReference lhs, @ByRef(true) DoubleElementReference rhs); public void swap(DoubleElementReference rhs) { swap(this, rhs); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleExpandingArrayOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleExpandingArrayOptional.java index d60fb062e8e..851378a3391 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleExpandingArrayOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleExpandingArrayOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class DoubleExpandingArrayOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleList.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleList.java index b7435241b6e..59f7ec5e4a5 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleList.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleList.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@Name("c10::List") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@Name("c10::List") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class DoubleList extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleListIterator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleListIterator.java index a0aa8912f85..c05d4271682 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleListIterator.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleListIterator.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleOptional.java index 3401c413f0a..7f64db3bf88 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class DoubleOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
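Across these Double* wrappers the only generated-code change is the template name inside the @Name annotation, c10::optional<...> becoming std::optional<...>, so existing callers are unaffected. As a small illustration of the wrapper API, a sketch assuming DoubleOptional exposes the same has_value()/get()/put() trio as the DistributedBackendOptional shown earlier in this diff (its member list is not part of this hunk):

```java
import org.bytedeco.pytorch.DoubleOptional;

public class OptionalSketch {
    public static void main(String[] args) {
        DoubleOptional dropoutP = new DoubleOptional(); // empty std::optional<double>
        if (!dropoutP.has_value()) {
            dropoutP.put(0.5);                          // engage the optional
        }
        System.out.println(dropoutP.get());             // prints 0.5
    }
}
```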
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleVector.java index 9493bc6359b..6b76431a2bf 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleVectorOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleVectorOptional.java index 65afa08f024..a89f4aa0563 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleVectorOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DoubleVectorOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class DoubleVectorOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Dropout2dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Dropout2dImpl.java index 74303a0ce19..03f76be4cb5 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Dropout2dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Dropout2dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Dropout2d ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies dropout over a 2-D input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.Dropout2d to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.Dropout2d to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::Dropout2dOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Dropout2dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Dropout2dImplBase.java index 3c313585b0e..564fe4c6b3c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Dropout2dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Dropout2dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Dropout2dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Dropout2dImplCloneable.java index 2002aaf28d7..6069088dde1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Dropout2dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Dropout2dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class Dropout2dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Dropout3dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Dropout3dImpl.java index 1054388faea..52ecf18945e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Dropout3dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Dropout3dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Dropout3d ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies dropout over a 3-D input. 
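The matching change in the *ImplCloneable classes touches only the default-argument string of clone(): the nullValue expression is now spelled std::optional<torch::Device>(c10::nullopt), while both Java overloads, clone() and clone(DeviceOptional), keep their signatures. A sketch of the call pattern, taking the module as a parameter since its construction is not part of these hunks, and assuming DeviceOptional has the usual no-argument constructor producing an empty optional:

```java
import org.bytedeco.pytorch.DeviceOptional;
import org.bytedeco.pytorch.Dropout2dImplCloneable;
import org.bytedeco.pytorch.Module;

public class CloneSketch {
    /** clone() deep-copies parameters and buffers; the overload can pin the copy to a device. */
    static Module copyOf(Dropout2dImplCloneable m) {
        // Passing an empty DeviceOptional is equivalent to the new std::optional default.
        return m.clone(new DeviceOptional());
    }

    static Module copyOfWithDefault(Dropout2dImplCloneable m) {
        return m.clone(); // same result, relying on the default argument
    }
}
```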
- * See https://pytorch.org/docs/master/nn.html#torch.nn.Dropout3d to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.Dropout3d to learn * about the exact behavior of this module. * * See the documentation for {@code torch::nn::Dropout3dOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Dropout3dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Dropout3dImplBase.java index f44841f3784..4d113cc9ba6 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Dropout3dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Dropout3dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Dropout3dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Dropout3dImplCloneable.java index e351b4888c4..e45b7160813 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Dropout3dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Dropout3dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class Dropout3dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DropoutFuncOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DropoutFuncOptions.java index c365357e174..69ca71c864f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DropoutFuncOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DropoutFuncOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DropoutImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DropoutImpl.java index a6385ff3d06..296ac39e8f2 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DropoutImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DropoutImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace detail @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Dropout ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies dropout over a 1-D input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.Dropout to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.Dropout to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::DropoutOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DropoutImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DropoutImplBase.java index bec7b497bb7..97f1c7817a4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DropoutImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DropoutImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DropoutImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DropoutImplCloneable.java index 58ba40489a7..4a0a0ebfea8 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DropoutImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DropoutImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class DropoutImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DropoutOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DropoutOptions.java index 1301324ff98..f8b0f02f6aa 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DropoutOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DropoutOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DynamicLibrary.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DynamicLibrary.java index a7b641d7013..5e04728be02 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DynamicLibrary.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DynamicLibrary.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DynamoTensorArg.java b/pytorch/src/gen/java/org/bytedeco/pytorch/DynamoTensorArg.java new file mode 100644 index 00000000000..5ba68305bde --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/DynamoTensorArg.java @@ -0,0 +1,37 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +@Name("torch::dynamo::autograd::TensorArg") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class DynamoTensorArg extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ + public DynamoTensorArg(Pointer p) { super(p); } + + // Represents a de-duplicated tensor that will be passed into the graph + public DynamoTensorArg(@Cast("uint32_t") int i/*=0*/) { super((Pointer)null); allocate(i); } + private native void allocate(@Cast("uint32_t") int i/*=0*/); + public DynamoTensorArg() { super((Pointer)null); allocate(); } + private native void allocate(); + public native @Cast("uint32_t") int index(); + public native @Cast("bool") boolean defined(); + public native @Cast("uint32_t") int id(); public native DynamoTensorArg id(int setter); + public native @ByRef Tensor proxy_tensor(); public native DynamoTensorArg proxy_tensor(Tensor setter); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ELUImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ELUImpl.java index 179ef0707c6..21d25270217 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ELUImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ELUImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ELU ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies elu over a given input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.ELU to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.ELU to learn * about the exact behavior of this module. * * See the documentation for {@code torch::nn::ELUOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ELUImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ELUImplCloneable.java index beb5a1a7dba..4bf29a55373 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ELUImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ELUImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class ELUImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
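DynamoTensorArg, added a few hunks above, is another class newly exposed by this update (torch::dynamo::autograd::TensorArg, used by compiled autograd to track de-duplicated graph inputs). Its accessors are all visible in the new file; a minimal sketch, with the index value chosen arbitrarily for illustration:

```java
import org.bytedeco.pytorch.DynamoTensorArg;

public class TensorArgSketch {
    public static void main(String[] args) {
        DynamoTensorArg arg = new DynamoTensorArg(3); // uint32_t index of the de-duplicated tensor
        System.out.println("index=" + arg.index() + ", defined=" + arg.defined());
        arg.id(42); // id is a plain read/write field in the generated wrapper
    }
}
```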
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ELUOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ELUOptions.java index ed8f9223921..d970ca852ab 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ELUOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ELUOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Edge.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Edge.java index ca90e7ac0d9..f3320ff7cd3 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Edge.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Edge.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/EdgeVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/EdgeVector.java index bdbd027698b..bec96c237be 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/EdgeVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/EdgeVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/EllipsisIndexType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/EllipsisIndexType.java index 21c2f74df2a..3042f3de906 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/EllipsisIndexType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/EllipsisIndexType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static 
org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingBagFromPretrainedOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingBagFromPretrainedOptions.java index 269d14e79ca..2256e616f81 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingBagFromPretrainedOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingBagFromPretrainedOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingBagFuncOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingBagFuncOptions.java index bad4e4597c9..8990ce68628 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingBagFuncOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingBagFuncOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingBagImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingBagImpl.java index 35275965833..246c5ae074e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingBagImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingBagImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -23,7 +24,7 @@ /** Computes sums or means of 'bags' of embeddings, without instantiating the * intermediate embeddings. - * See https://pytorch.org/docs/master/nn.html#torch.nn.EmbeddingBag to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.EmbeddingBag to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::EmbeddingBagOptions} class to learn diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingBagImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingBagImplCloneable.java index a453b93f3e7..c94ed7aaf50 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingBagImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingBagImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class EmbeddingBagImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingBagMode.java b/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingBagMode.java index 7e90f5aa280..cf05a75f47d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingBagMode.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingBagMode.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingBagOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingBagOptions.java index c2bcb58fada..5ef1e7812a9 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingBagOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingBagOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingFromPretrainedOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingFromPretrainedOptions.java index e8dee41ce6c..4ff3ab59ceb 100644 --- 
a/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingFromPretrainedOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingFromPretrainedOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingFuncOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingFuncOptions.java index 7ae841e4fb3..2d598d35162 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingFuncOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingFuncOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingImpl.java index 1ae10adff45..ba88f0bd894 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -22,7 +23,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Performs a lookup in a fixed size embedding table. - * See https://pytorch.org/docs/master/nn.html#torch.nn.Embedding to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.Embedding to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::EmbeddingOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingImplCloneable.java index 50645956a6c..9d051b7ccef 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class EmbeddingImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingOptions.java index c26f344cf50..4c79b030752 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/EmbeddingOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/EnableProfilingGuard.java b/pytorch/src/gen/java/org/bytedeco/pytorch/EnableProfilingGuard.java index e7b2adf0f78..f5af9aded3b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/EnableProfilingGuard.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/EnableProfilingGuard.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/EnabledStr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/EnabledStr.java index 9888ea7dcea..d8705a0967f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/EnabledStr.java +++ 
b/pytorch/src/gen/java/org/bytedeco/pytorch/EnabledStr.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/EnforceFiniteError.java b/pytorch/src/gen/java/org/bytedeco/pytorch/EnforceFiniteError.java deleted file mode 100644 index db594231c95..00000000000 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/EnforceFiniteError.java +++ /dev/null @@ -1,29 +0,0 @@ -// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE - -package org.bytedeco.pytorch; - -import org.bytedeco.pytorch.Allocator; -import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; -import org.bytedeco.pytorch.Module; -import org.bytedeco.javacpp.annotation.Cast; -import java.nio.*; -import org.bytedeco.javacpp.*; -import org.bytedeco.javacpp.annotation.*; - -import static org.bytedeco.javacpp.presets.javacpp.*; -import static org.bytedeco.openblas.global.openblas_nolapack.*; -import static org.bytedeco.openblas.global.openblas.*; - -import static org.bytedeco.pytorch.global.torch.*; - - -// Used in ATen for non finite indices. These turn into -// ExitException when they cross to Python. -@Namespace("c10") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class EnforceFiniteError extends Error { - static { Loader.load(); } - /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ - public EnforceFiniteError(Pointer p) { super(p); } - -} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/EnumHolder.java b/pytorch/src/gen/java/org/bytedeco/pytorch/EnumHolder.java index c93e6c76e0e..3a06add40a3 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/EnumHolder.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/EnumHolder.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -25,9 +26,9 @@ public class EnumHolder extends Pointer { public EnumHolder(Pointer p) { super(p); } public EnumHolder(@SharedPtr EnumType type, @StdString BytePointer name, @ByVal IValue value) { super((Pointer)null); allocate(type, name, value); } - private native void allocate(@SharedPtr EnumType type, @StdString BytePointer name, @ByVal IValue value); + @IntrusivePtr @Name("c10::make_intrusive") private native void allocate(@SharedPtr EnumType type, @StdString BytePointer name, @ByVal IValue value); public EnumHolder(@SharedPtr EnumType type, @StdString String name, @ByVal IValue value) { super((Pointer)null); allocate(type, name, value); } - private native void allocate(@SharedPtr EnumType type, @StdString String name, @ByVal IValue value); + @IntrusivePtr @Name("c10::make_intrusive") private native void allocate(@SharedPtr EnumType type, @StdString String name, @ByVal IValue value); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/EnumHolderPtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/EnumHolderPtr.java deleted file mode 100644 index 69763f3177b..00000000000 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/EnumHolderPtr.java +++ /dev/null @@ -1,154 +0,0 @@ -// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE - -package org.bytedeco.pytorch; - -import org.bytedeco.pytorch.Allocator; -import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; -import org.bytedeco.pytorch.Module; -import org.bytedeco.javacpp.annotation.Cast; -import java.nio.*; -import org.bytedeco.javacpp.*; -import org.bytedeco.javacpp.annotation.*; - -import static org.bytedeco.javacpp.presets.javacpp.*; -import static org.bytedeco.openblas.global.openblas_nolapack.*; -import static org.bytedeco.openblas.global.openblas.*; - -import static org.bytedeco.pytorch.global.torch.*; - - -@Name("c10::intrusive_ptr") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class EnumHolderPtr extends Pointer { - static { Loader.load(); } - /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ - public EnumHolderPtr(Pointer p) { super(p); } - /** Native array allocator. Access with {@link Pointer#position(long)}. 
*/ - public EnumHolderPtr(long size) { super((Pointer)null); allocateArray(size); } - private native void allocateArray(long size); - @Override public EnumHolderPtr position(long position) { - return (EnumHolderPtr)super.position(position); - } - @Override public EnumHolderPtr getPointer(long i) { - return new EnumHolderPtr((Pointer)this).offsetAddress(i); - } - - - public EnumHolderPtr() { super((Pointer)null); allocate(); } - @NoException(true) private native void allocate(); - - public EnumHolderPtr(@ByVal @Cast("std::nullptr_t*") PointerPointer arg0) { super((Pointer)null); allocate(arg0); } - @NoException(true) private native void allocate(@ByVal @Cast("std::nullptr_t*") PointerPointer arg0); - - // This constructor will not increase the ref counter for you. - // We use the tagged dispatch mechanism to explicitly mark this constructor - // to not increase the refcount - public EnumHolderPtr(EnumHolder target, @ByVal DontIncreaseRefcount arg1) { super((Pointer)null); allocate(target, arg1); } - @NoException(true) private native void allocate(EnumHolder target, @ByVal DontIncreaseRefcount arg1); - - - - public EnumHolderPtr(@ByRef(true) EnumHolderPtr rhs) { super((Pointer)null); allocate(rhs); } - @NoException(true) private native void allocate(@ByRef(true) EnumHolderPtr rhs); - - public native @ByRef @Name("operator =") @NoException(true) EnumHolderPtr put(@ByRef(true) EnumHolderPtr rhs); - - // Assignment is implemented using copy and swap. That's safe for self - // assignment. - // NOLINTNEXTLINE(bugprone-unhandled-self-assignment) - - public native @NoException(true) EnumHolder get(); - - public native @ByRef @Name("operator *") @NoException(true) EnumHolder multiply(); - - public native @Name("operator ->") @NoException(true) EnumHolder access(); - - public native @Cast("bool") @Name("operator bool") @NoException(true) boolean asBoolean(); - - public native @NoException(true) void reset(); - - public native @NoException(true) void swap(@ByRef EnumHolderPtr rhs); - - // We do a lot of null-pointer checks in our code, good to have this be cheap. - public native @Cast("bool") @NoException(true) boolean defined(); - - public native @Cast("uint32_t") @NoException(true) int use_count(); - - public native @Cast("uint32_t") @NoException(true) int weak_use_count(); - - public native @Cast("bool") @NoException(true) boolean unique(); - - /** - * Returns an owning (!) pointer to the underlying object and makes the - * intrusive_ptr instance invalid. That means the refcount is not decreased. - * You *must* put the returned pointer back into a intrusive_ptr using - * intrusive_ptr::reclaim(ptr) to properly destruct it. - * This is helpful for C APIs. - */ - public native @NoException(true) EnumHolder release(); - - /** - * Takes an owning pointer to TTarget* and creates an intrusive_ptr that takes - * over ownership. That means the refcount is not increased. - * This is the counter-part to intrusive_ptr::release() and the pointer - * passed in *must* have been created using intrusive_ptr::release(). - */ - public static native @ByVal EnumHolderPtr reclaim(EnumHolder owning_ptr); - - /** - * Takes an owning pointer to TTarget* and creates an intrusive_ptr - * representing a new reference, i.e. the raw pointer retains - * ownership. - */ - public static native @ByVal EnumHolderPtr reclaim_copy(EnumHolder owning_ptr); - - /** - * Allocate a heap object with args and wrap it inside a intrusive_ptr and - * incref. 
This is a helper function to let make_intrusive() access private - * intrusive_ptr constructors. - */ - - /** - * Turn a new instance of TTarget (e.g., literally allocated - * using new TTarget(...) into an intrusive_ptr. If possible, - * use intrusive_ptr::make instead which statically guarantees - * that the allocation was done properly. - * - * At the moment, the only reason this method exists is because - * pybind11 holder types expect to be able to allocate in - * this way (because pybind11 handles the new allocation itself). - */ - public static native @ByVal EnumHolderPtr unsafe_steal_from_new(EnumHolder raw_ptr); - - /** - * Turn an instance of TTarget that should not be reference counted - * (e.g., allocated into an arena with placement new) into an - * intrusive_ptr. This is gratuitously unsafe and should only be - * used if you can guarantee that the pointer will not escape and be - * refcounted as normal. - * - * {@code expected_decrefs} is a debugging parameter: it indicates the - * number of strong owners the intrusive_ptr_target in question is - * expected to get. In most use cases, this will likely be 1. - * - * The reason this method exists is for manually sharing - * StorageImpls across Tensors in the static runtime. It needs - * access to private intrusive_ptr members so that the refcounts can - * be initialized to custom values. - */ - public static native @ByVal EnumHolderPtr unsafe_adapt_non_heap_allocated( - EnumHolder raw_ptr, - @Cast("uint32_t") int expected_decrefs); - - /** - * Turn a **non-owning raw pointer** to an intrusive_ptr. It is - * the moral equivalent of enable_shared_from_this on a shared pointer. - * - * This method is only valid for objects that are already live. If - * you are looking for the moral equivalent of unique_ptr(T*) - * constructor, see steal_from_new. 
- * - * TODO: https://github.com/pytorch/pytorch/issues/56482 - */ - public static native @ByVal EnumHolderPtr unsafe_reclaim_from_nonowning(EnumHolder raw_ptr); -} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/EnumNameValue.java b/pytorch/src/gen/java/org/bytedeco/pytorch/EnumNameValue.java index 5add36fa810..137239f35ba 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/EnumNameValue.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/EnumNameValue.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/EnumNameValueArrayRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/EnumNameValueArrayRef.java index c20bbde9aa0..500435a9071 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/EnumNameValueArrayRef.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/EnumNameValueArrayRef.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/EnumType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/EnumType.java index 99079ab9316..73067822048 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/EnumType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/EnumType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -25,6 +26,12 @@ public class EnumType extends NamedType { @MemberGetter public static native TypeKind Kind(); + public static native @SharedPtr EnumType create( + @Const @ByRef QualifiedName qualified_class_name, + @ByVal Type.TypePtr value, + @StdVector EnumNameValue enum_names_values, + @WeakPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit cu); + public native @StdString BytePointer str(); public native @StdString BytePointer repr_str(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Error.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Error.java deleted file mode 100644 index 387bf1d4910..00000000000 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Error.java +++ /dev/null @@ -1,65 +0,0 @@ -// Targeted by 
JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE - -package org.bytedeco.pytorch; - -import org.bytedeco.pytorch.Allocator; -import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; -import org.bytedeco.pytorch.Module; -import org.bytedeco.javacpp.annotation.Cast; -import java.nio.*; -import org.bytedeco.javacpp.*; -import org.bytedeco.javacpp.annotation.*; - -import static org.bytedeco.javacpp.presets.javacpp.*; -import static org.bytedeco.openblas.global.openblas_nolapack.*; -import static org.bytedeco.openblas.global.openblas.*; - -import static org.bytedeco.pytorch.global.torch.*; - - -/** The primary ATen error class. - * Provides a complete error message with source location information via - * {@code what()}, and a more concise message via {@code what_without_backtrace()}. - * Don't throw this directly; use TORCH_CHECK/TORCH_INTERNAL_ASSERT instead. - * - * NB: c10::Error is handled specially by the default torch to suppress the - * backtrace, see torch/csrc/Exceptions.h */ -@Namespace("c10") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class Error extends Pointer { - static { Loader.load(); } - /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ - public Error(Pointer p) { super(p); } - - // PyTorch-style Error constructor. NB: the implementation of this - // is actually in Logging.cpp - - // Caffe2-style error message - - // Base constructor - - // Add some new context to the message stack. The last added context - // will be formatted at the end of the context list upon printing. - // WARNING: This method is O(n) in the size of the stack, so don't go - // wild adding a ridiculous amount of context to error messages. - public native void add_context(@StdString BytePointer msg); - public native void add_context(@StdString String msg); - - public native @StdString BytePointer msg(); - - public native @Const @ByRef StringVector context(); - - public native @StdString BytePointer backtrace(); - - /** Returns the complete error message, including the source location. - * The returned pointer is invalidated if you call add_context() on - * this object. */ - public native @NoException(true) @Cast("const char*") BytePointer what(); - - public native @Const @NoException(true) Pointer caller(); - - /** Returns only the error message string, without source location. - * The returned pointer is invalidated if you call add_context() on - * this object. 
*/ - public native @NoException(true) @Cast("const char*") BytePointer what_without_backtrace(); -} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ErrorAlwaysShowCppStacktrace.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ErrorAlwaysShowCppStacktrace.java deleted file mode 100644 index ab4dcba52f1..00000000000 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ErrorAlwaysShowCppStacktrace.java +++ /dev/null @@ -1,29 +0,0 @@ -// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE - -package org.bytedeco.pytorch; - -import org.bytedeco.pytorch.Allocator; -import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; -import org.bytedeco.pytorch.Module; -import org.bytedeco.javacpp.annotation.Cast; -import java.nio.*; -import org.bytedeco.javacpp.*; -import org.bytedeco.javacpp.annotation.*; - -import static org.bytedeco.javacpp.presets.javacpp.*; -import static org.bytedeco.openblas.global.openblas_nolapack.*; -import static org.bytedeco.openblas.global.openblas.*; - -import static org.bytedeco.pytorch.global.torch.*; - // namespace WarningUtils - -// Like Error, but we always report the C++ backtrace, instead of only -// reporting when TORCH_SHOW_CPP_STACKTRACES -@Namespace("c10") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class ErrorAlwaysShowCppStacktrace extends Error { - static { Loader.load(); } - /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ - public ErrorAlwaysShowCppStacktrace(Pointer p) { super(p); } - -} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ErrorReport.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ErrorReport.java deleted file mode 100644 index e1d4147586d..00000000000 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ErrorReport.java +++ /dev/null @@ -1,58 +0,0 @@ -// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE - -package org.bytedeco.pytorch; - -import org.bytedeco.pytorch.Allocator; -import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; -import org.bytedeco.pytorch.Module; -import org.bytedeco.javacpp.annotation.Cast; -import java.nio.*; -import org.bytedeco.javacpp.*; -import org.bytedeco.javacpp.annotation.*; - -import static org.bytedeco.javacpp.presets.javacpp.*; -import static org.bytedeco.openblas.global.openblas_nolapack.*; -import static org.bytedeco.openblas.global.openblas.*; - -import static org.bytedeco.pytorch.global.torch.*; - - -@Namespace("torch::jit") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class ErrorReport extends Pointer { - static { Loader.load(); } - /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ - public ErrorReport(Pointer p) { super(p); } - - public ErrorReport(@Const @ByRef ErrorReport e) { super((Pointer)null); allocate(e); } - private native void allocate(@Const @ByRef ErrorReport e); - - public ErrorReport(@ByVal SourceRange r) { super((Pointer)null); allocate(r); } - private native void allocate(@ByVal SourceRange r); - public ErrorReport(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); - public ErrorReport(@Const @ByRef Token tok) { super((Pointer)null); allocate(tok); } - private native void allocate(@Const @ByRef Token tok); - - public native @NoException(true) @Cast("const char*") BytePointer what(); - - public static class CallStack extends Pointer { - static { Loader.load(); } - /** Pointer cast constructor. 
Invokes {@link Pointer#Pointer(Pointer)}. */ - public CallStack(Pointer p) { super(p); } - - // These functions are used to report why a function was being compiled - // (i.e. what was the call stack of user functions at compilation time that - // led to this error) - public CallStack(@StdString BytePointer name, @Const @ByRef SourceRange range) { super((Pointer)null); allocate(name, range); } - private native void allocate(@StdString BytePointer name, @Const @ByRef SourceRange range); - public CallStack(@StdString String name, @Const @ByRef SourceRange range) { super((Pointer)null); allocate(name, range); } - private native void allocate(@StdString String name, @Const @ByRef SourceRange range); - - // Change the range that is relevant for the current function (i.e. after - // each successful expression compilation, change it to the next expression) - public static native void update_pending_range(@Const @ByRef SourceRange range); - } - - public static native @StdString BytePointer current_call_stack(); -} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Example.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Example.java index cd956c142c9..48f7d498f65 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Example.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Example.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ExampleCollation.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ExampleCollation.java index 7fd3ffb9c25..a2938d615b8 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ExampleCollation.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ExampleCollation.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ExampleIterator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ExampleIterator.java index 6ac7af85fdc..55747527dbb 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ExampleIterator.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ExampleIterator.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; 
+import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace detail diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ExampleOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ExampleOptional.java index 27c991fb625..c88bebe5404 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ExampleOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ExampleOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class ExampleOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ExampleVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ExampleVector.java index 3ccbaf22ff1..ecca92ef0f5 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ExampleVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ExampleVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ExampleVectorIterator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ExampleVectorIterator.java index 719e4936861..0955aa31c64 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ExampleVectorIterator.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ExampleVectorIterator.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ExampleVectorOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ExampleVectorOptional.java index f05953814d1..12e04d4c83e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ExampleVectorOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ExampleVectorOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import 
org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional > >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional > >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class ExampleVectorOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ExceptionMessageValue.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ExceptionMessageValue.java index 71623f605e9..403789c45cb 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ExceptionMessageValue.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ExceptionMessageValue.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ExceptionValue.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ExceptionValue.java index c03389fddae..02cf25f6abc 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ExceptionValue.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ExceptionValue.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ExecutionPlan.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ExecutionPlan.java index b1ce937d048..264536a6e40 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ExecutionPlan.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ExecutionPlan.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git 
a/pytorch/src/gen/java/org/bytedeco/pytorch/ExecutorExecutionModeOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ExecutorExecutionModeOptional.java index b8d26483818..bbef4475b43 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ExecutorExecutionModeOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ExecutorExecutionModeOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class ExecutorExecutionModeOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ExperimentalConfig.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ExperimentalConfig.java index 6fdfd2ce0e6..64f7ab6a0bc 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ExperimentalConfig.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ExperimentalConfig.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Expr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Expr.java index 1c65d0402d5..fea399b18a6 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Expr.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Expr.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,6 +25,6 @@ public class Expr extends TreeView { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public Expr(Pointer p) { super(p); } - public Expr(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public Expr(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ExprList.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ExprList.java index 8b1650910c3..6391f8ce843 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ExprList.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ExprList.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -25,8 +26,8 @@ public class ExprList extends TreeView { public ExprList(Pointer p) { super(p); } - public ExprList(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public ExprList(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @ByVal @Cast("torch::jit::List::iterator*") ExprListIterator begin(); public native @ByVal @Cast("torch::jit::List::iterator*") ExprListIterator end(); public native @Cast("bool") boolean empty(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ExprListIterator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ExprListIterator.java index 5ee012a0567..dfe0fe4b9fb 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ExprListIterator.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ExprListIterator.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,8 +25,8 @@ public class ExprListIterator extends Pointer { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public ExprListIterator(Pointer p) { super(p); } - public ExprListIterator(@ByVal @Cast("torch::jit::TreeList::const_iterator*") TreeRef it) { super((Pointer)null); allocate(it); } - private native void allocate(@ByVal @Cast("torch::jit::TreeList::const_iterator*") TreeRef it); + public ExprListIterator(@ByVal @Cast("torch::jit::TreeList::const_iterator*") Tree it) { super((Pointer)null); allocate(it); } + private native void allocate(@ByVal @Cast("torch::jit::TreeList::const_iterator*") Tree it); public native @Cast("bool") @Name("operator !=") boolean notEquals(@Const @ByRef ExprListIterator rhs); public native @Cast("bool") @Name("operator ==") boolean equals(@Const @ByRef ExprListIterator rhs); public native @ByVal @Name("operator *") Expr multiply(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ExprMaybe.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ExprMaybe.java index 99c13ec7bdc..1354028055e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ExprMaybe.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ExprMaybe.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,8 +25,8 @@ public class ExprMaybe extends TreeView { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ public ExprMaybe(Pointer p) { super(p); } - public ExprMaybe(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public ExprMaybe(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); /* implicit */ public ExprMaybe(@Const @ByRef Expr tree) { super((Pointer)null); allocate(tree); } private native void allocate(@Const @ByRef Expr tree); public native @Cast("bool") boolean present(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ExprStmt.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ExprStmt.java index fbc74c82fb4..b2cb516af0f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ExprStmt.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ExprStmt.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,8 +25,8 @@ public class ExprStmt extends Stmt { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public ExprStmt(Pointer p) { super(p); } - public ExprStmt(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public ExprStmt(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @ByVal Expr expr(); public static native @ByVal ExprStmt create(@Const @ByRef SourceRange range, @Const @ByRef Expr list); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ExtraFilesMap.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ExtraFilesMap.java index cf0b84c1bff..858e0695e72 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ExtraFilesMap.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ExtraFilesMap.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FanModeType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FanModeType.java index 0a3ca7649f7..2da3e3a502a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/FanModeType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FanModeType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FeatureAlphaDropoutFuncOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FeatureAlphaDropoutFuncOptions.java index b2d9e41ddf2..61195a018e9 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/FeatureAlphaDropoutFuncOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FeatureAlphaDropoutFuncOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FeatureAlphaDropoutImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FeatureAlphaDropoutImpl.java index 16ca75de041..6c2e4da6ac7 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/FeatureAlphaDropoutImpl.java +++ 
b/pytorch/src/gen/java/org/bytedeco/pytorch/FeatureAlphaDropoutImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FeatureAlphaDropoutImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FeatureAlphaDropoutImplBase.java index 33a4d40aa2c..ea39419697c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/FeatureAlphaDropoutImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FeatureAlphaDropoutImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FeatureAlphaDropoutImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FeatureAlphaDropoutImplCloneable.java index 978d590532d..fdc119d70d6 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/FeatureAlphaDropoutImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FeatureAlphaDropoutImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class FeatureAlphaDropoutImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FileLineFunc.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FileLineFunc.java index 5d857d712fe..01f8352d46a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/FileLineFunc.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FileLineFunc.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FlattenImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FlattenImpl.java index f289d677db2..df91eecf4e6 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/FlattenImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FlattenImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Flatten ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** A placeholder for Flatten operator - * See https://pytorch.org/docs/master/generated/torch.nn.Flatten.html to learn + * See https://pytorch.org/docs/main/generated/torch.nn.Flatten.html to learn * about the exact behavior of this module. * * See the documentation for {@code torch::nn::FlattenOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FlattenImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FlattenImplCloneable.java index e86931bbce1..ba5fecfc3fc 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/FlattenImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FlattenImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class FlattenImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FlattenOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FlattenOptions.java index 2aa758f331f..57bdd02823d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/FlattenOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FlattenOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Float8_e4m3fn.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Float8_e4m3fn.java index 41f33f9683c..12d3a3fd0c5 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Float8_e4m3fn.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Float8_e4m3fn.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace detail diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Float8_e4m3fnuz.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Float8_e4m3fnuz.java index 6bbbdaca43a..6f4406a12b5 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Float8_e4m3fnuz.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Float8_e4m3fnuz.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace detail diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Float8_e5m2.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Float8_e5m2.java index a644d80cb21..5f3bbe1dc2e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Float8_e5m2.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Float8_e5m2.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import 
static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace detail diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Float8_e5m2fnuz.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Float8_e5m2fnuz.java index 8d7862155bb..fc2c1a87d6d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Float8_e5m2fnuz.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Float8_e5m2fnuz.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace detail diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FloatArrayRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FloatArrayRef.java index 8b7e990794f..46ea6eee7e6 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/FloatArrayRef.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FloatArrayRef.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FloatComplex.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FloatComplex.java index 2688414f17e..c397a4c4e9e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/FloatComplex.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FloatComplex.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FloatComplexArrayRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FloatComplexArrayRef.java index a45083e7b55..5d323268b0c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/FloatComplexArrayRef.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FloatComplexArrayRef.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import 
java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FloatOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FloatOptional.java index a145e36197b..25d87019056 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/FloatOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FloatOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class FloatOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FloatType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FloatType.java index ba7faf2d5fc..5fa255838ee 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/FloatType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FloatType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FloatTypePtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FloatTypePtr.java index e31153b746b..0caf191030a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/FloatTypePtr.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FloatTypePtr.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FoldImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FoldImpl.java index d8eafe57d2b..c706a5b2d7b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/FoldImpl.java +++ 
b/pytorch/src/gen/java/org/bytedeco/pytorch/FoldImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,12 +13,14 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; /** Applies fold over a 3-D input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.Fold to learn about + * See https://pytorch.org/docs/main/nn.html#torch.nn.Fold to learn about * the exact behavior of this module. * * See the documentation for {@code torch::nn::FoldOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FoldImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FoldImplCloneable.java index d5e1ffc32fb..6f9386cd223 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/FoldImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FoldImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class FoldImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FoldOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FoldOptions.java index 92de1c40b93..a5823a02670 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/FoldOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FoldOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/For.java b/pytorch/src/gen/java/org/bytedeco/pytorch/For.java index 76149ae488b..c1ac6a01b0a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/For.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/For.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,8 +25,8 @@ public class For extends Stmt { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public For(Pointer p) { super(p); } - public For(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public For(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @ByVal ExprList targets(); public native @ByVal ExprList itrs(); public native @ByVal StmtList body(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ForceDispatchKeyGuard.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ForceDispatchKeyGuard.java index d4c24a2a5ef..6448ee458e9 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ForceDispatchKeyGuard.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ForceDispatchKeyGuard.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -23,7 +24,18 @@ public class ForceDispatchKeyGuard extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ public ForceDispatchKeyGuard(Pointer p) { super(p); } + /** Native array allocator. Access with {@link Pointer#position(long)}. */ + public ForceDispatchKeyGuard(long size) { super((Pointer)null); allocateArray(size); } + private native void allocateArray(long size); + @Override public ForceDispatchKeyGuard position(long position) { + return (ForceDispatchKeyGuard)super.position(position); + } + @Override public ForceDispatchKeyGuard getPointer(long i) { + return new ForceDispatchKeyGuard((Pointer)this).offsetAddress(i); + } + public ForceDispatchKeyGuard() { super((Pointer)null); allocate(); } + private native void allocate(); public ForceDispatchKeyGuard(@ByVal LocalDispatchKeySet key_set) { super((Pointer)null); allocate(key_set); } private native void allocate(@ByVal LocalDispatchKeySet key_set); public ForceDispatchKeyGuard( diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ForwardADLevel.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ForwardADLevel.java index 28da57541b8..8353a80502e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ForwardADLevel.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ForwardADLevel.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ForwardGrad.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ForwardGrad.java index 4ad1b8ac48b..65074d1f2ef 100644 --- 
a/pytorch/src/gen/java/org/bytedeco/pytorch/ForwardGrad.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ForwardGrad.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FractionalMaxPool1dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FractionalMaxPool1dOptions.java index c040ba6cdf2..3e20584d2a9 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/FractionalMaxPool1dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FractionalMaxPool1dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional @@ -30,7 +31,7 @@ public class FractionalMaxPool1dOptions extends Pointer { public FractionalMaxPool1dOptions(@ByVal @Cast("torch::ExpandingArray<1>*") LongPointer kernel_size) { super((Pointer)null); allocate(kernel_size); } private native void allocate(@ByVal @Cast("torch::ExpandingArray<1>*") LongPointer kernel_size); public native @Cast("torch::ExpandingArray<1>*") @ByRef @NoException(true) LongPointer kernel_size(); - public native @Cast("c10::optional >*") @ByRef @NoException(true) LongExpandingArrayOptional output_size(); - public native @Cast("c10::optional::ExpandingArrayDouble>*") @ByRef @NoException(true) DoubleExpandingArrayOptional output_ratio(); + public native @Cast("std::optional >*") @ByRef @NoException(true) LongExpandingArrayOptional output_size(); + public native @Cast("std::optional::ExpandingArrayDouble>*") @ByRef @NoException(true) DoubleExpandingArrayOptional output_ratio(); public native @ByRef @NoException(true) Tensor _random_samples(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FractionalMaxPool2dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FractionalMaxPool2dImpl.java index 1ec06789b17..c55dc399d3c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/FractionalMaxPool2dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FractionalMaxPool2dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ 
-22,7 +23,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies fractional maxpool over a 2-D input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.FractionalMaxPool2d to + * See https://pytorch.org/docs/main/nn.html#torch.nn.FractionalMaxPool2d to * learn about the exact behavior of this module. * * See the documentation for {@code torch::nn::FractionalMaxPool2dOptions} class to diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FractionalMaxPool2dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FractionalMaxPool2dImplCloneable.java index 46e7d4e434b..bac58df5b66 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/FractionalMaxPool2dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FractionalMaxPool2dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class FractionalMaxPool2dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FractionalMaxPool2dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FractionalMaxPool2dOptions.java index 3f28dbbc469..d414d5144cd 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/FractionalMaxPool2dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FractionalMaxPool2dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -26,7 +27,7 @@ public class FractionalMaxPool2dOptions extends Pointer { public FractionalMaxPool2dOptions(@ByVal @Cast("torch::ExpandingArray<2>*") LongPointer kernel_size) { super((Pointer)null); allocate(kernel_size); } private native void allocate(@ByVal @Cast("torch::ExpandingArray<2>*") LongPointer kernel_size); public native @Cast("torch::ExpandingArray<2>*") @ByRef @NoException(true) LongPointer kernel_size(); - public native @Cast("c10::optional >*") @ByRef @NoException(true) LongExpandingArrayOptional output_size(); - public native @Cast("c10::optional::ExpandingArrayDouble>*") @ByRef @NoException(true) DoubleExpandingArrayOptional output_ratio(); + public native @Cast("std::optional >*") @ByRef @NoException(true) LongExpandingArrayOptional output_size(); + public native 
@Cast("std::optional::ExpandingArrayDouble>*") @ByRef @NoException(true) DoubleExpandingArrayOptional output_ratio(); public native @ByRef @NoException(true) Tensor _random_samples(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FractionalMaxPool3dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FractionalMaxPool3dImpl.java index aa0e7510d90..3d45b92fd2c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/FractionalMaxPool3dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FractionalMaxPool3dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -22,7 +23,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies fractional maxpool over a 3-D input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.FractionalMaxPool3d to + * See https://pytorch.org/docs/main/nn.html#torch.nn.FractionalMaxPool3d to * learn about the exact behavior of this module. * * See the documentation for {@code torch::nn::FractionalMaxPool3dOptions} class to diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FractionalMaxPool3dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FractionalMaxPool3dImplCloneable.java index 96858a56e24..024f8a373f8 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/FractionalMaxPool3dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FractionalMaxPool3dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class FractionalMaxPool3dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FractionalMaxPool3dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FractionalMaxPool3dOptions.java index 47718860d4a..0b1961a5c51 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/FractionalMaxPool3dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FractionalMaxPool3dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -26,7 +27,7 @@ public class FractionalMaxPool3dOptions extends Pointer { public FractionalMaxPool3dOptions(@ByVal @Cast("torch::ExpandingArray<3>*") LongPointer kernel_size) { super((Pointer)null); allocate(kernel_size); } private native void allocate(@ByVal @Cast("torch::ExpandingArray<3>*") LongPointer kernel_size); public native @Cast("torch::ExpandingArray<3>*") @ByRef @NoException(true) LongPointer kernel_size(); - public native @Cast("c10::optional >*") @ByRef @NoException(true) LongExpandingArrayOptional output_size(); - public native @Cast("c10::optional::ExpandingArrayDouble>*") @ByRef @NoException(true) DoubleExpandingArrayOptional output_ratio(); + public native @Cast("std::optional >*") @ByRef @NoException(true) LongExpandingArrayOptional output_size(); + public native @Cast("std::optional::ExpandingArrayDouble>*") @ByRef @NoException(true) DoubleExpandingArrayOptional output_ratio(); public native @ByRef @NoException(true) Tensor _random_samples(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FullDataLoaderOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FullDataLoaderOptions.java index 3225364ede7..c76972b2bff 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/FullDataLoaderOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FullDataLoaderOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -36,7 +37,7 @@ public class FullDataLoaderOptions extends Pointer { public native @Cast("size_t") long batch_size(); public native FullDataLoaderOptions batch_size(long setter); public native @Cast("size_t") long workers(); public native FullDataLoaderOptions workers(long setter); public native @Cast("size_t") long max_jobs(); public native FullDataLoaderOptions max_jobs(long setter); - public native @ByRef 
@Cast("c10::optional*") Pointer timeout(); public native FullDataLoaderOptions timeout(Pointer setter); + public native @Optional Milliseconds timeout(); public native FullDataLoaderOptions timeout(Milliseconds setter); public native @Cast("bool") boolean enforce_ordering(); public native FullDataLoaderOptions enforce_ordering(boolean setter); public native @Cast("bool") boolean drop_last(); public native FullDataLoaderOptions drop_last(boolean setter); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FuncTorchTLSBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FuncTorchTLSBase.java index 6656758dda0..1bbbe67f371 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/FuncTorchTLSBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FuncTorchTLSBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Function.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Function.java index 734549bde4c..6a8f4c12214 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Function.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Function.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -35,17 +36,14 @@ public class Function extends Pointer { public native void run(@ByRef IValueVector stack); - public native @ByVal FuturePtr runAsync( + public native @IntrusivePtr("c10::ivalue::Future") @Cast({"", "c10::intrusive_ptr&"}) Future runAsync( @ByRef IValueVector arg0, @ByVal(nullValue = "torch::jit::TaskLauncher(at::launch)") @Cast("torch::jit::TaskLauncher*") Pointer taskLauncher); - public native @ByVal FuturePtr runAsync( + public native @IntrusivePtr("c10::ivalue::Future") @Cast({"", "c10::intrusive_ptr&"}) Future runAsync( @ByRef IValueVector arg0); - public native @ByVal @Name("operator ()") IValue apply( - @ByVal IValueVector stack, - @Cast("const torch::jit::Kwargs*") @ByRef(nullValue = "torch::jit::Kwargs()") StringIValueMap kwargs); - public native @ByVal @Name("operator ()") IValue apply( - @ByVal IValueVector stack); + public native @ByVal @Name("operator ()") IValue apply(@ByVal IValueVector stack, @Cast("const torch::jit::Kwargs*") @ByRef(nullValue = "torch::jit::Kwargs()") StringIValueMap kwargs); + public native @ByVal @Name("operator ()") IValue apply(@ByVal IValueVector stack); public native @Const @ByRef QualifiedName qualname(); @@ -71,7 +69,8 @@ public class Function extends Pointer { // If call() returns true, then callback completes successfully, otherwise // call() returns false. 
- // Overload for server interpreter, a bailout size is needed for graph executor. + // Overload for server interpreter, a bailout size is needed for graph + // executor. // Overload for mobile interpreter. diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionCrossMapLRN2d.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionCrossMapLRN2d.java index 60ce6592285..a7d27c6917c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionCrossMapLRN2d.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionCrossMapLRN2d.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,7 +25,7 @@ * {@code forward} can take as many arguments as you want and should return either a * variable list or a Variable. Use of any direct Variable arguments will be * registered in the graph but no vectors/sets or any other data structures - * will be traversed. You can use c10::optional as one of the arguments + * will be traversed. You can use std::optional as one of the arguments * and it will be registered as a variable in the graph if the argument has a * value. It should take a pointer to {@code torch::autograd::AutogradContext} as the * first argument. Variables can be saved in the {@code ctx} using @@ -42,11 +43,16 @@ * {@code ctx->get_saved_variables} (see * {@code torch::autograd::AutogradContext::get_saved_variables}) and other saved * data can be accessed from {@code ctx->saved_data}. + * To enable compiled autograd support (torch.compile for backward) for your + * custom autograd operation, you can set MyFunction::is_traceable + * (see Function::istraceable notes below). * * For example: *
{@code
  *  class MyFunction : public Function {
  *    public:
+ *    static constexpr bool is_traceable = true;
+ * 
  *    static variable_list forward(AutogradContext *ctx, int n, Variable var) {
  *       // Save data for backward in context
  *       ctx->saved_data["n"] = n;
@@ -95,4 +101,14 @@ public class FunctionCrossMapLRN2d extends Pointer {
   // is not declared yet.
   // The enable_if check is to ensure that the user doesn't explicitly provide
   // the parameter X.
+
+  // This flag is for an experimental feature: compiled autograd. Not all
+  // built-in APIs are supported at the moment e.g. mark_dirty and
+  // mark_non_differentiable. Before setting this flag to enable tracing for
+  // your custom function, you need to ensure that the backward function is
+  // traceable i.e. any variables accessed in the backward other than the input
+  // arguments must be handled in a similar manner to built-ins in
+  // CppNode::compiled_args and CppNode::apply_with_saved.
+  @MemberGetter public static native @Cast("const bool") boolean is_traceable();
+  public static final boolean is_traceable = is_traceable();
 }
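
The new `is_traceable` member above is the only compiled-autograd hook this hunk maps to Java. A minimal sketch of reading it through the presets, assuming nothing beyond what the generated class exposes (the static field is initialized from the native getter when the class loads):

import org.bytedeco.pytorch.FunctionCrossMapLRN2d;

public class TraceableCheck {
    public static void main(String[] args) {
        // Touching the class triggers Loader.load(), and the static final field
        // is then filled from the native is_traceable() getter shown in the hunk above.
        System.out.println("CrossMapLRN2d backward traceable: " + FunctionCrossMapLRN2d.is_traceable);
    }
}
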
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionPostHook.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionPostHook.java
index c6810f6921a..839bad06ea7 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionPostHook.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionPostHook.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionPostHookVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionPostHookVector.java
index 1c5e670261e..1d3dc5bc55b 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionPostHookVector.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionPostHookVector.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionPreHook.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionPreHook.java
index ebc7e2bb154..4ca9adc84ed 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionPreHook.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionPreHook.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionPreHookVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionPreHookVector.java
index f815d7233ab..b60902710a2 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionPreHookVector.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionPreHookVector.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionSchema.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionSchema.java
index 372aa10cb6f..cd220694cc9 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionSchema.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionSchema.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
@@ -229,16 +230,16 @@ public FunctionSchema(
   public native @StdString BytePointer formatTypeMismatchMsg(
         @Const @ByRef Argument expected,
         @StdString BytePointer actual_type,
-        @ByVal(nullValue = "c10::optional<size_t>(c10::nullopt)") SizeTOptional _position,
-        @ByVal(nullValue = "c10::optional<std::string>(c10::nullopt)") StringOptional value);
+        @ByVal(nullValue = "std::optional<size_t>(c10::nullopt)") SizeTOptional _position,
+        @ByVal(nullValue = "std::optional<std::string>(c10::nullopt)") StringOptional value);
   public native @StdString BytePointer formatTypeMismatchMsg(
         @Const @ByRef Argument expected,
         @StdString BytePointer actual_type);
   public native @StdString String formatTypeMismatchMsg(
         @Const @ByRef Argument expected,
         @StdString String actual_type,
-        @ByVal(nullValue = "c10::optional<size_t>(c10::nullopt)") SizeTOptional _position,
-        @ByVal(nullValue = "c10::optional<std::string>(c10::nullopt)") StringOptional value);
+        @ByVal(nullValue = "std::optional<size_t>(c10::nullopt)") SizeTOptional _position,
+        @ByVal(nullValue = "std::optional<std::string>(c10::nullopt)") StringOptional value);
   public native @StdString String formatTypeMismatchMsg(
         @Const @ByRef Argument expected,
         @StdString String actual_type);
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionSchemaOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionSchemaOptional.java
index 6f7d31b1d78..a948e616b76 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionSchemaOptional.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionSchemaOptional.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,10 +13,12 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
-@NoOffset @Name("c10::optional<c10::FunctionSchema>") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
+@NoOffset @Name("std::optional<c10::FunctionSchema>") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
 public class FunctionSchemaOptional extends Pointer {
     static { Loader.load(); }
     /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionSchemaVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionSchemaVector.java
index 80f050b3908..e34f290ab39 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionSchemaVector.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionSchemaVector.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionType.java
index be838df1422..e6a7898cd02 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionType.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionType.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionValue.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionValue.java
index 6123302af43..d417db18fba 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionValue.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionValue.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionVector.java
index 8e77dd9b234..41dd056934f 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionVector.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionVector.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionalityOffsetAndMask.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionalityOffsetAndMask.java
index cdee8dad713..8203e09bfd8 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionalityOffsetAndMask.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FunctionalityOffsetAndMask.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FusionStrategy.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FusionStrategy.java
index 9a298133bf7..3eec91aa8e5 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/FusionStrategy.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FusionStrategy.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Future.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Future.java
index 0171c07886d..6ab36a9fcc0 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/Future.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Future.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
  // namespace ivalue
@@ -30,32 +31,6 @@ public class Future extends Pointer {
   
   
 
-  @NoOffset public static class FutureError extends Pointer {
-      static { Loader.load(); }
-      /** Default native constructor. */
-      public FutureError() { super((Pointer)null); allocate(); }
-      /** Native array allocator. Access with {@link Pointer#position(long)}. */
-      public FutureError(long size) { super((Pointer)null); allocateArray(size); }
-      /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */
-      public FutureError(Pointer p) { super(p); }
-      private native void allocate();
-      private native void allocateArray(long size);
-      @Override public FutureError position(long position) {
-          return (FutureError)super.position(position);
-      }
-      @Override public FutureError getPointer(long i) {
-          return new FutureError((Pointer)this).offsetAddress(i);
-      }
-  
-    
-
-    
-
-    public native @NoException(true) @Cast("const char*") BytePointer what();
-
-    public native @StdString BytePointer error_msg(); public native FutureError error_msg(BytePointer setter);
-  }
-
   /**
    * Wait on the future until it completes.
    */
@@ -79,7 +54,7 @@ public class Future extends Pointer {
    */
   public native void markCompleted(
         @ByVal IValue value,
-        @ByVal(nullValue = "c10::optional >(c10::nullopt)") WeakStorageVectorOptional storages);
+        @ByVal(nullValue = "std::optional > >(c10::nullopt)") WeakStorageVectorOptional storages);
   public native void markCompleted(
         @ByVal IValue value);
 
@@ -98,7 +73,7 @@ public native void markCompleted(
 
   // This accessor should only be used if we know that the future is
   // completed() with no error.
-  public native @StdVector WeakStorage storages();
+  public native @Const @ByRef WeakStorageVector storages();
 
   /**
    * Add a callback to the future.
@@ -133,5 +108,5 @@ public native void markCompleted(
 
   // This method should be used when one intends to manually create a child
   // future, for example when implementing a customized version of then().
-  public native @ByVal FuturePtr createInstance(@ByVal Type.TypePtr type);
+  public native @IntrusivePtr("c10::ivalue::Future") @Cast({"", "c10::intrusive_ptr&"}) Future createInstance(@ByVal Type.TypePtr type);
 }
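
For context on the renames that follow: methods that previously trafficked in the `FuturePtr` wrapper now take and return `Future` directly, annotated as a `c10::intrusive_ptr`. A hedged sketch of completing a future from Java, using only `markCompleted(IValue)` as shown in the hunk above and assuming `IValue`'s long-valued constructor; the `Future` instance is taken as a parameter (for example the result of `Function.runAsync`, updated earlier in this patch) rather than constructed here:

import org.bytedeco.pytorch.Future;
import org.bytedeco.pytorch.IValue;

public class FutureSketch {
    // 'pending' would typically come from an async call such as Function.runAsync(stack).
    static void completeWithAnswer(Future pending) {
        // The storages argument now defaults to an empty std::optional, so the
        // single-argument overload is enough to mark the future as completed.
        pending.markCompleted(new IValue(42L));
    }
}
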
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FuturePtrArrayRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FutureArrayRef.java
similarity index 56%
rename from pytorch/src/gen/java/org/bytedeco/pytorch/FuturePtrArrayRef.java
rename to pytorch/src/gen/java/org/bytedeco/pytorch/FutureArrayRef.java
index ad0b4d9a057..d5d501291a1 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/FuturePtrArrayRef.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FutureArrayRef.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,29 +13,31 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
 @Name("c10::ArrayRef >") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
-public class FuturePtrArrayRef extends Pointer {
+public class FutureArrayRef extends Pointer {
     static { Loader.load(); }
     /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */
-    public FuturePtrArrayRef(Pointer p) { super(p); }
+    public FutureArrayRef(Pointer p) { super(p); }
     /** Native array allocator. Access with {@link Pointer#position(long)}. */
-    public FuturePtrArrayRef(long size) { super((Pointer)null); allocateArray(size); }
+    public FutureArrayRef(long size) { super((Pointer)null); allocateArray(size); }
     private native void allocateArray(long size);
-    @Override public FuturePtrArrayRef position(long position) {
-        return (FuturePtrArrayRef)super.position(position);
+    @Override public FutureArrayRef position(long position) {
+        return (FutureArrayRef)super.position(position);
     }
-    @Override public FuturePtrArrayRef getPointer(long i) {
-        return new FuturePtrArrayRef((Pointer)this).offsetAddress(i);
+    @Override public FutureArrayRef getPointer(long i) {
+        return new FutureArrayRef((Pointer)this).offsetAddress(i);
     }
 
   /** \name Constructors
    *  \{
    

* Construct an empty ArrayRef. */ - /* implicit */ public FuturePtrArrayRef() { super((Pointer)null); allocate(); } + /* implicit */ public FutureArrayRef() { super((Pointer)null); allocate(); } private native void allocate(); /** Construct an ArrayRef from a single element. */ @@ -44,12 +45,12 @@ public class FuturePtrArrayRef extends Pointer { /** Construct an ArrayRef from a pointer and length. */ - public FuturePtrArrayRef(@Const FuturePtr data, @Cast("size_t") long length) { super((Pointer)null); allocate(data, length); } - private native void allocate(@Const FuturePtr data, @Cast("size_t") long length); + public FutureArrayRef(@Const @IntrusivePtr("c10::ivalue::Future") Future data, @Cast("size_t") long length) { super((Pointer)null); allocate(data, length); } + private native void allocate(@Const @IntrusivePtr("c10::ivalue::Future") Future data, @Cast("size_t") long length); /** Construct an ArrayRef from a range. */ - public FuturePtrArrayRef(@Const FuturePtr begin, @Const FuturePtr end) { super((Pointer)null); allocate(begin, end); } - private native void allocate(@Const FuturePtr begin, @Const FuturePtr end); + public FutureArrayRef(@Const @IntrusivePtr("c10::ivalue::Future") Future begin, @Const @IntrusivePtr("c10::ivalue::Future") Future end) { super((Pointer)null); allocate(begin, end); } + private native void allocate(@Const @IntrusivePtr("c10::ivalue::Future") Future begin, @Const @IntrusivePtr("c10::ivalue::Future") Future end); /** Construct an ArrayRef from a SmallVector. This is templated in order to * avoid instantiating SmallVectorTemplateCommon whenever we @@ -59,6 +60,8 @@ public class FuturePtrArrayRef extends Pointer { // The enable_if stuff here makes sure that this isn't used for // std::vector, because ArrayRef can't work on a std::vector // bitfield. + public FutureArrayRef(@ByRef FutureVector vec) { super((Pointer)null); allocate(vec); } + private native void allocate(@ByRef FutureVector vec); /** Construct an ArrayRef from a std::array */ @@ -71,46 +74,46 @@ public class FuturePtrArrayRef extends Pointer { * \name Simple Operations * \{ */ - public native @Const @ByPtr FuturePtr begin(); - public native @Const @ByPtr FuturePtr end(); + public native @Const @ByPtr Future begin(); + public native @Const @ByPtr Future end(); // These are actually the same as iterator, since ArrayRef only // gives you const iterators. - public native @Const @ByPtr FuturePtr cbegin(); - public native @Const @ByPtr FuturePtr cend(); + public native @Const @ByPtr Future cbegin(); + public native @Const @ByPtr Future cend(); /** empty - Check if the array is empty. */ public native @Cast("const bool") boolean empty(); - public native @Const FuturePtr data(); + public native @Const @IntrusivePtr("c10::ivalue::Future") Future data(); /** size - Get the array size. */ public native @Cast("const size_t") long size(); /** front - Get the first element. */ - public native @Const @ByRef FuturePtr front(); + public native @IntrusivePtr("c10::ivalue::Future") @Cast({"", "c10::intrusive_ptr&"}) Future front(); /** back - Get the last element. */ - public native @Const @ByRef FuturePtr back(); + public native @IntrusivePtr("c10::ivalue::Future") @Cast({"", "c10::intrusive_ptr&"}) Future back(); /** equals - Check for element-wise equality. 
*/ - public native @Cast("const bool") boolean equals(@ByVal FuturePtrArrayRef RHS); + public native @Cast("const bool") boolean equals(@ByVal FutureArrayRef RHS); /** slice(n, m) - Take M elements of the array starting at element N */ - public native @Const @ByVal FuturePtrArrayRef slice(@Cast("size_t") long N, @Cast("size_t") long M); + public native @Const @ByVal FutureArrayRef slice(@Cast("size_t") long N, @Cast("size_t") long M); /** slice(n) - Chop off the first N elements of the array. */ - public native @Const @ByVal FuturePtrArrayRef slice(@Cast("size_t") long N); + public native @Const @ByVal FutureArrayRef slice(@Cast("size_t") long N); /** \} * \name Operator Overloads * \{ */ - public native @Const @ByRef @Name("operator []") FuturePtr get(@Cast("size_t") long Index); + public native @IntrusivePtr("c10::ivalue::Future") @Name("operator []") @Cast({"", "c10::intrusive_ptr&"}) Future get(@Cast("size_t") long Index); /** Vector compatibility */ /// - public native @Const @ByRef FuturePtr at(@Cast("size_t") long Index); + public native @IntrusivePtr("c10::ivalue::Future") @Cast({"", "c10::intrusive_ptr&"}) Future at(@Cast("size_t") long Index); /** Disallow accidental assignment from a temporary. * @@ -127,7 +130,7 @@ public class FuturePtrArrayRef extends Pointer { /** \} * \name Expensive Operations * \{ */ - public native @StdVector FuturePtr vec(); + public native @ByVal FutureVector vec(); /** \} */ } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FuturePtrElementReference.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FutureElementReference.java similarity index 61% rename from pytorch/src/gen/java/org/bytedeco/pytorch/FuturePtrElementReference.java rename to pytorch/src/gen/java/org/bytedeco/pytorch/FutureElementReference.java index e36923471fc..0a5ce9a85e6 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/FuturePtrElementReference.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FutureElementReference.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,17 +13,19 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @Name("c10::impl::ListElementReference,c10::detail::ListImpl::list_type::iterator>") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class FuturePtrElementReference extends Pointer { +public class FutureElementReference extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ - public FuturePtrElementReference(Pointer p) { super(p); } + public FutureElementReference(Pointer p) { super(p); } - public native @Name("operator std::conditional_t >::type>::value,const c10::intrusive_ptr&,c10::intrusive_ptr >") @ByVal FuturePtr getFuturePtr(); + public native @Name("operator std::conditional_t >::type>,const c10::intrusive_ptr&,c10::intrusive_ptr >") @IntrusivePtr("c10::ivalue::Future") Future getFuture(); @@ -35,8 +36,8 @@ public class FuturePtrElementReference extends Pointer { public native @Const @ByRef IValue get(); - private static native @Namespace void swap(@ByRef(true) FuturePtrElementReference lhs, @ByRef(true) FuturePtrElementReference rhs); - public void swap(FuturePtrElementReference rhs) { swap(this, rhs); } + private static native @Namespace @NoException(true) void swap(@ByRef(true) FutureElementReference lhs, @ByRef(true) FutureElementReference rhs); + public void swap(FutureElementReference rhs) { swap(this, rhs); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FuturePtrList.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FutureList.java similarity index 72% rename from pytorch/src/gen/java/org/bytedeco/pytorch/FuturePtrList.java rename to pytorch/src/gen/java/org/bytedeco/pytorch/FutureList.java index c75b294c5de..8329afaae02 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/FuturePtrList.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FutureList.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,29 +13,31 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@Name("c10::List >") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class FuturePtrList extends Pointer { +@Name("c10::List >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class FutureList extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ - public FuturePtrList(Pointer p) { super(p); } + public FutureList(Pointer p) { super(p); } /** Native array allocator. Access with {@link Pointer#position(long)}. */ - public FuturePtrList(long size) { super((Pointer)null); allocateArray(size); } + public FutureList(long size) { super((Pointer)null); allocateArray(size); } private native void allocateArray(long size); - @Override public FuturePtrList position(long position) { - return (FuturePtrList)super.position(position); + @Override public FutureList position(long position) { + return (FutureList)super.position(position); } - @Override public FuturePtrList getPointer(long i) { - return new FuturePtrList((Pointer)this).offsetAddress(i); + @Override public FutureList getPointer(long i) { + return new FutureList((Pointer)this).offsetAddress(i); } /** * Constructs an empty list. 
*/ - public FuturePtrList() { super((Pointer)null); allocate(); } + public FutureList() { super((Pointer)null); allocate(); } private native void allocate(); /** @@ -44,32 +45,33 @@ public class FuturePtrList extends Pointer { * Example: * List a({2, 3, 4}); */ - public FuturePtrList(@ByVal FuturePtrArrayRef initial_values) { super((Pointer)null); allocate(initial_values); } - private native void allocate(@ByVal FuturePtrArrayRef initial_values); + public FutureList(@ByVal FutureArrayRef initial_values) { super((Pointer)null); allocate(initial_values); } + private native void allocate(@ByVal FutureArrayRef initial_values); /** * Create a generic list with runtime type information. * This only works for c10::impl::GenericList and is not part of the public API * but only supposed to be used internally by PyTorch. */ - + public FutureList(@ByVal Type.TypePtr elementType) { super((Pointer)null); allocate(elementType); } + private native void allocate(@ByVal Type.TypePtr elementType); - public FuturePtrList(@Const @ByRef FuturePtrList arg0) { super((Pointer)null); allocate(arg0); } - private native void allocate(@Const @ByRef FuturePtrList arg0); - public native @ByRef @Name("operator =") FuturePtrList put(@Const @ByRef FuturePtrList arg0); + public FutureList(@Const @ByRef FutureList arg0) { super((Pointer)null); allocate(arg0); } + private native void allocate(@Const @ByRef FutureList arg0); + public native @ByRef @Name("operator =") FutureList put(@Const @ByRef FutureList arg0); /** * Create a new List pointing to a deep copy of the same data. * The List returned is a new list with separate storage. * Changes in it are not reflected in the original list or vice versa. */ - public native @ByVal FuturePtrList copy(); + public native @ByVal FutureList copy(); /** * Returns the element at specified location pos, with bounds checking. * If pos is not within the range of the container, an exception of type std::out_of_range is thrown. */ - public native @ByVal FuturePtr get(long pos); + public native @IntrusivePtr("c10::ivalue::Future") Future get(long pos); /** * Moves out the element at the specified location pos and returns it, with bounds checking. @@ -77,7 +79,7 @@ public class FuturePtrList extends Pointer { * The list contains an invalid element at position pos afterwards. Any operations * on it before re-setting it are invalid. */ - public native @ByVal FuturePtr extract(long pos); + public native @IntrusivePtr("c10::ivalue::Future") Future extract(long pos); /** * Returns a reference to the element at specified location pos, with bounds checking. @@ -96,7 +98,7 @@ public class FuturePtrList extends Pointer { /** * Assigns a new value to the element at location pos. */ - public native void set(long pos, @ByVal FuturePtr value); + public native void set(long pos, @IntrusivePtr("c10::ivalue::Future") Future value); /** * Assigns a new value to the element at location pos. @@ -106,13 +108,13 @@ public class FuturePtrList extends Pointer { * Returns an iterator to the first element of the container. * If the container is empty, the returned iterator will be equal to end(). */ - public native @ByVal @Cast("c10::List >::iterator*") FuturePtrListIterator begin(); + public native @ByVal @Cast("c10::List >::iterator*") FutureListIterator begin(); /** * Returns an iterator to the element following the last element of the container. * This element acts as a placeholder; attempting to access it results in undefined behavior. 
*/ - public native @ByVal @Cast("c10::List >::iterator*") FuturePtrListIterator end(); + public native @ByVal @Cast("c10::List >::iterator*") FutureListIterator end(); /** * Checks if the container has no elements. @@ -139,7 +141,7 @@ public class FuturePtrList extends Pointer { * Inserts value before pos. * May invalidate any references, pointers, or iterators referring to contained elements. Any past-the-end iterators may also be invalidated. */ - public native @ByVal @Cast("c10::List >::iterator*") FuturePtrListIterator insert(@ByVal @Cast("c10::List >::iterator*") FuturePtrListIterator pos, @Const @ByRef FuturePtr value); + public native @ByVal @Cast("c10::List >::iterator*") FutureListIterator insert(@ByVal @Cast("c10::List >::iterator*") FutureListIterator pos, @IntrusivePtr("c10::ivalue::Future") @Cast({"", "c10::intrusive_ptr&"}) Future value); /** * Inserts value before pos. @@ -156,7 +158,7 @@ public class FuturePtrList extends Pointer { * Appends the given element value to the end of the container. * May invalidate any references, pointers, or iterators referring to contained elements. Any past-the-end iterators may also be invalidated. */ - public native void push_back(@Const @ByRef FuturePtr value); + public native void push_back(@IntrusivePtr("c10::ivalue::Future") @Cast({"", "c10::intrusive_ptr&"}) Future value); /** * Appends the given element value to the end of the container. @@ -167,7 +169,7 @@ public class FuturePtrList extends Pointer { * Appends the given list to the end of the container. Uses at most one memory allocation. * May invalidate any references, pointers, or iterators referring to contained elements. Any past-the-end iterators may also be invalidated. */ - public native void append(@ByVal FuturePtrList lst); + public native void append(@ByVal FutureList lst); /** * Appends the given element value to the end of the container. @@ -179,13 +181,13 @@ public class FuturePtrList extends Pointer { * Removes the element at pos. * May invalidate any references, pointers, or iterators referring to contained elements. Any past-the-end iterators may also be invalidated. */ - public native @ByVal @Cast("c10::List >::iterator*") FuturePtrListIterator erase(@ByVal @Cast("c10::List >::iterator*") FuturePtrListIterator pos); + public native @ByVal @Cast("c10::List >::iterator*") FutureListIterator erase(@ByVal @Cast("c10::List >::iterator*") FutureListIterator pos); /** * Removes the elements in the range [first, last). * May invalidate any references, pointers, or iterators referring to contained elements. Any past-the-end iterators may also be invalidated. */ - public native @ByVal @Cast("c10::List >::iterator*") FuturePtrListIterator erase(@ByVal @Cast("c10::List >::iterator*") FuturePtrListIterator first, @ByVal @Cast("c10::List >::iterator*") FuturePtrListIterator last); + public native @ByVal @Cast("c10::List >::iterator*") FutureListIterator erase(@ByVal @Cast("c10::List >::iterator*") FutureListIterator first, @ByVal @Cast("c10::List >::iterator*") FutureListIterator last); /** * Removes the last element of the container. @@ -206,7 +208,7 @@ public class FuturePtrList extends Pointer { * If the current size is less than count, additional copies of value are appended. * May invalidate any references, pointers, or iterators referring to contained elements. Any past-the-end iterators may also be invalidated. 
*/ - public native void resize(long count, @Const @ByRef FuturePtr value); + public native void resize(long count, @IntrusivePtr("c10::ivalue::Future") @Cast({"", "c10::intrusive_ptr&"}) Future value); /** * Value equality comparison. This function implements Python-like semantics for @@ -221,9 +223,9 @@ public class FuturePtrList extends Pointer { * Identity comparison. Returns true if and only if {@code rhs} represents the same * List object as {@code this}. */ - public native @Cast("bool") boolean is(@Const @ByRef FuturePtrList rhs); + public native @Cast("bool") boolean is(@Const @ByRef FutureList rhs); - public native @StdVector FuturePtr vec(); + public native @ByVal FutureVector vec(); /** * Returns the number of Lists currently pointing to this same list. diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FutureListIterator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FutureListIterator.java new file mode 100644 index 00000000000..b250065c5fb --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FutureListIterator.java @@ -0,0 +1,85 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + +@Name("c10::impl::ListIterator,c10::detail::ListImpl::list_type::iterator>") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class FutureListIterator extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public FutureListIterator(Pointer p) { super(p); } + /** Native array allocator. Access with {@link Pointer#position(long)}. 
*/ + public FutureListIterator(long size) { super((Pointer)null); allocateArray(size); } + private native void allocateArray(long size); + @Override public FutureListIterator position(long position) { + return (FutureListIterator)super.position(position); + } + @Override public FutureListIterator getPointer(long i) { + return new FutureListIterator((Pointer)this).offsetAddress(i); + } + + // C++17 friendly std::iterator implementation + + public FutureListIterator() { super((Pointer)null); allocate(); } + private native void allocate(); + + public FutureListIterator(@Const @ByRef FutureListIterator arg0) { super((Pointer)null); allocate(arg0); } + private native void allocate(@Const @ByRef FutureListIterator arg0); + public native @ByRef @Name("operator =") FutureListIterator put(@Const @ByRef FutureListIterator arg0); + + public native @ByRef @Name("operator ++") FutureListIterator increment(); + + public native @ByVal @Name("operator ++") FutureListIterator increment(int arg0); + + public native @ByRef @Name("operator --") FutureListIterator decrement(); + + public native @ByVal @Name("operator --") FutureListIterator decrement(int arg0); + + public native @ByRef @Name("operator +=") FutureListIterator addPut(long offset); + + public native @ByRef @Name("operator -=") FutureListIterator subtractPut(long offset); + + public native @ByVal @Name("operator +") FutureListIterator add(long offset); + + public native @ByVal @Name("operator -") FutureListIterator subtract(long offset); + + private static native @Namespace @Cast("c10::impl::ListIterator,c10::detail::ListImpl::list_type::iterator>::difference_type") @Name("operator -") long subtract(@Const @ByRef FutureListIterator lhs, @Const @ByRef FutureListIterator rhs); + public long subtract(FutureListIterator rhs) { return subtract(this, rhs); } + + + + + + private static native @Namespace @Cast("bool") @Name("operator ==") boolean equals(@Const @ByRef FutureListIterator lhs, @Const @ByRef FutureListIterator rhs); + public boolean equals(FutureListIterator rhs) { return equals(this, rhs); } + + private static native @Namespace @Cast("bool") @Name("operator !=") boolean notEquals(@Const @ByRef FutureListIterator lhs, @Const @ByRef FutureListIterator rhs); + public boolean notEquals(FutureListIterator rhs) { return notEquals(this, rhs); } + + private static native @Namespace @Cast("bool") @Name("operator <") boolean lessThan(@Const @ByRef FutureListIterator lhs, @Const @ByRef FutureListIterator rhs); + public boolean lessThan(FutureListIterator rhs) { return lessThan(this, rhs); } + + private static native @Namespace @Cast("bool") @Name("operator <=") boolean lessThanEquals(@Const @ByRef FutureListIterator lhs, @Const @ByRef FutureListIterator rhs); + public boolean lessThanEquals(FutureListIterator rhs) { return lessThanEquals(this, rhs); } + + private static native @Namespace @Cast("bool") @Name("operator >") boolean greaterThan(@Const @ByRef FutureListIterator lhs, @Const @ByRef FutureListIterator rhs); + public boolean greaterThan(FutureListIterator rhs) { return greaterThan(this, rhs); } + + private static native @Namespace @Cast("bool") @Name("operator >=") boolean greaterThanEquals(@Const @ByRef FutureListIterator lhs, @Const @ByRef FutureListIterator rhs); + public boolean greaterThanEquals(FutureListIterator rhs) { return greaterThanEquals(this, rhs); } +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FuturePtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FuturePtr.java deleted file mode 100644 index 
e1e3298ceb1..00000000000 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/FuturePtr.java +++ /dev/null @@ -1,154 +0,0 @@ -// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE - -package org.bytedeco.pytorch; - -import org.bytedeco.pytorch.Allocator; -import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; -import org.bytedeco.pytorch.Module; -import org.bytedeco.javacpp.annotation.Cast; -import java.nio.*; -import org.bytedeco.javacpp.*; -import org.bytedeco.javacpp.annotation.*; - -import static org.bytedeco.javacpp.presets.javacpp.*; -import static org.bytedeco.openblas.global.openblas_nolapack.*; -import static org.bytedeco.openblas.global.openblas.*; - -import static org.bytedeco.pytorch.global.torch.*; - - -@Name("c10::intrusive_ptr") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class FuturePtr extends Pointer { - static { Loader.load(); } - /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ - public FuturePtr(Pointer p) { super(p); } - /** Native array allocator. Access with {@link Pointer#position(long)}. */ - public FuturePtr(long size) { super((Pointer)null); allocateArray(size); } - private native void allocateArray(long size); - @Override public FuturePtr position(long position) { - return (FuturePtr)super.position(position); - } - @Override public FuturePtr getPointer(long i) { - return new FuturePtr((Pointer)this).offsetAddress(i); - } - - - public FuturePtr() { super((Pointer)null); allocate(); } - @NoException(true) private native void allocate(); - - public FuturePtr(@ByVal @Cast("std::nullptr_t*") PointerPointer arg0) { super((Pointer)null); allocate(arg0); } - @NoException(true) private native void allocate(@ByVal @Cast("std::nullptr_t*") PointerPointer arg0); - - // This constructor will not increase the ref counter for you. - // We use the tagged dispatch mechanism to explicitly mark this constructor - // to not increase the refcount - public FuturePtr(Future target, @ByVal DontIncreaseRefcount arg1) { super((Pointer)null); allocate(target, arg1); } - @NoException(true) private native void allocate(Future target, @ByVal DontIncreaseRefcount arg1); - - - - public FuturePtr(@ByRef(true) FuturePtr rhs) { super((Pointer)null); allocate(rhs); } - @NoException(true) private native void allocate(@ByRef(true) FuturePtr rhs); - - public native @ByRef @Name("operator =") @NoException(true) FuturePtr put(@ByRef(true) FuturePtr rhs); - - // Assignment is implemented using copy and swap. That's safe for self - // assignment. - // NOLINTNEXTLINE(bugprone-unhandled-self-assignment) - - public native @NoException(true) Future get(); - - public native @ByRef @Name("operator *") @NoException(true) Future multiply(); - - public native @Name("operator ->") @NoException(true) Future access(); - - public native @Cast("bool") @Name("operator bool") @NoException(true) boolean asBoolean(); - - public native @NoException(true) void reset(); - - public native @NoException(true) void swap(@ByRef FuturePtr rhs); - - // We do a lot of null-pointer checks in our code, good to have this be cheap. - public native @Cast("bool") @NoException(true) boolean defined(); - - public native @Cast("uint32_t") @NoException(true) int use_count(); - - public native @Cast("uint32_t") @NoException(true) int weak_use_count(); - - public native @Cast("bool") @NoException(true) boolean unique(); - - /** - * Returns an owning (!) pointer to the underlying object and makes the - * intrusive_ptr instance invalid. 
That means the refcount is not decreased. - * You *must* put the returned pointer back into a intrusive_ptr using - * intrusive_ptr::reclaim(ptr) to properly destruct it. - * This is helpful for C APIs. - */ - public native @NoException(true) Future release(); - - /** - * Takes an owning pointer to TTarget* and creates an intrusive_ptr that takes - * over ownership. That means the refcount is not increased. - * This is the counter-part to intrusive_ptr::release() and the pointer - * passed in *must* have been created using intrusive_ptr::release(). - */ - public static native @ByVal FuturePtr reclaim(Future owning_ptr); - - /** - * Takes an owning pointer to TTarget* and creates an intrusive_ptr - * representing a new reference, i.e. the raw pointer retains - * ownership. - */ - public static native @ByVal FuturePtr reclaim_copy(Future owning_ptr); - - /** - * Allocate a heap object with args and wrap it inside a intrusive_ptr and - * incref. This is a helper function to let make_intrusive() access private - * intrusive_ptr constructors. - */ - - /** - * Turn a new instance of TTarget (e.g., literally allocated - * using new TTarget(...) into an intrusive_ptr. If possible, - * use intrusive_ptr::make instead which statically guarantees - * that the allocation was done properly. - * - * At the moment, the only reason this method exists is because - * pybind11 holder types expect to be able to allocate in - * this way (because pybind11 handles the new allocation itself). - */ - public static native @ByVal FuturePtr unsafe_steal_from_new(Future raw_ptr); - - /** - * Turn an instance of TTarget that should not be reference counted - * (e.g., allocated into an arena with placement new) into an - * intrusive_ptr. This is gratuitously unsafe and should only be - * used if you can guarantee that the pointer will not escape and be - * refcounted as normal. - * - * {@code expected_decrefs} is a debugging parameter: it indicates the - * number of strong owners the intrusive_ptr_target in question is - * expected to get. In most use cases, this will likely be 1. - * - * The reason this method exists is for manually sharing - * StorageImpls across Tensors in the static runtime. It needs - * access to private intrusive_ptr members so that the refcounts can - * be initialized to custom values. - */ - public static native @ByVal FuturePtr unsafe_adapt_non_heap_allocated( - Future raw_ptr, - @Cast("uint32_t") int expected_decrefs); - - /** - * Turn a **non-owning raw pointer** to an intrusive_ptr. It is - * the moral equivalent of enable_shared_from_this on a shared pointer. - * - * This method is only valid for objects that are already live. If - * you are looking for the moral equivalent of unique_ptr(T*) - * constructor, see steal_from_new. 
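[Note: illustrative migration sketch, not part of the generated diff. Because the FuturePtr holder deleted here previously mediated access to c10::ivalue::Future, call sites generally migrate by holding the Future directly. The completed()/wait()/value() accessors are assumptions mirroring the c10::ivalue::Future API, not something introduced by this change.]

import org.bytedeco.pytorch.Future;
import org.bytedeco.pytorch.FutureList;
import org.bytedeco.pytorch.IValue;

public class FutureMigrationSketch {
    static IValue firstResult(FutureList list) {
        // 2.3.x (removed): FuturePtr ptr = list.get(0); Future f = ptr.get();
        Future f = list.get(0);   // 2.4.x: the element comes back as Future directly
        if (!f.completed()) {     // assumed accessor mirroring c10::ivalue::Future::completed()
            f.wait();             // assumed accessor mirroring c10::ivalue::Future::wait()
        }
        return f.value();         // assumed accessor returning the resolved IValue
    }
}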
- * - * TODO: https://github.com/pytorch/pytorch/issues/56482 - */ - public static native @ByVal FuturePtr unsafe_reclaim_from_nonowning(Future raw_ptr); -} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FuturePtrListIterator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FuturePtrListIterator.java deleted file mode 100644 index efebd1b6261..00000000000 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/FuturePtrListIterator.java +++ /dev/null @@ -1,84 +0,0 @@ -// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE - -package org.bytedeco.pytorch; - -import org.bytedeco.pytorch.Allocator; -import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; -import org.bytedeco.pytorch.Module; -import org.bytedeco.javacpp.annotation.Cast; -import java.nio.*; -import org.bytedeco.javacpp.*; -import org.bytedeco.javacpp.annotation.*; - -import static org.bytedeco.javacpp.presets.javacpp.*; -import static org.bytedeco.openblas.global.openblas_nolapack.*; -import static org.bytedeco.openblas.global.openblas.*; - -import static org.bytedeco.pytorch.global.torch.*; - -@Name("c10::impl::ListIterator,c10::detail::ListImpl::list_type::iterator>") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class FuturePtrListIterator extends Pointer { - static { Loader.load(); } - /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ - public FuturePtrListIterator(Pointer p) { super(p); } - /** Native array allocator. Access with {@link Pointer#position(long)}. */ - public FuturePtrListIterator(long size) { super((Pointer)null); allocateArray(size); } - private native void allocateArray(long size); - @Override public FuturePtrListIterator position(long position) { - return (FuturePtrListIterator)super.position(position); - } - @Override public FuturePtrListIterator getPointer(long i) { - return new FuturePtrListIterator((Pointer)this).offsetAddress(i); - } - - // C++17 friendly std::iterator implementation - - public FuturePtrListIterator() { super((Pointer)null); allocate(); } - private native void allocate(); - - public FuturePtrListIterator(@Const @ByRef FuturePtrListIterator arg0) { super((Pointer)null); allocate(arg0); } - private native void allocate(@Const @ByRef FuturePtrListIterator arg0); - public native @ByRef @Name("operator =") FuturePtrListIterator put(@Const @ByRef FuturePtrListIterator arg0); - - public native @ByRef @Name("operator ++") FuturePtrListIterator increment(); - - public native @ByVal @Name("operator ++") FuturePtrListIterator increment(int arg0); - - public native @ByRef @Name("operator --") FuturePtrListIterator decrement(); - - public native @ByVal @Name("operator --") FuturePtrListIterator decrement(int arg0); - - public native @ByRef @Name("operator +=") FuturePtrListIterator addPut(long offset); - - public native @ByRef @Name("operator -=") FuturePtrListIterator subtractPut(long offset); - - public native @ByVal @Name("operator +") FuturePtrListIterator add(long offset); - - public native @ByVal @Name("operator -") FuturePtrListIterator subtract(long offset); - - private static native @Namespace @Cast("c10::impl::ListIterator,c10::detail::ListImpl::list_type::iterator>::difference_type") @Name("operator -") long subtract(@Const @ByRef FuturePtrListIterator lhs, @Const @ByRef FuturePtrListIterator rhs); - public long subtract(FuturePtrListIterator rhs) { return subtract(this, rhs); } - - - - - - private static native @Namespace @Cast("bool") @Name("operator ==") boolean equals(@Const @ByRef 
FuturePtrListIterator lhs, @Const @ByRef FuturePtrListIterator rhs); - public boolean equals(FuturePtrListIterator rhs) { return equals(this, rhs); } - - private static native @Namespace @Cast("bool") @Name("operator !=") boolean notEquals(@Const @ByRef FuturePtrListIterator lhs, @Const @ByRef FuturePtrListIterator rhs); - public boolean notEquals(FuturePtrListIterator rhs) { return notEquals(this, rhs); } - - private static native @Namespace @Cast("bool") @Name("operator <") boolean lessThan(@Const @ByRef FuturePtrListIterator lhs, @Const @ByRef FuturePtrListIterator rhs); - public boolean lessThan(FuturePtrListIterator rhs) { return lessThan(this, rhs); } - - private static native @Namespace @Cast("bool") @Name("operator <=") boolean lessThanEquals(@Const @ByRef FuturePtrListIterator lhs, @Const @ByRef FuturePtrListIterator rhs); - public boolean lessThanEquals(FuturePtrListIterator rhs) { return lessThanEquals(this, rhs); } - - private static native @Namespace @Cast("bool") @Name("operator >") boolean greaterThan(@Const @ByRef FuturePtrListIterator lhs, @Const @ByRef FuturePtrListIterator rhs); - public boolean greaterThan(FuturePtrListIterator rhs) { return greaterThan(this, rhs); } - - private static native @Namespace @Cast("bool") @Name("operator >=") boolean greaterThanEquals(@Const @ByRef FuturePtrListIterator lhs, @Const @ByRef FuturePtrListIterator rhs); - public boolean greaterThanEquals(FuturePtrListIterator rhs) { return greaterThanEquals(this, rhs); } -} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FutureSingleElementType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FutureSingleElementType.java index 9c7ccd12a58..74afa34c85c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/FutureSingleElementType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FutureSingleElementType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FutureType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FutureType.java index 413d60c4e02..ee8f60e3c4a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/FutureType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FutureType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/FutureVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/FutureVector.java new file mode 100644 index 00000000000..63490e5d21b --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/FutureVector.java @@ -0,0 
+1,91 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + +@Name("std::vector >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class FutureVector extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public FutureVector(Pointer p) { super(p); } + public FutureVector(@Cast({"", "c10::intrusive_ptr&"}) Future value) { this(1); put(0, value); } + public FutureVector(@Cast({"", "c10::intrusive_ptr&"}) Future ... array) { this(array.length); put(array); } + public FutureVector() { allocate(); } + public FutureVector(long n) { allocate(n); } + private native void allocate(); + private native void allocate(@Cast("size_t") long n); + public native @Name("operator =") @ByRef FutureVector put(@ByRef FutureVector x); + + public boolean empty() { return size() == 0; } + public native long size(); + public void clear() { resize(0); } + public native void resize(@Cast("size_t") long n); + + public Future front() { return get(0); } + public Future back() { return get(size() - 1); } + @Index(function = "at") public native @IntrusivePtr("c10::ivalue::Future") @Cast({"", "c10::intrusive_ptr&"}) Future get(@Cast("size_t") long i); + public native FutureVector put(@Cast("size_t") long i, Future value); + + public native @ByVal Iterator insert(@ByVal Iterator pos, @IntrusivePtr("c10::ivalue::Future") @Cast({"", "c10::intrusive_ptr&"}) Future value); + public native @ByVal Iterator erase(@ByVal Iterator pos); + public native @ByVal Iterator begin(); + public native @ByVal Iterator end(); + @NoOffset @Name("iterator") public static class Iterator extends Pointer { + public Iterator(Pointer p) { super(p); } + public Iterator() { } + + public native @Name("operator ++") @ByRef Iterator increment(); + public native @Name("operator ==") boolean equals(@ByRef Iterator it); + public native @Name("operator *") @IntrusivePtr("c10::ivalue::Future") @Cast({"", "c10::intrusive_ptr&"}) Future get(); + } + + public Future[] get() { + Future[] array = new Future[size() < Integer.MAX_VALUE ? (int)size() : Integer.MAX_VALUE]; + for (int i = 0; i < array.length; i++) { + array[i] = get(i); + } + return array; + } + @Override public String toString() { + return java.util.Arrays.toString(get()); + } + + public Future pop_back() { + long size = size(); + Future value = get(size - 1); + resize(size - 1); + return value; + } + public FutureVector push_back(Future value) { + long size = size(); + resize(size + 1); + return put(size, value); + } + public FutureVector put(Future value) { + if (size() != 1) { resize(1); } + return put(0, value); + } + public FutureVector put(Future ... 
array) { + if (size() != array.length) { resize(array.length); } + for (int i = 0; i < array.length; i++) { + put(i, array[i]); + } + return this; + } +} + diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GELUImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GELUImpl.java index 6ff05b5f0fa..ff228caa29a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GELUImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GELUImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ GELU ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies gelu over a given input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.GELU to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.GELU to learn * about the exact behavior of this module. */ @Namespace("torch::nn") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class GELUImpl extends GELUImplCloneable { diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GELUImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GELUImplCloneable.java index d3a80abcaa3..205c9c28a55 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GELUImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GELUImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class GELUImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
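[Note: illustrative sketch, not part of the generated diff. The new FutureVector class above, which FutureList.vec() now returns in place of a FuturePtr array, behaves like the other std::vector wrappers in the presets; only methods declared in the new file are used, and the futures are assumed to come from elsewhere.]

import org.bytedeco.pytorch.Future;
import org.bytedeco.pytorch.FutureVector;

public class FutureVectorSketch {
    static void demo(Future f1, Future f2) {
        FutureVector vec = new FutureVector(f1, f2);  // varargs constructor fills two slots
        vec.push_back(f1);                            // helper methods mirror std::vector
        Future last = vec.back();                     // last element
        Future[] all = vec.get();                     // copies the elements into a Java array
        System.out.println(vec);                      // toString() prints the array form
    }
}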
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GELUOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GELUOptions.java index 67042257a77..d9b649e340b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GELUOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GELUOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GLUImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GLUImpl.java index c5376d0194b..3d1e0c316f0 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GLUImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GLUImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ GLU ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies glu over a given input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.GLU to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.GLU to learn * about the exact behavior of this module. * * See the documentation for {@code torch::nn::GLUOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GLUImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GLUImplCloneable.java index 6da711b14c2..d15a9829227 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GLUImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GLUImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class GLUImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
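[Note: illustrative sketch, not part of the generated diff. As orientation for the torch::nn module bindings touched here, a rough way to drive GELUImpl from Java; the tensor shape is arbitrary, and the randn/forward calls are assumed to follow the usual preset API rather than anything introduced by this change.]

import org.bytedeco.pytorch.GELUImpl;
import org.bytedeco.pytorch.Module;
import org.bytedeco.pytorch.Tensor;
import static org.bytedeco.pytorch.global.torch.*;

public class GeluSketch {
    public static void main(String[] args) {
        GELUImpl gelu = new GELUImpl();       // torch::nn::GELU with default options
        Tensor input = randn(2, 3);           // assumed long... overload of torch::randn
        Tensor output = gelu.forward(input);  // element-wise GELU
        Module copy = gelu.clone();           // deep copy; clone(DeviceOptional) also exists per this diff
        System.out.println("output dims: " + output.dim());
    }
}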
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GLUOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GLUOptions.java index be435e2b043..968c82abfb3 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GLUOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GLUOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GRUCellImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GRUCellImpl.java index 4aa5b267624..692eb6db2cd 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GRUCellImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GRUCellImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -22,7 +23,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** A gated recurrent unit (GRU) cell. - * See https://pytorch.org/docs/master/nn.html#torch.nn.GRUCell to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.GRUCell to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::GRUCellOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GRUCellImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GRUCellImplBase.java index c8cec2ad177..09ab1e387e1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GRUCellImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GRUCellImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GRUCellImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GRUCellImplCloneable.java index f2254938541..1900809c054 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GRUCellImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GRUCellImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class GRUCellImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
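[Note: illustrative sketch, not part of the generated diff. For the recurrent modules whose documentation links are updated in this stretch, a rough usage sketch of GRUCellImpl; the sizes (input 10, hidden 20, batch 3) are arbitrary, and the constructor/forward signatures are assumed to follow the usual torch::nn mapping.]

import org.bytedeco.pytorch.GRUCellImpl;
import org.bytedeco.pytorch.GRUCellOptions;
import org.bytedeco.pytorch.Tensor;
import static org.bytedeco.pytorch.global.torch.*;

public class GruCellSketch {
    public static void main(String[] args) {
        GRUCellOptions options = new GRUCellOptions(10, 20);  // input_size, hidden_size
        GRUCellImpl cell = new GRUCellImpl(options);
        Tensor input = randn(3, 10);                          // batch of 3 input vectors
        Tensor hidden = randn(3, 20);                         // previous hidden state
        Tensor next = cell.forward(input, hidden);            // one recurrent step
        System.out.println("next hidden dims: " + next.dim());
    }
}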
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GRUCellOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GRUCellOptions.java index 4970a9ffe1d..82cd014db79 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GRUCellOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GRUCellOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GRUImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GRUImpl.java index e22f2a1275b..1835d068b1f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GRUImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GRUImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ GRU ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** A multi-layer gated recurrent unit (GRU) module. - * See https://pytorch.org/docs/master/generated/torch.nn.GRU.html to learn + * See https://pytorch.org/docs/main/generated/torch.nn.GRU.html to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::GRUOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GRUImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GRUImplBase.java index fb09dff96af..90a66c10891 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GRUImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GRUImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GRUImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GRUImplCloneable.java index 38524131315..0961973ff5d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GRUImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GRUImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class GRUImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GRUOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GRUOptions.java index c7975f7cce7..aa22e03e477 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GRUOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GRUOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GatherOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GatherOptions.java new file mode 100644 index 00000000000..c47b821b8d3 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GatherOptions.java @@ -0,0 +1,42 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +@Namespace("c10d") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class GatherOptions extends Pointer { + static { Loader.load(); } + /** Default native constructor. */ + public GatherOptions() { super((Pointer)null); allocate(); } + /** Native array allocator. Access with {@link Pointer#position(long)}. */ + public GatherOptions(long size) { super((Pointer)null); allocateArray(size); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ + public GatherOptions(Pointer p) { super(p); } + private native void allocate(); + private native void allocateArray(long size); + @Override public GatherOptions position(long position) { + return (GatherOptions)super.position(position); + } + @Override public GatherOptions getPointer(long i) { + return new GatherOptions((Pointer)this).offsetAddress(i); + } + + public native @Cast("int64_t") long rootRank(); public native GatherOptions rootRank(long setter); + public native @ByRef Milliseconds timeout(); public native GatherOptions timeout(Milliseconds setter); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GatheredContext.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GatheredContext.java index 9c7ac7de8f2..e6019bff8e3 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GatheredContext.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GatheredContext.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Generator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Generator.java index b6deac700ad..316e7088fae 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Generator.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Generator.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -69,8 +70,8 @@ public class Generator extends Pointer { public Generator() { super((Pointer)null); allocate(); } private native void allocate(); - public Generator(@ByVal GeneratorImplPtr gen_impl) { super((Pointer)null); allocate(gen_impl); } - private native void allocate(@ByVal GeneratorImplPtr gen_impl); + public Generator(@IntrusivePtr("c10::GeneratorImpl") @Cast({"", "c10::intrusive_ptr&"}) GeneratorImpl gen_impl) { super((Pointer)null); allocate(gen_impl); } + private native void allocate(@IntrusivePtr("c10::GeneratorImpl") @Cast({"", "c10::intrusive_ptr&"}) GeneratorImpl gen_impl); public native @Cast("bool") @Name("operator ==") boolean equals(@Const @ByRef Generator rhs); @@ -82,7 +83,7 @@ public class Generator extends Pointer { public native GeneratorImpl unsafeReleaseGeneratorImpl(); - public native @Const @ByRef GeneratorImplPtr getIntrusivePtr(); + public native @IntrusivePtr("c10::GeneratorImpl") @Cast({"", "c10::intrusive_ptr&"}) GeneratorImpl getIntrusivePtr(); public native void set_current_seed(@Cast("uint64_t") long seed); // Sets the offset of Generator state to the desired offset. 
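[Note: illustrative sketch, not part of the generated diff. The new GatherOptions binding above exposes the two fields generated from c10d::GatherOptions. The Milliseconds(long) constructor from the JavaCPP chrono module is an assumption about that helper's API, and the 60-second value is arbitrary.]

import org.bytedeco.pytorch.GatherOptions;
import org.bytedeco.javacpp.chrono.Milliseconds;

public class GatherOptionsSketch {
    static GatherOptions makeOptions() {
        GatherOptions opts = new GatherOptions();
        opts.rootRank(0);                        // rank that collects the gathered tensors
        opts.timeout(new Milliseconds(60_000));  // collective timeout of 60 s (assumed ctor)
        return opts;
    }
}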
This is currently @@ -103,6 +104,10 @@ public class Generator extends Pointer { public native @ByVal Tensor get_state(); + public native void graphsafe_set_state(@Const @ByRef Generator new_state); + + public native @ByVal Generator graphsafe_get_state(); + public native @ByVal DispatchKeySet key_set(); public native @ByVal Device device(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GeneratorImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GeneratorImpl.java index d87ae60d2ef..007e0d42145 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GeneratorImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GeneratorImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -31,7 +32,7 @@ public class GeneratorImpl extends Pointer { - public native @ByVal @Name("clone") GeneratorImplPtr clonePtr(); + public native @IntrusivePtr("c10::GeneratorImpl") @Name("clone") @Cast({"", "c10::intrusive_ptr&"}) GeneratorImpl clonePtr(); // Common methods for all generators public native void set_current_seed(@Cast("uint64_t") long seed); @@ -40,7 +41,10 @@ public class GeneratorImpl extends Pointer { public native @Cast("uint64_t") long current_seed(); public native @Cast("uint64_t") long seed(); public native void set_state(@Const @ByRef TensorImpl new_state); - public native @ByVal TensorImplPtr get_state(); + public native @IntrusivePtr("c10::TensorImpl") @Cast({"", "c10::intrusive_ptr&"}) TensorImpl get_state(); + public native void graphsafe_set_state( + @IntrusivePtr("c10::GeneratorImpl") @Cast({"", "c10::intrusive_ptr&"}) GeneratorImpl new_state); + public native @IntrusivePtr("c10::GeneratorImpl") @Cast({"", "c10::intrusive_ptr&"}) GeneratorImpl graphsafe_get_state(); public native @ByVal Device device(); // See Note [Acquire lock when using random generators] diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GeneratorImplPtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GeneratorImplPtr.java deleted file mode 100644 index 72996e68657..00000000000 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GeneratorImplPtr.java +++ /dev/null @@ -1,154 +0,0 @@ -// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE - -package org.bytedeco.pytorch; - -import org.bytedeco.pytorch.Allocator; -import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; -import org.bytedeco.pytorch.Module; -import org.bytedeco.javacpp.annotation.Cast; -import java.nio.*; -import org.bytedeco.javacpp.*; -import org.bytedeco.javacpp.annotation.*; - -import static org.bytedeco.javacpp.presets.javacpp.*; -import static org.bytedeco.openblas.global.openblas_nolapack.*; -import static org.bytedeco.openblas.global.openblas.*; - -import static org.bytedeco.pytorch.global.torch.*; - - -@Name("c10::intrusive_ptr") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class GeneratorImplPtr extends Pointer { - static { Loader.load(); } - /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
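[Note: illustrative sketch, not part of the generated diff. The graphsafe_get_state/graphsafe_set_state pair added to Generator and GeneratorImpl above allows RNG state to be checkpointed in a CUDA-graph-safe way. A minimal round-trip, assuming a valid Generator is supplied by the caller; this diff does not show how one is constructed.]

import org.bytedeco.pytorch.Generator;

public class GeneratorStateSketch {
    static void saveAndRestore(Generator gen) {
        Generator snapshot = gen.graphsafe_get_state();  // capture the state in graph-safe form
        gen.set_current_seed(12345L);                    // mutate the generator
        gen.graphsafe_set_state(snapshot);               // restore the captured state
    }
}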
*/ - public GeneratorImplPtr(Pointer p) { super(p); } - /** Native array allocator. Access with {@link Pointer#position(long)}. */ - public GeneratorImplPtr(long size) { super((Pointer)null); allocateArray(size); } - private native void allocateArray(long size); - @Override public GeneratorImplPtr position(long position) { - return (GeneratorImplPtr)super.position(position); - } - @Override public GeneratorImplPtr getPointer(long i) { - return new GeneratorImplPtr((Pointer)this).offsetAddress(i); - } - - - public GeneratorImplPtr() { super((Pointer)null); allocate(); } - @NoException(true) private native void allocate(); - - public GeneratorImplPtr(@ByVal @Cast("std::nullptr_t*") PointerPointer arg0) { super((Pointer)null); allocate(arg0); } - @NoException(true) private native void allocate(@ByVal @Cast("std::nullptr_t*") PointerPointer arg0); - - // This constructor will not increase the ref counter for you. - // We use the tagged dispatch mechanism to explicitly mark this constructor - // to not increase the refcount - public GeneratorImplPtr(GeneratorImpl target, @ByVal DontIncreaseRefcount arg1) { super((Pointer)null); allocate(target, arg1); } - @NoException(true) private native void allocate(GeneratorImpl target, @ByVal DontIncreaseRefcount arg1); - - - - public GeneratorImplPtr(@ByRef(true) GeneratorImplPtr rhs) { super((Pointer)null); allocate(rhs); } - @NoException(true) private native void allocate(@ByRef(true) GeneratorImplPtr rhs); - - public native @ByRef @Name("operator =") @NoException(true) GeneratorImplPtr put(@ByRef(true) GeneratorImplPtr rhs); - - // Assignment is implemented using copy and swap. That's safe for self - // assignment. - // NOLINTNEXTLINE(bugprone-unhandled-self-assignment) - - public native @NoException(true) GeneratorImpl get(); - - public native @ByRef @Name("operator *") @NoException(true) GeneratorImpl multiply(); - - public native @Name("operator ->") @NoException(true) GeneratorImpl access(); - - public native @Cast("bool") @Name("operator bool") @NoException(true) boolean asBoolean(); - - public native @NoException(true) void reset(); - - public native @NoException(true) void swap(@ByRef GeneratorImplPtr rhs); - - // We do a lot of null-pointer checks in our code, good to have this be cheap. - public native @Cast("bool") @NoException(true) boolean defined(); - - public native @Cast("uint32_t") @NoException(true) int use_count(); - - public native @Cast("uint32_t") @NoException(true) int weak_use_count(); - - public native @Cast("bool") @NoException(true) boolean unique(); - - /** - * Returns an owning (!) pointer to the underlying object and makes the - * intrusive_ptr instance invalid. That means the refcount is not decreased. - * You *must* put the returned pointer back into a intrusive_ptr using - * intrusive_ptr::reclaim(ptr) to properly destruct it. - * This is helpful for C APIs. - */ - public native @NoException(true) GeneratorImpl release(); - - /** - * Takes an owning pointer to TTarget* and creates an intrusive_ptr that takes - * over ownership. That means the refcount is not increased. - * This is the counter-part to intrusive_ptr::release() and the pointer - * passed in *must* have been created using intrusive_ptr::release(). - */ - public static native @ByVal GeneratorImplPtr reclaim(GeneratorImpl owning_ptr); - - /** - * Takes an owning pointer to TTarget* and creates an intrusive_ptr - * representing a new reference, i.e. the raw pointer retains - * ownership. 
- */ - public static native @ByVal GeneratorImplPtr reclaim_copy(GeneratorImpl owning_ptr); - - /** - * Allocate a heap object with args and wrap it inside a intrusive_ptr and - * incref. This is a helper function to let make_intrusive() access private - * intrusive_ptr constructors. - */ - - /** - * Turn a new instance of TTarget (e.g., literally allocated - * using new TTarget(...) into an intrusive_ptr. If possible, - * use intrusive_ptr::make instead which statically guarantees - * that the allocation was done properly. - * - * At the moment, the only reason this method exists is because - * pybind11 holder types expect to be able to allocate in - * this way (because pybind11 handles the new allocation itself). - */ - public static native @ByVal GeneratorImplPtr unsafe_steal_from_new(GeneratorImpl raw_ptr); - - /** - * Turn an instance of TTarget that should not be reference counted - * (e.g., allocated into an arena with placement new) into an - * intrusive_ptr. This is gratuitously unsafe and should only be - * used if you can guarantee that the pointer will not escape and be - * refcounted as normal. - * - * {@code expected_decrefs} is a debugging parameter: it indicates the - * number of strong owners the intrusive_ptr_target in question is - * expected to get. In most use cases, this will likely be 1. - * - * The reason this method exists is for manually sharing - * StorageImpls across Tensors in the static runtime. It needs - * access to private intrusive_ptr members so that the refcounts can - * be initialized to custom values. - */ - public static native @ByVal GeneratorImplPtr unsafe_adapt_non_heap_allocated( - GeneratorImpl raw_ptr, - @Cast("uint32_t") int expected_decrefs); - - /** - * Turn a **non-owning raw pointer** to an intrusive_ptr. It is - * the moral equivalent of enable_shared_from_this on a shared pointer. - * - * This method is only valid for objects that are already live. If - * you are looking for the moral equivalent of unique_ptr(T*) - * constructor, see steal_from_new. - * - * TODO: https://github.com/pytorch/pytorch/issues/56482 - */ - public static native @ByVal GeneratorImplPtr unsafe_reclaim_from_nonowning(GeneratorImpl raw_ptr); -} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GeneratorOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GeneratorOptional.java index c99688225dc..5a72bfe78aa 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GeneratorOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GeneratorOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class GeneratorOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
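[Note: illustrative sketch, not part of the generated diff. With the GeneratorImplPtr holder deleted above, Generator.getIntrusivePtr() now returns the GeneratorImpl itself (annotated with @IntrusivePtr), so callers drop one level of indirection; as before, a valid Generator is assumed to come from elsewhere.]

import org.bytedeco.pytorch.Generator;
import org.bytedeco.pytorch.GeneratorImpl;

public class GeneratorImplSketch {
    static long seedOf(Generator gen) {
        GeneratorImpl impl = gen.getIntrusivePtr();  // was GeneratorImplPtr in 2.3.x
        return impl.current_seed();                  // read the seed through the impl
    }
}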
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GeneratorType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GeneratorType.java index c684cd347d9..386555547f0 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GeneratorType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GeneratorType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GeneratorTypePtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GeneratorTypePtr.java index 731ae893415..5f403e8d1d7 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GeneratorTypePtr.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GeneratorTypePtr.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GenericDict.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GenericDict.java index e3a958cbf95..620a92bde70 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GenericDict.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GenericDict.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -34,7 +35,7 @@ * map implementation without breaking backwards compatibility * for the kernel API. */ -@Name("c10::Dict") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@Name("c10::Dict") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class GenericDict extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GenericDictEntryRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GenericDictEntryRef.java index 97d2141c6df..66f730a374c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GenericDictEntryRef.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GenericDictEntryRef.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GenericDictIterator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GenericDictIterator.java index aab39186008..ab98aa94494 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GenericDictIterator.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GenericDictIterator.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GenericElementReference.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GenericElementReference.java index 3f0a27bb3a4..41f184ae40d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GenericElementReference.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GenericElementReference.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,7 +25,7 @@ public class GenericElementReference extends Pointer { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public GenericElementReference(Pointer p) { super(p); } - public native @Name("operator std::conditional_t::type>::value,const c10::IValue&,c10::IValue>") @ByVal IValue getGeneric(); + public native @Name("operator std::conditional_t::type>,const c10::IValue&,c10::IValue>") @ByVal IValue getGeneric(); @@ -35,7 +36,7 @@ public class GenericElementReference extends Pointer { public native @Const @ByRef IValue get(); - private static native @Namespace void swap(@ByRef(true) GenericElementReference lhs, @ByRef(true) GenericElementReference rhs); + private static native @Namespace @NoException(true) void swap(@ByRef(true) GenericElementReference lhs, @ByRef(true) GenericElementReference rhs); public void swap(GenericElementReference rhs) { swap(this, rhs); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GenericList.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GenericList.java index 392299e3fe5..28f562288d7 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GenericList.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GenericList.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@Name("c10::List") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@Name("c10::List") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class GenericList extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GenericListIterator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GenericListIterator.java index c2dee0683d9..41db8a84dcb 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GenericListIterator.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GenericListIterator.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Global.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Global.java index fbd303a1275..8f457ff0f3b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Global.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Global.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,8 +25,8 @@ public class Global extends Stmt { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public Global(Pointer p) { super(p); } - public Global(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public Global(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @ByVal IdentList names(); public static native @ByVal Global create(@Const @ByRef SourceRange range, @Const @ByRef IdentList names); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GlooDeviceVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GlooDeviceVector.java new file mode 100644 index 00000000000..b02314c18d9 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GlooDeviceVector.java @@ -0,0 +1,91 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + +@Name("std::vector >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class GlooDeviceVector extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public GlooDeviceVector(Pointer p) { super(p); } + public GlooDeviceVector(org.bytedeco.pytorch.gloo.Device value) { this(1); put(0, value); } + public GlooDeviceVector(org.bytedeco.pytorch.gloo.Device ... 
array) { this(array.length); put(array); } + public GlooDeviceVector() { allocate(); } + public GlooDeviceVector(long n) { allocate(n); } + private native void allocate(); + private native void allocate(@Cast("size_t") long n); + public native @Name("operator =") @ByRef GlooDeviceVector put(@ByRef GlooDeviceVector x); + + public boolean empty() { return size() == 0; } + public native long size(); + public void clear() { resize(0); } + public native void resize(@Cast("size_t") long n); + + public org.bytedeco.pytorch.gloo.Device front() { return get(0); } + public org.bytedeco.pytorch.gloo.Device back() { return get(size() - 1); } + @Index(function = "at") public native @SharedPtr org.bytedeco.pytorch.gloo.Device get(@Cast("size_t") long i); + public native GlooDeviceVector put(@Cast("size_t") long i, org.bytedeco.pytorch.gloo.Device value); + + public native @ByVal Iterator insert(@ByVal Iterator pos, @SharedPtr org.bytedeco.pytorch.gloo.Device value); + public native @ByVal Iterator erase(@ByVal Iterator pos); + public native @ByVal Iterator begin(); + public native @ByVal Iterator end(); + @NoOffset @Name("iterator") public static class Iterator extends Pointer { + public Iterator(Pointer p) { super(p); } + public Iterator() { } + + public native @Name("operator ++") @ByRef Iterator increment(); + public native @Name("operator ==") boolean equals(@ByRef Iterator it); + public native @Name("operator *") @SharedPtr @Const org.bytedeco.pytorch.gloo.Device get(); + } + + public org.bytedeco.pytorch.gloo.Device[] get() { + org.bytedeco.pytorch.gloo.Device[] array = new org.bytedeco.pytorch.gloo.Device[size() < Integer.MAX_VALUE ? (int)size() : Integer.MAX_VALUE]; + for (int i = 0; i < array.length; i++) { + array[i] = get(i); + } + return array; + } + @Override public String toString() { + return java.util.Arrays.toString(get()); + } + + public org.bytedeco.pytorch.gloo.Device pop_back() { + long size = size(); + org.bytedeco.pytorch.gloo.Device value = get(size - 1); + resize(size - 1); + return value; + } + public GlooDeviceVector push_back(org.bytedeco.pytorch.gloo.Device value) { + long size = size(); + resize(size + 1); + return put(size, value); + } + public GlooDeviceVector put(org.bytedeco.pytorch.gloo.Device value) { + if (size() != 1) { resize(1); } + return put(0, value); + } + public GlooDeviceVector put(org.bytedeco.pytorch.gloo.Device ... 
array) { + if (size() != array.length) { resize(array.length); } + for (int i = 0; i < array.length; i++) { + put(i, array[i]); + } + return this; + } +} + diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GradBucket.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GradBucket.java new file mode 100644 index 00000000000..f5381868035 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GradBucket.java @@ -0,0 +1,73 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +// This class passes bucket contents tensor to DDP communication hook. +@Namespace("c10d") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class GradBucket extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public GradBucket(Pointer p) { super(p); } + + public GradBucket( + @Cast("size_t") long index, + @Cast("size_t") long bucket_count, + @ByVal Tensor tensor, + @ByVal @Cast("std::vector*") SizeTVector offsets, + @ByVal @Cast("std::vector*") SizeTVector lengths, + @ByVal LongArrayRefVector sizes_vec, + @ByVal TensorVector parameters, + @ByVal TensorOptional sparse_grad_indices) { super((Pointer)null); allocate(index, bucket_count, tensor, offsets, lengths, sizes_vec, parameters, sparse_grad_indices); } + private native void allocate( + @Cast("size_t") long index, + @Cast("size_t") long bucket_count, + @ByVal Tensor tensor, + @ByVal @Cast("std::vector*") SizeTVector offsets, + @ByVal @Cast("std::vector*") SizeTVector lengths, + @ByVal LongArrayRefVector sizes_vec, + @ByVal TensorVector parameters, + @ByVal TensorOptional sparse_grad_indices); + + // Returns the index of the bucket, which is unique across all the buckets. + public native @Cast("size_t") long getIndex(); + + public native @Const @ByRef Tensor getBuffer(); + + // Returns a mutable buffer compared with the above method. + public native @ByRef Tensor getBufferRef(); + + // Overwrites the buffer at a specific index. + public native void setBuffer(@ByRef Tensor buffer); + + // Each tensor in the list returned by getGradients() corresponds to a + // parameter. + public native @ByVal TensorVector getGradients(); + + // Returns model parameters belonging to this bucket. They are returned in the + // same order as gradient tensors via getGradients(). For example, + // getParameters[i] will have its gradient stored in + // getGradients[i]. + public native @Const @ByVal TensorVector getParameters(); + + // Returns whether this bucket is the last bucket to allreduce in an iteration.
+ public native @Cast("bool") boolean isLast(); + + public native @ByRef TensorOptional getSparseGradIndices(); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GradMode.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GradMode.java index 04d7840f3c6..3beb4cd64ac 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GradMode.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GradMode.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Graph.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Graph.java index dc99d45c137..88709059cef 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Graph.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Graph.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -173,8 +174,8 @@ public native JitNode createClone( // Insert constant IValue into the graph. 
public native Value insertConstant( @Const @ByRef IValue val, - @ByVal(nullValue = "c10::optional(c10::nullopt)") SourceRangeOptional loc, - @ByVal(nullValue = "c10::optional(c10::nullopt)") @Cast("c10::optional*") ScopeOptional scope); + @ByVal(nullValue = "std::optional(c10::nullopt)") SourceRangeOptional loc, + @ByVal(nullValue = "std::optional(c10::nullopt)") @Cast("std::optional*") ScopeOptional scope); public native Value insertConstant( @Const @ByRef IValue val); @@ -189,7 +190,7 @@ public native Value insert( @ByVal Symbol opname, @ByVal NamedValueArrayRef args, @ByVal(nullValue = "at::ArrayRef{}") NamedValueArrayRef kwargs, - @Const @ByRef(nullValue = "c10::optional{}") SourceRangeOptional range); + @Const @ByRef(nullValue = "std::optional{}") SourceRangeOptional range); public native Value insert( @ByVal Symbol opname, @ByVal NamedValueArrayRef args); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GraphAttr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GraphAttr.java index 591c286f94b..e1b81159cc6 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GraphAttr.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GraphAttr.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GraphExecutor.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GraphExecutor.java index ae8667792d3..bc03282ad5f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GraphExecutor.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GraphExecutor.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -57,10 +58,10 @@ private native void allocate( @Cast("torch::jit::ExecutorExecutionMode") int executor_mode); public native void run(@ByRef IValueVector inputs); - public native @ByVal FuturePtr runAsync( + public native @IntrusivePtr("c10::ivalue::Future") @Cast({"", "c10::intrusive_ptr&"}) Future runAsync( @ByRef IValueVector stack, @ByVal(nullValue = "torch::jit::TaskLauncher(at::launch)") @Cast("torch::jit::TaskLauncher*") Pointer taskLauncher); - public native @ByVal FuturePtr runAsync( + public native @IntrusivePtr("c10::ivalue::Future") @Cast({"", "c10::intrusive_ptr&"}) Future runAsync( @ByRef IValueVector stack); // `remaining_bailout_depth` stands for the maximum number of profiled and @@ -76,7 +77,7 @@ private native void allocate( // current global fusion strategy settings. 
public native @Const @ByRef ExecutionPlan getPlanFor( @ByRef IValueVector inputs, - @ByVal(nullValue = "c10::optional(c10::nullopt)") SizeTOptional remaining_bailout_depth); + @ByVal(nullValue = "std::optional(c10::nullopt)") SizeTOptional remaining_bailout_depth); public native @Const @ByRef ExecutionPlan getPlanFor( @ByRef IValueVector inputs); public native @ByVal GraphExecutorState getDebugState(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GraphExecutorImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GraphExecutorImplBase.java index e75c1d941c2..83d40b78c7b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GraphExecutorImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GraphExecutorImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GraphExecutorState.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GraphExecutorState.java index 84ade44a006..7711a0c44b3 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GraphExecutorState.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GraphExecutorState.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GraphFunction.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GraphFunction.java index 534b360e810..dcfd1622b2f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GraphFunction.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GraphFunction.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -29,12 +30,12 @@ public GraphFunction( @ByVal QualifiedName name, @SharedPtr("torch::jit::Graph") @ByVal Graph graph, @ByVal GraphFunctionCreator function_creator, - @ByVal(nullValue = "c10::optional(c10::nullopt)") ExecutorExecutionModeOptional executor_execution_mode) { super((Pointer)null); allocate(name, graph, function_creator, executor_execution_mode); } + @ByVal(nullValue = "std::optional(c10::nullopt)") ExecutorExecutionModeOptional 
executor_execution_mode) { super((Pointer)null); allocate(name, graph, function_creator, executor_execution_mode); } private native void allocate( @ByVal QualifiedName name, @SharedPtr("torch::jit::Graph") @ByVal Graph graph, @ByVal GraphFunctionCreator function_creator, - @ByVal(nullValue = "c10::optional(c10::nullopt)") ExecutorExecutionModeOptional executor_execution_mode); + @ByVal(nullValue = "std::optional(c10::nullopt)") ExecutorExecutionModeOptional executor_execution_mode); public GraphFunction( @ByVal QualifiedName name, @SharedPtr("torch::jit::Graph") @ByVal Graph graph, @@ -50,10 +51,10 @@ private native void allocate( - public native @ByVal FuturePtr runAsync( + public native @IntrusivePtr("c10::ivalue::Future") @Cast({"", "c10::intrusive_ptr&"}) Future runAsync( @ByRef IValueVector stack, @ByVal(nullValue = "torch::jit::TaskLauncher(at::launch)") @Cast("torch::jit::TaskLauncher*") Pointer taskLauncher); - public native @ByVal FuturePtr runAsync( + public native @IntrusivePtr("c10::ivalue::Future") @Cast({"", "c10::intrusive_ptr&"}) Future runAsync( @ByRef IValueVector stack); public native @SharedPtr("torch::jit::Graph") @ByVal Graph graph(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GraphOptimizerEnabledGuard.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GraphOptimizerEnabledGuard.java index 0f650fedba5..6fa998987ad 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GraphOptimizerEnabledGuard.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GraphOptimizerEnabledGuard.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GraphVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GraphVector.java index 1a01b562988..5bb1fc1f460 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GraphVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GraphVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GraphsAttr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GraphsAttr.java index e16b610f800..c04162e43c5 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GraphsAttr.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GraphsAttr.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static 
org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GridSampleFuncOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GridSampleFuncOptions.java index 482891f6dbc..7edfeea5432 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GridSampleFuncOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GridSampleFuncOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GridSampleMode.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GridSampleMode.java index 611d26543ea..1007ae5b819 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GridSampleMode.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GridSampleMode.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GridSamplePaddingMode.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GridSamplePaddingMode.java index f0a0d579a16..04469eeca8f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GridSamplePaddingMode.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GridSamplePaddingMode.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GroupNormFuncOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GroupNormFuncOptions.java index 317d2492f44..58f5f469206 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GroupNormFuncOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GroupNormFuncOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import 
org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GroupNormImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GroupNormImpl.java index c0d64cffc5c..6053acdb54a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GroupNormImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GroupNormImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -22,7 +23,7 @@ /** Applies Group Normalization over a mini-batch of inputs as described in * the paper {@code Group Normalization}_ . - * See https://pytorch.org/docs/master/nn.html#torch.nn.GroupNorm to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.GroupNorm to learn * about the exact behavior of this module. * * See the documentation for {@code torch::nn::GroupNormOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GroupNormImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GroupNormImplCloneable.java index 24c1513a983..78c3d08255c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GroupNormImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GroupNormImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class GroupNormImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GroupNormOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GroupNormOptions.java index 3847a1729c6..75064e3a5ce 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GroupNormOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GroupNormOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/GumbelSoftmaxFuncOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/GumbelSoftmaxFuncOptions.java index bc4d44181e8..92bca0308d8 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/GumbelSoftmaxFuncOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/GumbelSoftmaxFuncOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/HIPHooksArgs.java b/pytorch/src/gen/java/org/bytedeco/pytorch/HIPHooksArgs.java index 1c24ac7573b..aeab6be96f6 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/HIPHooksArgs.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/HIPHooksArgs.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/HIPHooksInterface.java b/pytorch/src/gen/java/org/bytedeco/pytorch/HIPHooksInterface.java index 4069d8ea30c..8da8d17f9c6 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/HIPHooksInterface.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/HIPHooksInterface.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import 
org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Half.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Half.java index 3447c5fca84..42213b00ffe 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Half.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Half.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace detail diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/HalfArrayRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/HalfArrayRef.java index 3e91f84d355..21fb9018e24 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/HalfArrayRef.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/HalfArrayRef.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/HalfComplex.java b/pytorch/src/gen/java/org/bytedeco/pytorch/HalfComplex.java index 366d336e62f..9e48e5d477c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/HalfComplex.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/HalfComplex.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/HardshrinkImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/HardshrinkImpl.java index 6b03ce788e8..99dda025c97 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/HardshrinkImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/HardshrinkImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 
+13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Hardshrink ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies the hard shrinkage function element-wise. - * See https://pytorch.org/docs/master/nn.html#torch.nn.Hardshrink to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.Hardshrink to learn * about the exact behavior of this module. * * See the documentation for {@code torch::nn::HardshrinkOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/HardshrinkImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/HardshrinkImplCloneable.java index 49339277be2..426499c46c4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/HardshrinkImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/HardshrinkImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class HardshrinkImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/HardshrinkOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/HardshrinkOptions.java index 81c978e4c4b..5e05d04bfb0 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/HardshrinkOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/HardshrinkOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/HardtanhImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/HardtanhImpl.java index fbb5887439f..754d31c2861 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/HardtanhImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/HardtanhImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Hardtanh ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies the HardTanh function element-wise. - * See https://pytorch.org/docs/master/nn.html#torch.nn.Hardtanh to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.Hardtanh to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::HardtanhOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/HardtanhImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/HardtanhImplCloneable.java index b65b8fddb4e..4989af3c526 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/HardtanhImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/HardtanhImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class HardtanhImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/HardtanhOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/HardtanhOptions.java index 12d97fa8d18..412aea448f4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/HardtanhOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/HardtanhOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/HashAliasedIValueMap.java b/pytorch/src/gen/java/org/bytedeco/pytorch/HashAliasedIValueMap.java index e55a205d539..7ca334ed067 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/HashAliasedIValueMap.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/HashAliasedIValueMap.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/HashAliasedIValues.java b/pytorch/src/gen/java/org/bytedeco/pytorch/HashAliasedIValues.java index c95c706878c..2b1136a34b6 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/HashAliasedIValues.java +++ 
b/pytorch/src/gen/java/org/bytedeco/pytorch/HashAliasedIValues.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/HashIdentityIValueMap.java b/pytorch/src/gen/java/org/bytedeco/pytorch/HashIdentityIValueMap.java new file mode 100644 index 00000000000..a51b52a10bb --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/HashIdentityIValueMap.java @@ -0,0 +1,49 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + +@Name("std::unordered_map") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class HashIdentityIValueMap extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ + public HashIdentityIValueMap(Pointer p) { super(p); } + public HashIdentityIValueMap() { allocate(); } + private native void allocate(); + public native @Name("operator =") @ByRef HashIdentityIValueMap put(@ByRef HashIdentityIValueMap x); + + public boolean empty() { return size() == 0; } + public native long size(); + + @Index public native @ByRef IValue get(@ByRef IValue i); + public native HashIdentityIValueMap put(@ByRef IValue i, IValue value); + + public native void erase(@ByVal Iterator pos); + public native @ByVal Iterator begin(); + public native @ByVal Iterator end(); + @NoOffset @Name("iterator") public static class Iterator extends Pointer { + public Iterator(Pointer p) { super(p); } + public Iterator() { } + + public native @Name("operator ++") @ByRef Iterator increment(); + public native @Name("operator ==") boolean equals(@ByRef Iterator it); + public native @Name("operator *().first") @MemberGetter @ByRef @Const IValue first(); + public native @Name("operator *().second") @MemberGetter @ByRef @Const IValue second(); + } +} + diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/HermeticPyObjectTLS.java b/pytorch/src/gen/java/org/bytedeco/pytorch/HermeticPyObjectTLS.java index e9b79e0f566..919e320df15 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/HermeticPyObjectTLS.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/HermeticPyObjectTLS.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/HingeEmbeddingLossImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/HingeEmbeddingLossImpl.java index ffcc664973e..cef1d69507d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/HingeEmbeddingLossImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/HingeEmbeddingLossImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -23,7 +24,7 @@ /** Creates a criterion that measures the loss given an input tensor :math:{@code x} * and a labels tensor :math:{@code y} (containing 1 or -1). - * See https://pytorch.org/docs/master/nn.html#torch.nn.HingeEmbeddingLoss to + * See https://pytorch.org/docs/main/nn.html#torch.nn.HingeEmbeddingLoss to * learn about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::HingeEmbeddingLossOptions} class to diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/HingeEmbeddingLossImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/HingeEmbeddingLossImplCloneable.java index d4da78bd6d5..38ac8b2ff47 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/HingeEmbeddingLossImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/HingeEmbeddingLossImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class HingeEmbeddingLossImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/HingeEmbeddingLossOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/HingeEmbeddingLossOptions.java index 5396d9eb270..aeec0ee39c2 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/HingeEmbeddingLossOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/HingeEmbeddingLossOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/HuberLossImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/HuberLossImpl.java index 72b653abdc6..0b882067c63 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/HuberLossImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/HuberLossImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -23,7 +24,7 @@ /** Creates a criterion that uses a squared term if the absolute * element-wise error falls below delta and a delta-scaled L1 term otherwise. 
- * See https://pytorch.org/docs/master/nn.html#torch.nn.HuberLoss to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.HuberLoss to learn * about the exact behavior of this module. * * See the documentation for {@code torch::nn::HuberLossOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/HuberLossImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/HuberLossImplCloneable.java index 6cf018deaca..076cd6e982d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/HuberLossImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/HuberLossImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class HuberLossImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/HuberLossOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/HuberLossOptions.java index 4c26ae4f9f1..ea5c74d2b5d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/HuberLossOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/HuberLossOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/IMethod.java b/pytorch/src/gen/java/org/bytedeco/pytorch/IMethod.java index eed59590dfe..a504d508026 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/IMethod.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/IMethod.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/IPUHooksArgs.java 
b/pytorch/src/gen/java/org/bytedeco/pytorch/IPUHooksArgs.java index 16128406487..2a8983f36b9 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/IPUHooksArgs.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/IPUHooksArgs.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/IPUHooksInterface.java b/pytorch/src/gen/java/org/bytedeco/pytorch/IPUHooksInterface.java index c664a2f4f84..83cbe8f2edf 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/IPUHooksInterface.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/IPUHooksInterface.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/IStreamAdapter.java b/pytorch/src/gen/java/org/bytedeco/pytorch/IStreamAdapter.java index ddd551112b6..eda04a75178 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/IStreamAdapter.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/IStreamAdapter.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/IValue.java b/pytorch/src/gen/java/org/bytedeco/pytorch/IValue.java index 8befb02b029..3e8da1cd5ca 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/IValue.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/IValue.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -188,11 +189,11 @@ public class IValue extends Pointer { // Tuple - public IValue(@ByVal TuplePtr v) { super((Pointer)null); 
allocate(v); } - private native void allocate(@ByVal TuplePtr v); + public IValue(@IntrusivePtr("c10::ivalue::Tuple") @Cast({"", "c10::intrusive_ptr&"}) Tuple v) { super((Pointer)null); allocate(v); } + private native void allocate(@IntrusivePtr("c10::ivalue::Tuple") @Cast({"", "c10::intrusive_ptr&"}) Tuple v); public native @Cast("bool") boolean isTuple(); - public native @ByVal TuplePtr toTuple(); + public native @IntrusivePtr("c10::ivalue::Tuple") @Cast({"", "c10::intrusive_ptr&"}) Tuple toTuple(); public native @ByRef Tuple toTupleRef(); // Double @@ -206,31 +207,31 @@ public class IValue extends Pointer { public native @ByVal DoubleComplex toComplexDouble(); // Future - public IValue(@ByVal FuturePtr v) { super((Pointer)null); allocate(v); } - private native void allocate(@ByVal FuturePtr v); + public IValue(@IntrusivePtr("c10::ivalue::Future") @Cast({"", "c10::intrusive_ptr&"}) Future v) { super((Pointer)null); allocate(v); } + private native void allocate(@IntrusivePtr("c10::ivalue::Future") @Cast({"", "c10::intrusive_ptr&"}) Future v); public native @Cast("bool") boolean isFuture(); - public native @ByVal FuturePtr toFuture(); + public native @IntrusivePtr("c10::ivalue::Future") @Cast({"", "c10::intrusive_ptr&"}) Future toFuture(); - public IValue(@ByVal AwaitPtr v) { super((Pointer)null); allocate(v); } - private native void allocate(@ByVal AwaitPtr v); + public IValue(@IntrusivePtr("c10::ivalue::Await") @Cast({"", "c10::intrusive_ptr&"}) Await v) { super((Pointer)null); allocate(v); } + private native void allocate(@IntrusivePtr("c10::ivalue::Await") @Cast({"", "c10::intrusive_ptr&"}) Await v); public native @Cast("bool") boolean isAwait(); - public native @ByVal AwaitPtr toAwait(); + public native @IntrusivePtr("c10::ivalue::Await") @Cast({"", "c10::intrusive_ptr&"}) Await toAwait(); // RRef - public IValue(@ByVal RRefInterfacePtr v) { super((Pointer)null); allocate(v); } - private native void allocate(@ByVal RRefInterfacePtr v); + public IValue(@IntrusivePtr("c10::RRefInterface") @Cast({"", "c10::intrusive_ptr&"}) RRefInterface v) { super((Pointer)null); allocate(v); } + private native void allocate(@IntrusivePtr("c10::RRefInterface") @Cast({"", "c10::intrusive_ptr&"}) RRefInterface v); public native @Cast("bool") boolean isRRef(); - public native @ByVal RRefInterfacePtr toRRef(); + public native @IntrusivePtr("c10::RRefInterface") @Cast({"", "c10::intrusive_ptr&"}) RRefInterface toRRef(); // Quantizer - public IValue(@ByVal QuantizerPtr v) { super((Pointer)null); allocate(v); } - private native void allocate(@ByVal QuantizerPtr v); + public IValue(@IntrusivePtr("at::Quantizer") @Cast({"", "c10::intrusive_ptr&"}) Quantizer v) { super((Pointer)null); allocate(v); } + private native void allocate(@IntrusivePtr("at::Quantizer") @Cast({"", "c10::intrusive_ptr&"}) Quantizer v); public native @Cast("bool") boolean isQuantizer(); - public native @ByVal QuantizerPtr toQuantizer(); + public native @IntrusivePtr("at::Quantizer") @Cast({"", "c10::intrusive_ptr&"}) Quantizer toQuantizer(); // Int public IValue(@Cast("int64_t") long i) { super((Pointer)null); allocate(i); } @@ -284,17 +285,17 @@ public class IValue extends Pointer { public native @ByVal DimVector toDimVector(); // ConstantString - public IValue(@ByVal ConstantStringPtr v) { super((Pointer)null); allocate(v); } - private native void allocate(@ByVal ConstantStringPtr v); + public IValue(@IntrusivePtr("c10::ivalue::ConstantString") @Cast({"", "c10::intrusive_ptr&"}) ConstantString v) { super((Pointer)null); allocate(v); } + 
private native void allocate(@IntrusivePtr("c10::ivalue::ConstantString") @Cast({"", "c10::intrusive_ptr&"}) ConstantString v); public IValue(@StdString BytePointer v) { super((Pointer)null); allocate(v); } private native void allocate(@StdString BytePointer v); public IValue(@StdString String v) { super((Pointer)null); allocate(v); } private native void allocate(@StdString String v); public native @Cast("bool") boolean isString(); - public native @ByVal @Name("toString") ConstantStringPtr toConstantString(); + public native @IntrusivePtr("c10::ivalue::ConstantString") @Name("toString") @Cast({"", "c10::intrusive_ptr&"}) ConstantString toConstantString(); public native @StdString BytePointer toStringRef(); - public native @ByVal @Cast("c10::optional >*") Pointer toOptionalStringRef(); + public native @ByVal @Cast("std::optional >*") Pointer toOptionalStringRef(); public native @StringView BytePointer toStringView(); // DoubleList @@ -358,30 +359,30 @@ public class IValue extends Pointer { public native @ByVal GenericDict toGenericDict(); // ClassType - public IValue(@ByVal ObjPtr v) { super((Pointer)null); allocate(v); } - private native void allocate(@ByVal ObjPtr v); + public IValue(@IntrusivePtr("c10::ivalue::Object") @Cast({"", "c10::intrusive_ptr&"}) Obj v) { super((Pointer)null); allocate(v); } + private native void allocate(@IntrusivePtr("c10::ivalue::Object") @Cast({"", "c10::intrusive_ptr&"}) Obj v); public native @Cast("bool") boolean isObject(); - public native @ByVal ObjPtr toObject(); - public native @ByRef Object toObjectRef(); + public native @IntrusivePtr("c10::ivalue::Object") @Cast({"", "c10::intrusive_ptr&"}) Obj toObject(); + public native @ByRef Obj toObjectRef(); public native @Cast("bool") boolean isModule(); // PyObject - public IValue(@ByVal PyObjectHolderPtr v) { super((Pointer)null); allocate(v); } - private native void allocate(@ByVal PyObjectHolderPtr v); + public IValue(@IntrusivePtr("c10::ivalue::PyObjectHolder") @Cast({"", "c10::intrusive_ptr&"}) PyObjectHolder v) { super((Pointer)null); allocate(v); } + private native void allocate(@IntrusivePtr("c10::ivalue::PyObjectHolder") @Cast({"", "c10::intrusive_ptr&"}) PyObjectHolder v); public native @Cast("bool") boolean isPyObject(); - public native @ByVal PyObjectHolderPtr toPyObjectHolder(); + public native @IntrusivePtr("c10::ivalue::PyObjectHolder") @Cast({"", "c10::intrusive_ptr&"}) PyObjectHolder toPyObjectHolder(); public native @Cast("PyObject*") Pointer toPyObject(); // Enum - public IValue(@ByVal EnumHolderPtr v) { super((Pointer)null); allocate(v); } - private native void allocate(@ByVal EnumHolderPtr v); + public IValue(@IntrusivePtr("c10::ivalue::EnumHolder") @Cast({"", "c10::intrusive_ptr&"}) EnumHolder v) { super((Pointer)null); allocate(v); } + private native void allocate(@IntrusivePtr("c10::ivalue::EnumHolder") @Cast({"", "c10::intrusive_ptr&"}) EnumHolder v); public native @Cast("bool") boolean isEnum(); - public native @ByVal EnumHolderPtr toEnumHolder(); + public native @IntrusivePtr("c10::ivalue::EnumHolder") @Cast({"", "c10::intrusive_ptr&"}) EnumHolder toEnumHolder(); // None public IValue() { super((Pointer)null); allocate(); } @@ -497,6 +498,46 @@ public class IValue extends Pointer { // Detect aliased tensors. + public static class HashIdentityIValue extends Pointer { + static { Loader.load(); } + /** Default native constructor. */ + public HashIdentityIValue() { super((Pointer)null); allocate(); } + /** Native array allocator. Access with {@link Pointer#position(long)}. 
*/ + public HashIdentityIValue(long size) { super((Pointer)null); allocateArray(size); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public HashIdentityIValue(Pointer p) { super(p); } + private native void allocate(); + private native void allocateArray(long size); + @Override public HashIdentityIValue position(long position) { + return (HashIdentityIValue)super.position(position); + } + @Override public HashIdentityIValue getPointer(long i) { + return new HashIdentityIValue((Pointer)this).offsetAddress(i); + } + + public native @Cast("size_t") @Name("operator ()") long apply(@Const @ByRef IValue val); + } + + public static class CompIdentityIValues extends Pointer { + static { Loader.load(); } + /** Default native constructor. */ + public CompIdentityIValues() { super((Pointer)null); allocate(); } + /** Native array allocator. Access with {@link Pointer#position(long)}. */ + public CompIdentityIValues(long size) { super((Pointer)null); allocateArray(size); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public CompIdentityIValues(Pointer p) { super(p); } + private native void allocate(); + private native void allocateArray(long size); + @Override public CompIdentityIValues position(long position) { + return (CompIdentityIValues)super.position(position); + } + @Override public CompIdentityIValues getPointer(long i) { + return new CompIdentityIValues((Pointer)this).offsetAddress(i); + } + + public native @Cast("bool") @Name("operator ()") boolean apply(@Const @ByRef IValue lhs, @Const @ByRef IValue rhs); + } + // Chechs if this and rhs has a subvalues in common. // [t1,t2] and [t2, t3] returns true. public native @Cast("bool") boolean overlaps(@Const @ByRef IValue rhs); @@ -508,13 +549,13 @@ public class IValue extends Pointer { // TODO: There are several places that recurse over IValue. This is fragile. // This visitor should be used to recurse over ivalues. - public native @ByVal IValue deepcopy(@ByVal(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + public native @ByVal IValue deepcopy(@ByVal(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @ByVal IValue deepcopy(); public native @ByVal IValue deepcopy( - @ByRef HashAliasedIValueMap memo, - @ByVal(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @ByRef HashIdentityIValueMap memo, + @ByVal(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @ByVal IValue deepcopy( - @ByRef HashAliasedIValueMap memo); + @ByRef HashIdentityIValueMap memo); // Don't edit this just to add results for new tags; edit // isIntrusivePtrConstexpr above. 
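[Editor's note, not part of the generated patch: the IValue hunk above replaces the old *Ptr wrapper types (TuplePtr, FuturePtr, ObjPtr, ...) with the @IntrusivePtr-annotated classes themselves, and switches deepcopy's memo parameter to HashIdentityIValueMap. A minimal usage sketch against the signatures visible in this hunk; only the int64_t constructor, isTuple()/toTuple(), and the no-argument deepcopy() shown above are used, and having the pytorch preset on the classpath is assumed:

    import org.bytedeco.pytorch.*;

    public class IValueSketch {
        public static void main(String[] args) {
            IValue iv = new IValue(42L);     // int64_t constructor, unchanged in 2.4.x
            if (iv.isTuple()) {              // false for an int; shown only for the new accessor shape
                Tuple t = iv.toTuple();      // now returns Tuple directly instead of TuplePtr
            }
            IValue copy = iv.deepcopy();     // memo overload now takes HashIdentityIValueMap
        }
    }
]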
public native @Cast("bool") boolean isIntrusivePtr(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/IValueArrayRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/IValueArrayRef.java index bf2d206c354..1faf4b995b7 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/IValueArrayRef.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/IValueArrayRef.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/IValueOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/IValueOptional.java index 98bacdca838..14366f8ca29 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/IValueOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/IValueOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class IValueOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/IValueOptionalVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/IValueOptionalVector.java index eb97c90a6a0..6201463c717 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/IValueOptionalVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/IValueOptionalVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@Name("std::vector >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@Name("std::vector >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class IValueOptionalVector extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/IValueVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/IValueVector.java index 6d5e09970ae..fac827e1bcc 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/IValueVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/IValueVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Ident.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Ident.java index ebac329b4b4..0e442fe20cf 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Ident.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Ident.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,8 +25,8 @@ public class Ident extends TreeView { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public Ident(Pointer p) { super(p); } - public Ident(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public Ident(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @StdString BytePointer name(); public static native @ByVal Ident create(@Const @ByRef SourceRange range, @StdString BytePointer name); public static native @ByVal Ident create(@Const @ByRef SourceRange range, @StdString String name); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/IdentList.java b/pytorch/src/gen/java/org/bytedeco/pytorch/IdentList.java index a6d436638af..115f8bff3a9 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/IdentList.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/IdentList.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -25,8 +26,8 @@ public class IdentList extends TreeView { public IdentList(Pointer p) { super(p); } - public IdentList(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public IdentList(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @ByVal @Cast("torch::jit::List::iterator*") IdentListIterator begin(); public native @ByVal @Cast("torch::jit::List::iterator*") IdentListIterator end(); public native @Cast("bool") boolean empty(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/IdentListIterator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/IdentListIterator.java index 5413c4b6271..5b9042a2a43 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/IdentListIterator.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/IdentListIterator.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,8 +25,8 @@ public class IdentListIterator extends Pointer { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public IdentListIterator(Pointer p) { super(p); } - public IdentListIterator(@ByVal @Cast("torch::jit::TreeList::const_iterator*") TreeRef it) { super((Pointer)null); allocate(it); } - private native void allocate(@ByVal @Cast("torch::jit::TreeList::const_iterator*") TreeRef it); + public IdentListIterator(@ByVal @Cast("torch::jit::TreeList::const_iterator*") Tree it) { super((Pointer)null); allocate(it); } + private native void allocate(@ByVal @Cast("torch::jit::TreeList::const_iterator*") Tree it); public native @Cast("bool") @Name("operator !=") boolean notEquals(@Const @ByRef IdentListIterator rhs); public native @Cast("bool") @Name("operator ==") boolean equals(@Const @ByRef IdentListIterator rhs); public native @ByVal @Name("operator *") Ident multiply(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/IdentityImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/IdentityImpl.java index 6d2f39723ff..ece9fd87a6c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/IdentityImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/IdentityImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Identity ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** A placeholder identity operator that is argument-insensitive. - * See https://pytorch.org/docs/master/generated/torch.nn.Identity.html to + * See https://pytorch.org/docs/main/generated/torch.nn.Identity.html to * learn about the exact behavior of this module. */ @Namespace("torch::nn") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class IdentityImpl extends IdentityImplCloneable { diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/IdentityImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/IdentityImplCloneable.java index a72416f178b..6dfe9b10f23 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/IdentityImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/IdentityImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class IdentityImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/If.java b/pytorch/src/gen/java/org/bytedeco/pytorch/If.java index d8b11fd5395..a630706a8ad 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/If.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/If.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -28,8 +29,8 @@ public class If extends Stmt { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ public If(Pointer p) { super(p); } - public If(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public If(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @ByVal Expr cond(); public native @ByVal StmtList trueBranch(); public native @ByVal StmtList falseBranch(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/IncludeDispatchKeyGuard.java b/pytorch/src/gen/java/org/bytedeco/pytorch/IncludeDispatchKeyGuard.java index ec2668bfb0c..dd0e26ceb00 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/IncludeDispatchKeyGuard.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/IncludeDispatchKeyGuard.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/IndexError.java b/pytorch/src/gen/java/org/bytedeco/pytorch/IndexError.java deleted file mode 100644 index df13f6e1eac..00000000000 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/IndexError.java +++ /dev/null @@ -1,30 +0,0 @@ -// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE - -package org.bytedeco.pytorch; - -import org.bytedeco.pytorch.Allocator; -import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; -import org.bytedeco.pytorch.Module; -import org.bytedeco.javacpp.annotation.Cast; -import java.nio.*; -import org.bytedeco.javacpp.*; -import org.bytedeco.javacpp.annotation.*; - -import static org.bytedeco.javacpp.presets.javacpp.*; -import static org.bytedeco.openblas.global.openblas_nolapack.*; -import static 
org.bytedeco.openblas.global.openblas.*; - -import static org.bytedeco.pytorch.global.torch.*; - - -// Used in ATen for out-of-bound indices that can reasonably only be detected -// lazily inside a kernel (See: advanced indexing). These turn into -// IndexError when they cross to Python. -@Namespace("c10") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class IndexError extends Error { - static { Loader.load(); } - /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ - public IndexError(Pointer p) { super(p); } - -} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/InferenceMode.java b/pytorch/src/gen/java/org/bytedeco/pytorch/InferenceMode.java index af26c21b570..0f21933b1f1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/InferenceMode.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/InferenceMode.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/InferredType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/InferredType.java index 7509a5dfd27..44aed1cf94f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/InferredType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/InferredType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/InlinedCallStack.java b/pytorch/src/gen/java/org/bytedeco/pytorch/InlinedCallStack.java index d63e5642add..47710926a09 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/InlinedCallStack.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/InlinedCallStack.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -85,7 +86,7 @@ private native void allocate( @StdString @ByRef BytePointer function_name); // Return next element in the callstack list. 
- public native @ByVal @Cast("c10::optional*") InlinedCallStackOptional callee(); + public native @ByVal @Cast("std::optional*") InlinedCallStackOptional callee(); // Return module instance associated with the current element. public native @ByVal ModuleInstanceInfoOptional module_instance(); @@ -100,7 +101,7 @@ private native void allocate( // Return callstack as a vector of [Function, SourceRange] pairs. public native @Cast("torch::jit::InlinedCallStackEntry*") @StdVector LongVector vec(); - public native void setCallee(@ByVal @Cast("c10::optional*") InlinedCallStackOptional arg0); + public native void setCallee(@ByVal @Cast("std::optional*") InlinedCallStackOptional arg0); public native @Cast("bool") @Name("operator ==") boolean equals(@Const @ByRef InlinedCallStack rhs); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/InlinedCallStackOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/InlinedCallStackOptional.java index 0f4fcfa6110..1271b07041e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/InlinedCallStackOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/InlinedCallStackOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class InlinedCallStackOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/InputArchive.java b/pytorch/src/gen/java/org/bytedeco/pytorch/InputArchive.java index 648a08863a5..13d0b08664b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/InputArchive.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/InputArchive.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -92,12 +93,12 @@ public class InputArchive extends Pointer { * is not specified, the module is loaded to the original device. 
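[Editor's note, not part of the generated patch: in the InputArchive hunk here, only the native default-value string changes from c10::optional to std::optional; the Java-side load_from overloads keep their shape. A hedged sketch using the single-argument overload shown in this hunk, assuming the generated no-argument InputArchive constructor and a hypothetical file path:

    // Loading behaviour is unchanged by the c10::optional -> std::optional rename.
    InputArchive archive = new InputArchive();   // assumed default constructor from the presets
    archive.load_from("model.pt");               // device defaults to the original device
]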
*/ public native void load_from( @StdString BytePointer filename, - @ByVal(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @ByVal(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native void load_from( @StdString BytePointer filename); public native void load_from( @StdString String filename, - @ByVal(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @ByVal(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native void load_from( @StdString String filename); @@ -106,7 +107,7 @@ public native void load_from( * is not specified, the module is loaded to the original device. */ public native void load_from( @Cast("std::istream*") @ByRef Pointer stream, - @ByVal(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @ByVal(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native void load_from( @Cast("std::istream*") @ByRef Pointer stream); @@ -114,14 +115,14 @@ public native void load_from( public native void load_from( @Cast("const char*") BytePointer data, @Cast("size_t") long size, - @ByVal(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @ByVal(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native void load_from( @Cast("const char*") BytePointer data, @Cast("size_t") long size); public native void load_from( String data, @Cast("size_t") long size, - @ByVal(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @ByVal(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native void load_from( String data, @Cast("size_t") long size); @@ -130,7 +131,7 @@ public native void load_from( public native void load_from( @Const @ByRef Reader read_func, @Const @ByRef SizeTSupplier size_func, - @ByVal(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @ByVal(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native void load_from( @Const @ByRef Reader read_func, @Const @ByRef SizeTSupplier size_func); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm1dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm1dImpl.java index f7cb4fd68f7..b7a232594fc 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm1dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm1dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -22,7 +23,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies the InstanceNorm1d function. - * See https://pytorch.org/docs/master/nn.html#torch.nn.InstanceNorm1d to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.InstanceNorm1d to learn * about the exact behavior of this module. 
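[Editor's note, not part of the generated patch: as background for the InstanceNorm*d modules touched below, instance normalization normalizes each channel of each sample independently, y = (x - mean(x)) / sqrt(var(x) + eps), with optional affine scale and shift; only the documentation URL changes in these hunks.]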
* * See the documentation for {@code torch::nn::InstanceNorm1dOptions} class to learn diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm1dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm1dImplBase.java index 266988ad110..479cc7b2713 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm1dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm1dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm1dImplBaseBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm1dImplBaseBase.java index 52692223f6f..9b16a248ab5 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm1dImplBaseBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm1dImplBaseBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm1dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm1dImplCloneable.java index 43d71c7e334..a32a79ac70c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm1dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm1dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class InstanceNorm1dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
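[Editor's note, not part of the generated patch: across all of the *ImplCloneable hunks in this patch, clone() changes only in its native nullValue string (c10::optional -> std::optional), so Java callers are unaffected. A minimal sketch using the no-device clone() overload shown in these hunks, assuming a default-constructible module such as IdentityImpl:

    IdentityImpl identity = new IdentityImpl();  // assumed default constructor
    Module copy = identity.clone();              // deep-copies parameters and submodules
]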
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm2dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm2dImpl.java index 0503dd3ce99..4f773b86141 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm2dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm2dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -22,7 +23,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies the InstanceNorm2d function. - * See https://pytorch.org/docs/master/nn.html#torch.nn.InstanceNorm2d to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.InstanceNorm2d to learn * about the exact behavior of this module. * * See the documentation for {@code torch::nn::InstanceNorm2dOptions} class to learn diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm2dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm2dImplBase.java index 0799736eeed..f824e1a019e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm2dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm2dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm2dImplBaseBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm2dImplBaseBase.java index 48ec9f48536..9d71e769a69 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm2dImplBaseBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm2dImplBaseBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm2dImplCloneable.java 
b/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm2dImplCloneable.java index 0af04b948d5..6d1e5157004 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm2dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm2dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class InstanceNorm2dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm3dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm3dImpl.java index 439452c1063..843e7183260 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm3dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm3dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -22,7 +23,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies the InstanceNorm3d function. - * See https://pytorch.org/docs/master/nn.html#torch.nn.InstanceNorm3d to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.InstanceNorm3d to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::InstanceNorm3dOptions} class to learn diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm3dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm3dImplBase.java index eff6a757165..0f8de9a48d7 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm3dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm3dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm3dImplBaseBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm3dImplBaseBase.java index ae5f7ccd368..de33f12b0d4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm3dImplBaseBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm3dImplBaseBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm3dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm3dImplCloneable.java index 000c2598f24..c3ee4a07aed 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm3dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNorm3dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class InstanceNorm3dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNormFuncOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNormFuncOptions.java index 0a53176a984..58d51bc38cd 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNormFuncOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNormFuncOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNormOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNormOptions.java index 8f3b1263360..421e71beb59 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNormOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/InstanceNormOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Instruction.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Instruction.java index 8985c698da5..fae2c0e2202 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Instruction.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Instruction.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/IntArrayRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/IntArrayRef.java index 1dab2380e86..4b0f145851b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/IntArrayRef.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/IntArrayRef.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ 
import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/IntOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/IntOptional.java index a84b4fe3360..94fd87499db 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/IntOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/IntOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class IntOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/IntPair.java b/pytorch/src/gen/java/org/bytedeco/pytorch/IntPair.java new file mode 100644 index 00000000000..750425c20ac --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/IntPair.java @@ -0,0 +1,41 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + +@NoOffset @Name("std::pair") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class IntPair extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ + public IntPair(Pointer p) { super(p); } + public IntPair(int firstValue, int secondValue) { this(); put(firstValue, secondValue); } + public IntPair() { allocate(); } + private native void allocate(); + public native @Name("operator =") @ByRef IntPair put(@ByRef IntPair x); + + + @MemberGetter public native int first(); public native IntPair first(int first); + @MemberGetter public native int second(); public native IntPair second(int second); + + public IntPair put(int firstValue, int secondValue) { + first(firstValue); + second(secondValue); + return this; + } +} + diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/IntSizedSmallVectorBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/IntSizedSmallVectorBase.java index c648b4546de..bc98683252a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/IntSizedSmallVectorBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/IntSizedSmallVectorBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/IntType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/IntType.java index 5d62ce747ff..c0bfea46b94 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/IntType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/IntType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/IntTypePtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/IntTypePtr.java index 7cd65569478..c7d331da198 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/IntTypePtr.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/IntTypePtr.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/InterfaceType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/InterfaceType.java index 4f0b9e1739f..18cc5b5c3c9 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/InterfaceType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/InterfaceType.java @@ -4,7 +4,6 @@ import 
org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/InterpolateFuncOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/InterpolateFuncOptions.java index 4dcce2e7316..1a2da535c0f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/InterpolateFuncOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/InterpolateFuncOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/InterpolateMode.java b/pytorch/src/gen/java/org/bytedeco/pytorch/InterpolateMode.java index 8cc9b78b307..536ad113f34 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/InterpolateMode.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/InterpolateMode.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaBatchDataset.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaBatchDataset.java index 070b25c1bc2..8da6ce3e079 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaBatchDataset.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaBatchDataset.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDataset.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDataset.java index 2026396a316..dfb5a286e04 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDataset.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDataset.java @@ -4,7 +4,6 @@ 
import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDatasetBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDatasetBase.java index 41227baa469..d00cbfe16bd 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDatasetBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDatasetBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDistributedRandomDataLoader.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDistributedRandomDataLoader.java index 295fbc3ba70..90ea02e4230 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDistributedRandomDataLoader.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDistributedRandomDataLoader.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDistributedRandomDataLoaderBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDistributedRandomDataLoaderBase.java index 488b2ee57aa..8672f9d7a3d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDistributedRandomDataLoaderBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDistributedRandomDataLoaderBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDistributedRandomTensorDataLoader.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDistributedRandomTensorDataLoader.java index 
09da0d62c0a..2c769b70667 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDistributedRandomTensorDataLoader.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDistributedRandomTensorDataLoader.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDistributedRandomTensorDataLoaderBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDistributedRandomTensorDataLoaderBase.java index 10ca3e2a4e9..08edb2f9640 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDistributedRandomTensorDataLoaderBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDistributedRandomTensorDataLoaderBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDistributedSequentialDataLoader.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDistributedSequentialDataLoader.java index 1c2db328bac..5fe5ec2ac26 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDistributedSequentialDataLoader.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDistributedSequentialDataLoader.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDistributedSequentialDataLoaderBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDistributedSequentialDataLoaderBase.java index 8a044a5f179..4413ecabdf4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDistributedSequentialDataLoaderBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDistributedSequentialDataLoaderBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static 
org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDistributedSequentialTensorDataLoader.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDistributedSequentialTensorDataLoader.java index 73ecc0d78d8..c9607f844f9 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDistributedSequentialTensorDataLoader.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDistributedSequentialTensorDataLoader.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDistributedSequentialTensorDataLoaderBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDistributedSequentialTensorDataLoaderBase.java index bfa7dc3d5a0..ab7ac254cef 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDistributedSequentialTensorDataLoaderBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaDistributedSequentialTensorDataLoaderBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaRandomDataLoader.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaRandomDataLoader.java index a65d8c5402b..9b736dd1e89 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaRandomDataLoader.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaRandomDataLoader.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaRandomDataLoaderBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaRandomDataLoaderBase.java index dc69c0e0228..fd149a207d1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaRandomDataLoaderBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaRandomDataLoaderBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import 
org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaRandomTensorDataLoader.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaRandomTensorDataLoader.java index cc79ac4eeca..e32e2f411ed 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaRandomTensorDataLoader.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaRandomTensorDataLoader.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaRandomTensorDataLoaderBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaRandomTensorDataLoaderBase.java index c68992e046a..df1945c65ea 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaRandomTensorDataLoaderBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaRandomTensorDataLoaderBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaSequentialDataLoader.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaSequentialDataLoader.java index 059f1d82b21..2deb0218fbd 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaSequentialDataLoader.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaSequentialDataLoader.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaSequentialDataLoaderBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaSequentialDataLoaderBase.java index 1debcd6b30e..223a7598988 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaSequentialDataLoaderBase.java 
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaSequentialDataLoaderBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaSequentialTensorDataLoader.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaSequentialTensorDataLoader.java index a0b1d5666c6..32ecf7d9e1e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaSequentialTensorDataLoader.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaSequentialTensorDataLoader.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaSequentialTensorDataLoaderBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaSequentialTensorDataLoaderBase.java index ca4b35e907b..17022d71ddb 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaSequentialTensorDataLoaderBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaSequentialTensorDataLoaderBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulBatchDataset.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulBatchDataset.java index 245c8fc4a08..9b12839763f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulBatchDataset.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulBatchDataset.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@Name("torch::data::datasets::BatchDataset,c10::optional > >,size_t>") 
@Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@Name("torch::data::datasets::BatchDataset,std::optional > >,size_t>") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class JavaStatefulBatchDataset extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulDataLoader.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulDataLoader.java index 39859457979..b11a6cffb57 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulDataLoader.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulDataLoader.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulDataLoaderBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulDataLoaderBase.java index dcc9d7c8fcf..5f80b7bbddc 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulDataLoaderBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulDataLoaderBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulDataset.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulDataset.java index 261b70c0c41..625f16faaf9 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulDataset.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulDataset.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulDatasetBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulDatasetBase.java index d0855445e2f..cf91107a6a6 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulDatasetBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulDatasetBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import 
org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulTensorBatchDataset.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulTensorBatchDataset.java index 04b6e7ba23b..33b5d0556b5 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulTensorBatchDataset.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulTensorBatchDataset.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@Name("torch::data::datasets::BatchDataset,c10::optional > >,size_t>") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@Name("torch::data::datasets::BatchDataset,std::optional > >,size_t>") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class JavaStatefulTensorBatchDataset extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulTensorDataLoader.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulTensorDataLoader.java index 7c766f7e78e..e0b46946c10 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulTensorDataLoader.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulTensorDataLoader.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulTensorDataLoaderBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulTensorDataLoaderBase.java index fb7bd820ced..71731818162 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulTensorDataLoaderBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulTensorDataLoaderBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulTensorDataset.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulTensorDataset.java index 0ba6f97f161..c54901a66e2 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulTensorDataset.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulTensorDataset.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulTensorDatasetBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulTensorDatasetBase.java index e62d876c52d..a3914e76670 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulTensorDatasetBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStatefulTensorDatasetBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static 
org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStreamBatchDataset.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStreamBatchDataset.java index 5aa418c4099..851ed1e25c0 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStreamBatchDataset.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStreamBatchDataset.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStreamDataLoader.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStreamDataLoader.java index 8f200364f77..6ad894e0292 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStreamDataLoader.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStreamDataLoader.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStreamDataLoaderBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStreamDataLoaderBase.java index 96cc7a19451..c0fb6558aa1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStreamDataLoaderBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStreamDataLoaderBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStreamDataset.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStreamDataset.java index 0e0926f475d..947e2aad8a9 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStreamDataset.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStreamDataset.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ 
-14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStreamTensorBatchDataset.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStreamTensorBatchDataset.java index 6a945fbd217..a1f32e91bf0 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStreamTensorBatchDataset.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStreamTensorBatchDataset.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStreamTensorDataLoader.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStreamTensorDataLoader.java index 6207a3bccda..b7513e95473 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStreamTensorDataLoader.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStreamTensorDataLoader.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStreamTensorDataLoaderBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStreamTensorDataLoaderBase.java index d7909b69959..9632e8791c5 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStreamTensorDataLoaderBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStreamTensorDataLoaderBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStreamTensorDataset.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStreamTensorDataset.java index bb11245e6e3..49dda6a35ff 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStreamTensorDataset.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaStreamTensorDataset.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; 
import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaTensorBatchDataset.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaTensorBatchDataset.java index df70a3e93a9..a2dd76a09b5 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaTensorBatchDataset.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaTensorBatchDataset.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaTensorDataset.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaTensorDataset.java index 18748e3c185..9b62f5e4d7d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaTensorDataset.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaTensorDataset.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaTensorDatasetBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaTensorDatasetBase.java index 2555a2499ce..853fc4fb348 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JavaTensorDatasetBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JavaTensorDatasetBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JitModule.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JitModule.java index fe4519db301..d94b8110966 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JitModule.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JitModule.java @@ -4,7 +4,6 @@ import 
org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -35,8 +36,8 @@ public class JitModule extends JitObject { public JitModule(@ByVal QualifiedName class_name) { super((Pointer)null); allocate(class_name); } private native void allocate(@ByVal QualifiedName class_name); - public JitModule(@SharedPtr CompilationUnit cu, @Const @SharedPtr("c10::ClassType") @ByRef ClassType type) { super((Pointer)null); allocate(cu, type); } - private native void allocate(@SharedPtr CompilationUnit cu, @Const @SharedPtr("c10::ClassType") @ByRef ClassType type); + public JitModule(@SharedPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit cu, @Const @SharedPtr("c10::ClassType") @ByRef ClassType type) { super((Pointer)null); allocate(cu, type); } + private native void allocate(@SharedPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit cu, @Const @SharedPtr("c10::ClassType") @ByRef ClassType type); public JitModule() { super((Pointer)null); allocate(); } private native void allocate(); public JitModule(@Const @ByRef JitModule arg0) { super((Pointer)null); allocate(arg0); } @@ -44,20 +45,20 @@ public class JitModule extends JitObject { public native @ByRef @Name("operator =") JitModule put(@Const @ByRef JitModule arg0); public JitModule( @ByVal QualifiedName arg0, - @SharedPtr CompilationUnit cu, + @SharedPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit cu, @Cast("bool") boolean shouldMangle/*=false*/) { super((Pointer)null); allocate(arg0, cu, shouldMangle); } private native void allocate( @ByVal QualifiedName arg0, - @SharedPtr CompilationUnit cu, + @SharedPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit cu, @Cast("bool") boolean shouldMangle/*=false*/); public JitModule( @ByVal QualifiedName arg0, - @SharedPtr CompilationUnit cu) { super((Pointer)null); allocate(arg0, cu); } + @SharedPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit cu) { super((Pointer)null); allocate(arg0, cu); } private native void allocate( @ByVal QualifiedName arg0, - @SharedPtr CompilationUnit cu); - public JitModule(@ByVal @Cast("torch::jit::ModulePtr*") ObjPtr module_value) { super((Pointer)null); allocate(module_value); } - private native void allocate(@ByVal @Cast("torch::jit::ModulePtr*") ObjPtr module_value); + @SharedPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit cu); + public JitModule(@IntrusivePtr("c10::ivalue::Object") @Cast({"", "c10::intrusive_ptr&"}) Obj module_value) { super((Pointer)null); allocate(module_value); } + private native void allocate(@IntrusivePtr("c10::ivalue::Object") @Cast({"", "c10::intrusive_ptr&"}) Obj module_value); public native void set_optimized(@Cast("bool") boolean o); @@ -224,7 +225,7 @@ public native void _save_for_mobile( public native @ByVal JitModule copy(); - public native @ByVal JitModule deepcopy(@ByVal(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + public native @ByVal JitModule deepcopy(@ByVal(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @ByVal JitModule deepcopy(); // Clones both the underlying 
`ClassType` and the module instance(data), this diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JitNode.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JitNode.java index 06face20c73..ff7b438688a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JitNode.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JitNode.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -67,7 +68,7 @@ public class JitNode extends Pointer { // Copies the source range, scope and callstack from another node. public native JitNode copyMetadata(JitNode from); - public native @ByVal @Cast("c10::optional*") InlinedCallStackOptional callstack(); + public native @ByVal @Cast("std::optional*") InlinedCallStackOptional callstack(); public native void setCallStack(@ByVal @Cast("torch::jit::InlinedCallStackPtr*") Pointer cs); // NB: This returns an ArrayRef; that means that it will diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JitNodeVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JitNodeVector.java index 6c9098e1fa1..12212c0193a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JitNodeVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JitNodeVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JitNodeWrap.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JitNodeWrap.java index 63d14aa06cc..74446cd0482 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JitNodeWrap.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JitNodeWrap.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JitObject.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JitObject.java index 30f4cf43678..f7a80f27de5 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JitObject.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JitObject.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import 
org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -38,13 +39,13 @@ public class JitObject extends Pointer { public JitObject(@Const @ByRef JitObject arg0) { super((Pointer)null); allocate(arg0); } private native void allocate(@Const @ByRef JitObject arg0); public native @ByRef @Name("operator =") JitObject put(@Const @ByRef JitObject arg0); - public JitObject(@ByVal @Cast("torch::jit::ObjectPtr*") ObjPtr _ivalue) { super((Pointer)null); allocate(_ivalue); } - private native void allocate(@ByVal @Cast("torch::jit::ObjectPtr*") ObjPtr _ivalue); - public JitObject(@SharedPtr CompilationUnit cu, @Const @SharedPtr("c10::ClassType") @ByRef ClassType type) { super((Pointer)null); allocate(cu, type); } - private native void allocate(@SharedPtr CompilationUnit cu, @Const @SharedPtr("c10::ClassType") @ByRef ClassType type); + public JitObject(@IntrusivePtr("c10::ivalue::Object") @Cast({"", "c10::intrusive_ptr&"}) Obj _ivalue) { super((Pointer)null); allocate(_ivalue); } + private native void allocate(@IntrusivePtr("c10::ivalue::Object") @Cast({"", "c10::intrusive_ptr&"}) Obj _ivalue); + public JitObject(@SharedPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit cu, @Const @SharedPtr("c10::ClassType") @ByRef ClassType type) { super((Pointer)null); allocate(cu, type); } + private native void allocate(@SharedPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit cu, @Const @SharedPtr("c10::ClassType") @ByRef ClassType type); - public native @ByVal @Cast("torch::jit::ObjectPtr*") ObjPtr _ivalue(); + public native @IntrusivePtr("c10::ivalue::Object") @Cast({"", "c10::intrusive_ptr&"}) Obj _ivalue(); public native @SharedPtr("c10::ClassType") @ByVal ClassType type(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/JitString.java b/pytorch/src/gen/java/org/bytedeco/pytorch/JitString.java index 4b35fe0237b..7ec8c57bcbd 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/JitString.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/JitString.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/KLDivLossImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/KLDivLossImpl.java index ae9b837f78c..49bd28a5f02 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/KLDivLossImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/KLDivLossImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static 
org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -22,7 +23,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** The Kullback-Leibler divergence loss measure - * See https://pytorch.org/docs/master/nn.html#torch.nn.KLDivLoss to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.KLDivLoss to learn * about the exact behavior of this module. * * See the documentation for {@code torch::nn::KLDivLossOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/KLDivLossImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/KLDivLossImplCloneable.java index af8dbb9e3de..2b462bda9ff 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/KLDivLossImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/KLDivLossImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class KLDivLossImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/KLDivLossOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/KLDivLossOptions.java index f54e4656f36..b89ac896e08 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/KLDivLossOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/KLDivLossOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/KLDivLossReduction.java b/pytorch/src/gen/java/org/bytedeco/pytorch/KLDivLossReduction.java index 0abe9659440..9b6d2e6b617 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/KLDivLossReduction.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/KLDivLossReduction.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 
+13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/KernelFunction.java b/pytorch/src/gen/java/org/bytedeco/pytorch/KernelFunction.java index b7c584749f0..d6206d1af10 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/KernelFunction.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/KernelFunction.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/L1LossImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/L1LossImpl.java index cf520d1d944..374c1b0ff13 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/L1LossImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/L1LossImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -22,7 +23,7 @@ /** Creates a criterion that measures the mean absolute error (MAE) between each * element in the input : math :{@code x} and target : {@code y}. - * See https://pytorch.org/docs/master/nn.html#torch.nn.L1Loss to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.L1Loss to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::L1LossOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/L1LossImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/L1LossImplCloneable.java index cfe41ff633c..942f23243e0 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/L1LossImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/L1LossImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class L1LossImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/L1LossOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/L1LossOptions.java index 1ad151da764..0acd8a2138c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/L1LossOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/L1LossOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LBFGS.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LBFGS.java index 92b66abae10..2d5bf13d678 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LBFGS.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LBFGS.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LBFGSOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LBFGSOptions.java index 7c7222846a3..e512f910d34 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LBFGSOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LBFGSOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import 
org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LBFGSParamState.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LBFGSParamState.java index 05ac147c07e..59d95a980da 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LBFGSParamState.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LBFGSParamState.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool1dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool1dImpl.java index 1729542d918..c9e52ecb041 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool1dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool1dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ LPPool1d ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies the LPPool1d function element-wise. - * See https://pytorch.org/docs/master/nn.html#torch.nn.LPPool1d to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.LPPool1d to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::LPPool1dOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool1dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool1dImplBase.java index 776a2469992..424e25b7310 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool1dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool1dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool1dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool1dImplCloneable.java index dd514681b85..6f9c31f27bd 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool1dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool1dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class LPPool1dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool1dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool1dOptions.java index daf4df22add..320a27874d6 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool1dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool1dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool2dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool2dImpl.java index 28d8d92c014..41916686d00 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool2dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool2dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ LPPool2d ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies the LPPool2d function element-wise. - * See https://pytorch.org/docs/master/nn.html#torch.nn.LPPool2d to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.LPPool2d to learn * about the exact behavior of this module. 
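The clone(DeviceOptional) hunks that recur throughout this diff only swap c10::optional for std::optional in the generated nullValue annotation, reflecting PyTorch 2.4's move to std::optional; the same rename shows up further down in the @Name annotation of optional wrappers such as LayoutOptional. Java call sites are unchanged. A hedged sketch of both clone overloads follows; the DeviceOptional value constructor and the Device(String) constructor are assumptions based on how these presets usually wrap such types, not something this diff shows.

    import org.bytedeco.pytorch.*;
    import org.bytedeco.pytorch.Module;

    public class CloneToDeviceExample {
        public static void main(String[] args) {
            LinearImpl original = new LinearImpl(16, 8);
            Module copy = original.clone();   // deep copy of parameters, buffers and submodules
            // Assumption: DeviceOptional exposes a value constructor like the other generated
            // std::optional wrappers, and Device accepts a device string such as "cpu".
            Module onCpu = original.clone(new DeviceOptional(new Device("cpu")));
        }
    }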
* * See the documentation for {@code torch::nn::LPPool2dOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool2dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool2dImplBase.java index 0601d774cdd..244d83a543e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool2dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool2dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool2dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool2dImplCloneable.java index 33ce8a87e87..8a669e77598 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool2dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool2dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class LPPool2dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool2dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool2dOptions.java index b582a6585be..9894f4dde71 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool2dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool2dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool3dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool3dImpl.java index 461a6d50ebd..cb991aeb2a6 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool3dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool3dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ LPPool3d ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies the LPPool3d function element-wise. - * See https://pytorch.org/docs/master/nn.html#torch.nn.LPPool3d to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.LPPool3d to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::LPPool3dOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool3dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool3dImplBase.java index f96c6ef80ff..13974b2e7ad 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool3dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool3dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool3dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool3dImplCloneable.java index f312956c232..e9c68f80705 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool3dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool3dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class LPPool3dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool3dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool3dOptions.java index db613a833f9..b826c4a4831 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool3dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LPPool3dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LRScheduler.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LRScheduler.java index 7bdf0473b43..6d880cd73c5 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LRScheduler.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LRScheduler.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LSTMCellImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LSTMCellImpl.java index ef4b2bd1f80..7363b167bbd 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LSTMCellImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LSTMCellImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -22,7 +23,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** A long short-term memory (LSTM) cell. - * See https://pytorch.org/docs/master/nn.html#torch.nn.LSTMCell to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.LSTMCell to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::LSTMCellOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LSTMCellImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LSTMCellImplBase.java index 5001ee130b5..3ce4080c53e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LSTMCellImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LSTMCellImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LSTMCellImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LSTMCellImplCloneable.java index e73e97f049c..bd36fcba0d4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LSTMCellImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LSTMCellImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class LSTMCellImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LSTMCellOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LSTMCellOptions.java index 88e36a2c648..9e66070f542 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LSTMCellOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LSTMCellOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LSTMImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LSTMImpl.java index 6ae33ab51f3..4bfbc5f7241 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LSTMImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LSTMImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ LSTM ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** A multi-layer long-short-term-memory (LSTM) module. - * See https://pytorch.org/docs/master/generated/torch.nn.LSTM.html to learn + * See https://pytorch.org/docs/main/generated/torch.nn.LSTM.html to learn * about the exact behavior of this module. 
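Apart from the import changes, the recurrent-module hunks above only retarget documentation links from /docs/master/ to /docs/main/. For orientation, a minimal sketch of the LSTM binding they describe; the tuple wrapper name and its get0() accessor reflect how these presets usually map std::tuple returns and should be treated as assumptions rather than something this diff guarantees.

    import org.bytedeco.pytorch.*;
    import static org.bytedeco.pytorch.global.torch.*;

    public class LstmExample {
        public static void main(String[] args) {
            LSTMImpl lstm = new LSTMImpl(10, 20);   // input_size, hidden_size
            Tensor input = randn(5, 3, 10);         // (seq_len, batch, input_size)
            // Assumption: std::tuple<Tensor, std::tuple<Tensor, Tensor>> is mapped to
            // T_TensorT_TensorTensor_T_T with get0()/get1() accessors.
            T_TensorT_TensorTensor_T_T out = lstm.forward(input);
            Tensor output = out.get0();             // hidden states for every timestep
            System.out.println(output.size(0));     // seq_len
        }
    }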
* * See the documentation for {@code torch::nn::LSTMOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LSTMImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LSTMImplBase.java index 967a7e27eb3..7bb9a19e8ac 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LSTMImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LSTMImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LSTMImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LSTMImplCloneable.java index 4e991af3709..095b8b5a2a0 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LSTMImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LSTMImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class LSTMImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LSTMOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LSTMOptions.java index 2b8396eb447..856dc19e239 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LSTMOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LSTMOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LayerNormFuncOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LayerNormFuncOptions.java index 29ef7a627fb..cc19f8f76fd 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LayerNormFuncOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LayerNormFuncOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LayerNormImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LayerNormImpl.java index 63b866d074c..813fd8ac564 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LayerNormImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LayerNormImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -22,7 +23,7 @@ /** Applies Layer Normalization over a mini-batch of inputs as described in * the paper {@code Layer Normalization}_ . - * See https://pytorch.org/docs/master/nn.html#torch.nn.LayerNorm to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.LayerNorm to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::LayerNormOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LayerNormImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LayerNormImplCloneable.java index fc65da9c28b..957be713423 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LayerNormImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LayerNormImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class LayerNormImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LayerNormOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LayerNormOptions.java index b20a2c1c111..5a25b5a1878 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LayerNormOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LayerNormOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LayoutEnumerationType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LayoutEnumerationType.java index 202dbfc1a82..f803a9d7a00 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LayoutEnumerationType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LayoutEnumerationType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LayoutOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LayoutOptional.java index 4884bac4930..934e92212a2 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LayoutOptional.java +++ 
b/pytorch/src/gen/java/org/bytedeco/pytorch/LayoutOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class LayoutOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LayoutType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LayoutType.java index b98cbd35cae..8068e144676 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LayoutType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LayoutType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LayoutTypePtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LayoutTypePtr.java index 503172897cb..8095bbd42d5 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LayoutTypePtr.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LayoutTypePtr.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LeakyReLUImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LeakyReLUImpl.java index 0ef956903d6..b4f8752c6a0 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LeakyReLUImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LeakyReLUImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; 
import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ LeakyReLU ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies the LeakyReLU function element-wise. - * See https://pytorch.org/docs/master/nn.html#torch.nn.LeakyReLU to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.LeakyReLU to learn * about the exact behavior of this module. * * See the documentation for {@code torch::nn::LeakyReLUOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LeakyReLUImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LeakyReLUImplCloneable.java index c6994a89ce4..674c19f672f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LeakyReLUImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LeakyReLUImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class LeakyReLUImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LeakyReLUOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LeakyReLUOptions.java index 06a8b01e881..bdbdc1b6591 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LeakyReLUOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LeakyReLUOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LegacyTensorConstructor.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LegacyTensorConstructor.java index 62d9a701436..b6a21c427e4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LegacyTensorConstructor.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LegacyTensorConstructor.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import 
org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Library.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Library.java index bba6dc0b4e8..367d1d04bdc 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Library.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Library.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace detail @@ -141,13 +142,19 @@ private native void allocate( /** Declares that for all operators that are subsequently def'ed, their - * abstract impls may be found in the given Python module (pymodule). - * This registers some help text that is used if the abstract impl + * fake impls may be found in the given Python module (pymodule). + * This registers some help text that is used if the fake impl * cannot be found. * * Args: * - pymodule: the python module * - context: We may include this in the error message. */ + public native @ByRef Library set_python_module(@Cast("const char*") BytePointer pymodule, @Cast("const char*") BytePointer context/*=""*/); + public native @ByRef Library set_python_module(@Cast("const char*") BytePointer pymodule); + public native @ByRef Library set_python_module(String pymodule, String context/*=""*/); + public native @ByRef Library set_python_module(String pymodule); + + /** Deprecated; use set_python_module instead */ /// /// diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LinAlgError.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LinAlgError.java deleted file mode 100644 index bb4096bf787..00000000000 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LinAlgError.java +++ /dev/null @@ -1,29 +0,0 @@ -// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE - -package org.bytedeco.pytorch; - -import org.bytedeco.pytorch.Allocator; -import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; -import org.bytedeco.pytorch.Module; -import org.bytedeco.javacpp.annotation.Cast; -import java.nio.*; -import org.bytedeco.javacpp.*; -import org.bytedeco.javacpp.annotation.*; - -import static org.bytedeco.javacpp.presets.javacpp.*; -import static org.bytedeco.openblas.global.openblas_nolapack.*; -import static org.bytedeco.openblas.global.openblas.*; - -import static org.bytedeco.pytorch.global.torch.*; - - -// Used for numerical errors from the linalg module. These -// turn into LinAlgError when they cross into Python. -@Namespace("c10") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class LinAlgError extends Error { - static { Loader.load(); } - /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ - public LinAlgError(Pointer p) { super(p); } - -} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LinearImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LinearImpl.java index 662ec8c9c82..a453e233464 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LinearImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LinearImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Linear ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies a linear transformation with optional bias. - * See https://pytorch.org/docs/master/generated/torch.nn.Linear.html to learn + * See https://pytorch.org/docs/main/generated/torch.nn.Linear.html to learn * about the exact behavior of this module. * * See the documentation for {@code torch::nn::LinearOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LinearImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LinearImplCloneable.java index 37729690823..f8b44790cdf 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LinearImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LinearImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class LinearImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
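Beyond the import churn, the Library.java hunk above adds bindings for set_python_module, whose doc comment now speaks of "fake impls" rather than "abstract impls", and the generated LinAlgError.java is removed. A small compile-oriented sketch of the new overloads; the helper name, the module string, and the way the Library handle is obtained are hypothetical and not part of this diff.

    import org.bytedeco.pytorch.Library;

    public class FakeImplRegistration {
        // Hypothetical helper: given a Library handle from a native registration,
        // record which Python module provides fake impls for its operators.
        static void tagFakeImplModule(Library lib) {
            lib.set_python_module("my_ext.ops");                          // overload without context
            lib.set_python_module("my_ext.ops", "registered by my_ext");  // overload with help text
        }
    }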
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LinearOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LinearOptions.java index 02f619365c6..e4476751522 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LinearOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LinearOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ListComp.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ListComp.java index 3d9773dda0b..ac5df24f600 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ListComp.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ListComp.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -25,8 +26,8 @@ public class ListComp extends Expr { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public ListComp(Pointer p) { super(p); } - public ListComp(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public ListComp(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @ByVal Expr elt(); public native @ByVal Expr target(); public native @ByVal Expr iter(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ListLiteral.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ListLiteral.java index 0847c7a6ecf..f8cd5224896 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ListLiteral.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ListLiteral.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,8 +25,8 @@ public class ListLiteral extends Expr { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ public ListLiteral(Pointer p) { super(p); } - public ListLiteral(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public ListLiteral(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @ByVal ExprList inputs(); public static native @ByVal ListLiteral create( @Const @ByRef SourceRange range, diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ListSingleElementType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ListSingleElementType.java index 6a9863187d2..fe0199ad3df 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ListSingleElementType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ListSingleElementType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ListType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ListType.java index 07555921026..4fc00310d1a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ListType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ListType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; 
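The ListComp and ListLiteral hunks above change their constructors from taking a TreeRef by const reference to taking the Tree handle directly as a c10::intrusive_ptr. A minimal sketch of a call site after this change; obtaining a parsed Tree is outside the scope of these hunks, so it is simply passed in as a parameter.

    import org.bytedeco.pytorch.ListLiteral;
    import org.bytedeco.pytorch.Tree;

    public class TreeCtorExample {
        // Call sites now hand the Tree handle straight to the constructor;
        // no TreeRef wrapper is involved anymore.
        static ListLiteral wrap(Tree parsedTree) {
            return new ListLiteral(parsedTree);
        }
    }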
@@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LocalDispatchKeySet.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LocalDispatchKeySet.java index 62f2fea5ea9..cbfc08c0cc8 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LocalDispatchKeySet.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LocalDispatchKeySet.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LocalResponseNormImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LocalResponseNormImpl.java index 69250a439db..0952f9916ee 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LocalResponseNormImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LocalResponseNormImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,7 +25,7 @@ /** Applies local response normalization over an input signal composed * of several input planes, where channels occupy the second dimension. * Applies normalization across channels. - * See https://pytorch.org/docs/master/nn.html#torch.nn.LocalResponseNorm to + * See https://pytorch.org/docs/main/nn.html#torch.nn.LocalResponseNorm to * learn about the exact behavior of this module. 
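The activation and normalization hunks in this stretch likewise mostly move documentation links from /docs/master/ to /docs/main/. As a final orientation aid, a minimal usage sketch of one of these element-wise modules (LeakyReLU); the shape is illustrative and nothing here is prescribed by the diff.

    import org.bytedeco.pytorch.*;
    import static org.bytedeco.pytorch.global.torch.*;

    public class LeakyReluExample {
        public static void main(String[] args) {
            LeakyReLUImpl act = new LeakyReLUImpl();   // default negative_slope of 0.01
            Tensor y = act.forward(randn(2, 3));
            System.out.println(y.size(0) + "x" + y.size(1));
        }
    }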
* * See the documentation for {@code torch::nn::LocalResponseNormOptions} class to diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LocalResponseNormImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LocalResponseNormImplCloneable.java index 79d128d5ebc..9cce546e16c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LocalResponseNormImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LocalResponseNormImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class LocalResponseNormImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LocalResponseNormOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LocalResponseNormOptions.java index b241a366ba3..c0f98afde8c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LocalResponseNormOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LocalResponseNormOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LogSigmoidImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LogSigmoidImpl.java index 0669be66e06..1f20978e017 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LogSigmoidImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LogSigmoidImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ LogSigmoid ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies the LogSigmoid function element-wise. 
- * See https://pytorch.org/docs/master/nn.html#torch.nn.LogSigmoid to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.LogSigmoid to learn * about the exact behavior of this module. */ @Namespace("torch::nn") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class LogSigmoidImpl extends LogSigmoidImplCloneable { diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LogSigmoidImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LogSigmoidImplCloneable.java index 3191c9891e5..654cfcfa70f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LogSigmoidImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LogSigmoidImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class LogSigmoidImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LogSoftmaxFuncOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LogSoftmaxFuncOptions.java index cd5c07922fb..a85a1c0b4f1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LogSoftmaxFuncOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LogSoftmaxFuncOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LogSoftmaxImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LogSoftmaxImpl.java index 85f20ecb888..0095fa019af 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LogSoftmaxImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LogSoftmaxImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ LogSoftmax ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies the LogSoftmax function element-wise. - * See https://pytorch.org/docs/master/nn.html#torch.nn.LogSoftmax to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.LogSoftmax to learn * about the exact behavior of this module. * * See the documentation for {@code torch::nn::LogSoftmaxOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LogSoftmaxImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LogSoftmaxImplCloneable.java index 970ca32f670..f726fa98aad 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LogSoftmaxImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LogSoftmaxImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class LogSoftmaxImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LogSoftmaxOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LogSoftmaxOptions.java index 970ae9b2ff1..37bc2af5e66 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LogSoftmaxOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LogSoftmaxOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Logger.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Logger.java new file mode 100644 index 00000000000..d7054b50e36 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Logger.java @@ -0,0 +1,168 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import 
org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +@Namespace("c10d") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class Logger extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public Logger(Pointer p) { super(p); } + + public Logger(@SharedPtr Reducer reducer) { super((Pointer)null); allocate(reducer); } + @SharedPtr @Name("std::make_shared") private native void allocate(@SharedPtr Reducer reducer); + // Set logging data that can be got during DistributedDataParallel + // construction time. + public native void set_construction_data_and_log( + @StdString BytePointer module_name, + @StdVector IntPointer device_ids, + int output_device, + @Cast("bool") boolean broadcast_buffers, + @Cast("bool") boolean has_sync_bn, + @Cast("bool") boolean static_graph); + public native void set_construction_data_and_log( + @StdString String module_name, + @StdVector IntBuffer device_ids, + int output_device, + @Cast("bool") boolean broadcast_buffers, + @Cast("bool") boolean has_sync_bn, + @Cast("bool") boolean static_graph); + public native void set_construction_data_and_log( + @StdString BytePointer module_name, + @StdVector int[] device_ids, + int output_device, + @Cast("bool") boolean broadcast_buffers, + @Cast("bool") boolean has_sync_bn, + @Cast("bool") boolean static_graph); + public native void set_construction_data_and_log( + @StdString String module_name, + @StdVector IntPointer device_ids, + int output_device, + @Cast("bool") boolean broadcast_buffers, + @Cast("bool") boolean has_sync_bn, + @Cast("bool") boolean static_graph); + public native void set_construction_data_and_log( + @StdString BytePointer module_name, + @StdVector IntBuffer device_ids, + int output_device, + @Cast("bool") boolean broadcast_buffers, + @Cast("bool") boolean has_sync_bn, + @Cast("bool") boolean static_graph); + public native void set_construction_data_and_log( + @StdString String module_name, + @StdVector int[] device_ids, + int output_device, + @Cast("bool") boolean broadcast_buffers, + @Cast("bool") boolean has_sync_bn, + @Cast("bool") boolean static_graph); + + public native void set_static_graph(); + + // An interface for users to get DDPLoggingData and log them + // in the applications. Explanation of logging fields are in + // "struct DDPLoggingData" of "torch/c10/util/Logging.h". + public native @ByVal DDPLoggingData get_ddp_logging_data(); + + // Stream insertion operator for logging data to stream under + // TORCH_DISTRIBUTED_DEBUG. + + + // Set environment variables. + public native void set_env_variables(); + // Set parameters stats. + public native void set_parameter_stats(); + // Get size of each bucket (Bytes). + public native @ByVal @Cast("std::vector*") LongVector get_bucket_sizes(); + // Get variable indices for each bucket. + public native @ByVal SizeTVectorVector get_per_bucket_variable_indices(); + // Set comm. hook, if used + public native void set_comm_hook(@StdString BytePointer hook); + public native void set_comm_hook(@StdString String hook); + // Set running with uneven input detection (model.join() context manager) + public native void set_uneven_input_join(); + + // Reset performance stats at current iteration + public native void reset_performance_stats(); + + // Calculate avg stats using cpu timer and gpu timer + // that has been recorded in reducer. 
+ public native void calculate_avg_time( + @Cast("int64_t*") @ByRef LongPointer avg_time, + @Cast("int64_t*") @ByRef LongPointer time_duration, + @ByRef Timer timer, + Timer.Event start_event, + Timer.Event end_event); + public native void calculate_avg_time( + @Cast("int64_t*") @ByRef LongBuffer avg_time, + @Cast("int64_t*") @ByRef LongBuffer time_duration, + @ByRef Timer timer, + @Cast("c10d::Timer::Event") byte start_event, + @Cast("c10d::Timer::Event") byte end_event); + public native void calculate_avg_time( + @Cast("int64_t*") @ByRef long[] avg_time, + @Cast("int64_t*") @ByRef long[] time_duration, + @ByRef Timer timer, + Timer.Event start_event, + Timer.Event end_event); + public native void calculate_avg_time( + @Cast("int64_t*") @ByRef LongPointer avg_time, + @Cast("int64_t*") @ByRef LongPointer time_duration, + @ByRef Timer timer, + @Cast("c10d::Timer::Event") byte start_event, + @Cast("c10d::Timer::Event") byte end_event); + public native void calculate_avg_time( + @Cast("int64_t*") @ByRef LongBuffer avg_time, + @Cast("int64_t*") @ByRef LongBuffer time_duration, + @ByRef Timer timer, + Timer.Event start_event, + Timer.Event end_event); + public native void calculate_avg_time( + @Cast("int64_t*") @ByRef long[] avg_time, + @Cast("int64_t*") @ByRef long[] time_duration, + @ByRef Timer timer, + @Cast("c10d::Timer::Event") byte start_event, + @Cast("c10d::Timer::Event") byte end_event); + + // Set the absolute time of the event that has been recorded in reducer. + public native void set_event_time(@Cast("int64_t*") @ByRef LongPointer event_time, @ByRef Timer timer, Timer.Event event); + public native void set_event_time(@Cast("int64_t*") @ByRef LongBuffer event_time, @ByRef Timer timer, @Cast("c10d::Timer::Event") byte event); + public native void set_event_time(@Cast("int64_t*") @ByRef long[] event_time, @ByRef Timer timer, Timer.Event event); + public native void set_event_time(@Cast("int64_t*") @ByRef LongPointer event_time, @ByRef Timer timer, @Cast("c10d::Timer::Event") byte event); + public native void set_event_time(@Cast("int64_t*") @ByRef LongBuffer event_time, @ByRef Timer timer, Timer.Event event); + public native void set_event_time(@Cast("int64_t*") @ByRef long[] event_time, @ByRef Timer timer, @Cast("c10d::Timer::Event") byte event); + // Set stats that can be collected only during + // training loop. It is called at the beginning of forward call + // to record the run time stats of sampled iterations that previously ran. + // GPU performance stats are collected only for single process + // single device program and single device module right now. + // TODO to support single process multiple devices and multi device modules, + // events need to be created and recorded on multiple devices. + public native void set_runtime_stats_and_log(); + + // Called when DDP/reducer is failing with an error. The + // logging data structure will have two fields filled: "has_error" indicating + // that this iteration encountered an error and other fields are not valid, + // and "error", a string which contains the error message that DDP failed + // with. + + // When running without static graph, called when reducer is destroyed to log + // if graph was actually static and is a candidate for static graph + // optimization. 
+ public native void log_if_graph_static(@Cast("bool") boolean is_static); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LoggerOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LoggerOptional.java new file mode 100644 index 00000000000..906f169fba6 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LoggerOptional.java @@ -0,0 +1,36 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + +@NoOffset @Name("std::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class LoggerOptional extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public LoggerOptional(Pointer p) { super(p); } + public LoggerOptional(Logger value) { this(); put(value); } + public LoggerOptional() { allocate(); } + private native void allocate(); + public native @Name("operator =") @ByRef LoggerOptional put(@ByRef LoggerOptional x); + + public native boolean has_value(); + public native void reset(); + public native @Name("value") @WeakPtr("c10d::Logger") Logger get(); + @ValueSetter public native LoggerOptional put(@WeakPtr("c10d::Logger") Logger value); +} + diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LongArrayRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LongArrayRef.java index d4b950b9ae8..0f652b21044 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LongArrayRef.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LongArrayRef.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LongArrayRefOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LongArrayRefOptional.java index 2d59151673a..69f27dccca9 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LongArrayRefOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LongArrayRefOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import 
static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class LongArrayRefOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LongArrayRefVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LongArrayRefVector.java new file mode 100644 index 00000000000..9efa8754e55 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LongArrayRefVector.java @@ -0,0 +1,111 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + +@Name("std::vector >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class LongArrayRefVector extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public LongArrayRefVector(Pointer p) { super(p); } + public LongArrayRefVector(LongArrayRef value) { this(1); put(0, value); } + public LongArrayRefVector(LongArrayRef ... array) { this(array.length); put(array); } + public LongArrayRefVector(@Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... value) { this(1); put(0, value); } + public LongArrayRefVector(@Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] ... array) { this(array.length); put(array); } + public LongArrayRefVector() { allocate(); } + public LongArrayRefVector(long n) { allocate(n); } + private native void allocate(); + private native void allocate(@Cast("size_t") long n); + public native @Name("operator =") @ByRef LongArrayRefVector put(@ByRef LongArrayRefVector x); + + public boolean empty() { return size() == 0; } + public native long size(); + public void clear() { resize(0); } + public native void resize(@Cast("size_t") long n); + + public LongArrayRef front() { return get(0); } + public LongArrayRef back() { return get(size() - 1); } + @Index(function = "at") public native @ByRef LongArrayRef get(@Cast("size_t") long i); + public native LongArrayRefVector put(@Cast("size_t") long i, LongArrayRef value); + @ValueSetter @Index(function = "at") public native LongArrayRefVector put(@Cast("size_t") long i, @ByRef @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... 
value); + + public native @ByVal Iterator insert(@ByVal Iterator pos, @ByRef LongArrayRef value); + public native @ByVal Iterator erase(@ByVal Iterator pos); + public native @ByVal Iterator begin(); + public native @ByVal Iterator end(); + @NoOffset @Name("iterator") public static class Iterator extends Pointer { + public Iterator(Pointer p) { super(p); } + public Iterator() { } + + public native @Name("operator ++") @ByRef Iterator increment(); + public native @Name("operator ==") boolean equals(@ByRef Iterator it); + public native @Name("operator *") @ByRef @Const LongArrayRef get(); + } + + public LongArrayRef[] get() { + LongArrayRef[] array = new LongArrayRef[size() < Integer.MAX_VALUE ? (int)size() : Integer.MAX_VALUE]; + for (int i = 0; i < array.length; i++) { + array[i] = get(i); + } + return array; + } + @Override public String toString() { + return java.util.Arrays.toString(get()); + } + + public LongArrayRef pop_back() { + long size = size(); + LongArrayRef value = get(size - 1); + resize(size - 1); + return value; + } + public LongArrayRefVector push_back(LongArrayRef value) { + long size = size(); + resize(size + 1); + return put(size, value); + } + public LongArrayRefVector put(LongArrayRef value) { + if (size() != 1) { resize(1); } + return put(0, value); + } + public LongArrayRefVector put(LongArrayRef ... array) { + if (size() != array.length) { resize(array.length); } + for (int i = 0; i < array.length; i++) { + put(i, array[i]); + } + return this; + } + + public LongArrayRefVector push_back(long... value) { + long size = size(); + resize(size + 1); + return put(size, value); + } + public LongArrayRefVector put(long... value) { + if (size() != 1) { resize(1); } + return put(0, value); + } + public LongArrayRefVector put(long[] ... array) { + if (size() != array.length) { resize(array.length); } + for (int i = 0; i < array.length; i++) { + put(i, array[i]); + } + return this; + } +} + diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LongElementReference.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LongElementReference.java index fc7ca966b78..0bc56b6aac5 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LongElementReference.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LongElementReference.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,7 +25,7 @@ public class LongElementReference extends Pointer { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public LongElementReference(Pointer p) { super(p); } - public native @Name("operator std::conditional_t::type>::value,const int64_t&,int64_t>") long getLong(); + public native @Name("operator std::conditional_t::type>,const int64_t&,int64_t>") long getLong(); @@ -35,7 +36,7 @@ public class LongElementReference extends Pointer { public native @Const @ByRef IValue get(); - private static native @Namespace void swap(@ByRef(true) LongElementReference lhs, @ByRef(true) LongElementReference rhs); + private static native @Namespace @NoException(true) void swap(@ByRef(true) LongElementReference lhs, @ByRef(true) LongElementReference rhs); public void swap(LongElementReference rhs) { swap(this, rhs); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LongExpandingArrayOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LongExpandingArrayOptional.java index af466012305..a5480ab8427 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LongExpandingArrayOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LongExpandingArrayOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class LongExpandingArrayOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LongList.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LongList.java index 9fd2ff51e1a..64e4ea63767 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LongList.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LongList.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@Name("c10::List") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@Name("c10::List") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class LongList extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LongListIterator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LongListIterator.java index 33f3cb356d2..a1b830db48b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LongListIterator.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LongListIterator.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LongOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LongOptional.java index 6997e1a7028..194bc1981e9 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LongOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LongOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class LongOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LongOptionalArrayRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LongOptionalArrayRef.java index 67d61803f21..b19c2175486 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LongOptionalArrayRef.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LongOptionalArrayRef.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@Name("c10::ArrayRef >") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@Name("c10::ArrayRef >") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class LongOptionalArrayRef extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LongOptionalVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LongOptionalVector.java index 3c9e6f9e995..aa09fc6f9c7 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LongOptionalVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LongOptionalVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@Name("std::vector >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@Name("std::vector >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class LongOptionalVector extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LongSmallVectorBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LongSmallVectorBase.java index c7c35948854..b56e7dd8726 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LongSmallVectorBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LongSmallVectorBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -25,5 +26,7 @@ public class LongSmallVectorBase extends LongSmallVectorCommon { public native void push_back(@Cast("const int64_t") long Elt); + // NOLINTNEXTLINE(cppcoreguidelines-rvalue-reference-param-not-moved) + public native void pop_back(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LongSmallVectorCommon.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LongSmallVectorCommon.java index d14ae147f24..f7e77a8dead 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LongSmallVectorCommon.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LongSmallVectorCommon.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LongSmallVectorImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LongSmallVectorImpl.java index fe099d243b2..7ae375abc42 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LongSmallVectorImpl.java 
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LongSmallVectorImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -55,9 +56,9 @@ public class LongSmallVectorImpl extends LongSmallVectorBase { public native void assign(@Const @ByRef LongSmallVectorImpl RHS); - public native @ByVal @Cast("c10::SmallVectorImpl::iterator*") LongPointer erase(@ByVal @Cast("c10::SmallVectorImpl::const_iterator*") LongPointer CI); + public native @ByVal @Cast("c10::SmallVectorImpl::iterator*") LongPointer erase(@ByVal @Cast("c10::SmallVectorImpl::iterator*") LongPointer I); - public native @ByVal @Cast("c10::SmallVectorImpl::iterator*") LongPointer erase(@ByVal @Cast("c10::SmallVectorImpl::const_iterator*") LongPointer CS, @ByVal @Cast("c10::SmallVectorImpl::const_iterator*") LongPointer CE); + public native @ByVal @Cast("c10::SmallVectorImpl::iterator*") LongPointer erase(@ByVal @Cast("c10::SmallVectorImpl::iterator*") LongPointer S, @ByVal @Cast("c10::SmallVectorImpl::iterator*") LongPointer E); public native @ByVal @Cast("c10::SmallVectorImpl::iterator*") LongPointer insert(@ByVal @Cast("c10::SmallVectorImpl::iterator*") LongPointer I, @Cast("int64_t&&") long Elt); public native @ByVal @Cast("c10::SmallVectorImpl::iterator*") LongPointer insert(@ByVal @Cast("c10::SmallVectorImpl::iterator*") LongPointer I, long NumToInsert, long Elt); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LongVaryingShape.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LongVaryingShape.java index e8f0e3a3d07..06d4cdb1fb3 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LongVaryingShape.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LongVaryingShape.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,8 +33,8 @@ public class LongVaryingShape extends Pointer { public LongVaryingShape(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... vec) { super((Pointer)null); allocate(vec); } private native void allocate(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... 
vec); - public LongVaryingShape(@ByVal(nullValue = "c10::optional(c10::nullopt)") SizeTOptional size) { super((Pointer)null); allocate(size); } - private native void allocate(@ByVal(nullValue = "c10::optional(c10::nullopt)") SizeTOptional size); + public LongVaryingShape(@ByVal(nullValue = "std::optional(c10::nullopt)") SizeTOptional size) { super((Pointer)null); allocate(size); } + private native void allocate(@ByVal(nullValue = "std::optional(c10::nullopt)") SizeTOptional size); public LongVaryingShape() { super((Pointer)null); allocate(); } private native void allocate(); @@ -49,7 +50,7 @@ public class LongVaryingShape extends Pointer { public native @ByVal SizeTOptional size(); - public native @Cast("const c10::optional::ListOfOptionalElements>*") @ByRef Pointer sizes(); + public native @Cast("const std::optional::ListOfOptionalElements>*") @ByRef Pointer sizes(); public native @ByVal LongVaryingShape merge(@Const @ByRef LongVaryingShape other); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LongVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LongVector.java index 91bd426e8e9..87af0c7aa7c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LongVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LongVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LongVectorOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LongVectorOptional.java index a16d2585006..c01ab91dd6e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LongVectorOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LongVectorOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class LongVectorOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/LossReduction.java b/pytorch/src/gen/java/org/bytedeco/pytorch/LossReduction.java index 7cd8452bd09..6c0dcc935a0 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/LossReduction.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/LossReduction.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ORTHooksArgs.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MAIAHooksArgs.java similarity index 79% rename from pytorch/src/gen/java/org/bytedeco/pytorch/ORTHooksArgs.java rename to pytorch/src/gen/java/org/bytedeco/pytorch/MAIAHooksArgs.java index f8ec14fbebb..7c185241eca 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ORTHooksArgs.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MAIAHooksArgs.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,9 +22,9 @@ // NB: dummy argument to suppress "ISO C++11 requires at least one argument // for the "..." in a variadic macro" @Namespace("at") @Opaque @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class ORTHooksArgs extends Pointer { +public class MAIAHooksArgs extends Pointer { /** Empty constructor. Calls {@code super((Pointer)null)}. */ - public ORTHooksArgs() { super((Pointer)null); } + public MAIAHooksArgs() { super((Pointer)null); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ - public ORTHooksArgs(Pointer p) { super(p); } + public MAIAHooksArgs(Pointer p) { super(p); } } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ORTHooksInterface.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MAIAHooksInterface.java similarity index 66% rename from pytorch/src/gen/java/org/bytedeco/pytorch/ORTHooksInterface.java rename to pytorch/src/gen/java/org/bytedeco/pytorch/MAIAHooksInterface.java index 77ee5d7ae7b..c44b04bacd9 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ORTHooksInterface.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MAIAHooksInterface.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,21 +22,21 @@ // NB: Class must live in `at` due to limitations of Registry.h. @Namespace("at") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class ORTHooksInterface extends Pointer { +public class MAIAHooksInterface extends Pointer { static { Loader.load(); } /** Default native constructor. */ - public ORTHooksInterface() { super((Pointer)null); allocate(); } + public MAIAHooksInterface() { super((Pointer)null); allocate(); } /** Native array allocator. Access with {@link Pointer#position(long)}. */ - public ORTHooksInterface(long size) { super((Pointer)null); allocateArray(size); } + public MAIAHooksInterface(long size) { super((Pointer)null); allocateArray(size); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ - public ORTHooksInterface(Pointer p) { super(p); } + public MAIAHooksInterface(Pointer p) { super(p); } private native void allocate(); private native void allocateArray(long size); - @Override public ORTHooksInterface position(long position) { - return (ORTHooksInterface)super.position(position); + @Override public MAIAHooksInterface position(long position) { + return (MAIAHooksInterface)super.position(position); } - @Override public ORTHooksInterface getPointer(long i) { - return new ORTHooksInterface((Pointer)this).offsetAddress(i); + @Override public MAIAHooksInterface getPointer(long i) { + return new MAIAHooksInterface((Pointer)this).offsetAddress(i); } // This should never actually be implemented, but it is used to diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MNIST.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MNIST.java index af750d8ff58..00f66ae6490 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MNIST.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MNIST.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MNISTBatchDataset.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MNISTBatchDataset.java index eca20ed0825..8b2d69f03c4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MNISTBatchDataset.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MNISTBatchDataset.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace detail diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MNISTDataset.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MNISTDataset.java index 21f066ab3df..9792d6bdd02 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MNISTDataset.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MNISTDataset.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MNISTMapBatchDataset.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MNISTMapBatchDataset.java index fa64a1a9781..e74b0c5bf73 
100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MNISTMapBatchDataset.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MNISTMapBatchDataset.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MNISTMapDataset.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MNISTMapDataset.java index 141e67dc680..aa70bcefc22 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MNISTMapDataset.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MNISTMapDataset.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace detail diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MNISTRandomDataLoader.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MNISTRandomDataLoader.java index 2ef299f71b6..c08f37a3169 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MNISTRandomDataLoader.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MNISTRandomDataLoader.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MNISTRandomDataLoaderBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MNISTRandomDataLoaderBase.java index dc9d31b0d46..430b440123c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MNISTRandomDataLoaderBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MNISTRandomDataLoaderBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MPSHooksArgs.java 
b/pytorch/src/gen/java/org/bytedeco/pytorch/MPSHooksArgs.java index 51b6a473d88..9653a3c32dc 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MPSHooksArgs.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MPSHooksArgs.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MPSHooksInterface.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MPSHooksInterface.java index f53deee0912..70ef363762f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MPSHooksInterface.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MPSHooksInterface.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MSELossImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MSELossImpl.java index 4b4f624553e..2456ef6ce14 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MSELossImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MSELossImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -22,7 +23,7 @@ /** Creates a criterion that measures the mean squared error (squared L2 norm) * between each element in the input :math:{@code x} and target :math:{@code y}. - * See https://pytorch.org/docs/master/nn.html#torch.nn.MSELoss to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.MSELoss to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::MSELossOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MSELossImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MSELossImplCloneable.java index 92f02ef33f7..c6ab7a9b589 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MSELossImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MSELossImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class MSELossImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MSELossOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MSELossOptions.java index 3ed9281e725..3fd02d19b52 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MSELossOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MSELossOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MTIAHooksInterface.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MTIAHooksInterface.java index 6c314249e46..b662bc88a4b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MTIAHooksInterface.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MTIAHooksInterface.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -36,13 +37,34 @@ public class MTIAHooksInterface extends AcceleratorHooksInterface { return new MTIAHooksInterface((Pointer)this).offsetAddress(i); } +// this fails the implementation if MTIAHooks functions are called, but +// MTIA backend is not present. 
+// #define FAIL_MTIAHOOKS_FUNC(func) +// TORCH_CHECK(false, "Cannot execute ", func, "() without MTIA backend."); public native void initMTIA(); public native @Cast("bool") boolean hasMTIA(); + public native @Cast("c10::DeviceIndex") byte deviceCount(); + + public native void deviceSynchronize(@Cast("c10::DeviceIndex") byte device_index); + public native @StdString BytePointer showConfig(); public native @Cast("bool") boolean hasPrimaryContext(@Cast("c10::DeviceIndex") byte device_index); + public native void setCurrentDevice(@Cast("c10::DeviceIndex") byte device); + + public native @Cast("c10::DeviceIndex") byte getCurrentDevice(); + + public native @Cast("c10::DeviceIndex") byte exchangeDevice(@Cast("c10::DeviceIndex") byte device); + + public native @Cast("c10::DeviceIndex") byte maybeExchangeDevice(@Cast("c10::DeviceIndex") byte device); + + public native @ByVal Stream getCurrentStream(@Cast("c10::DeviceIndex") byte device); + + public native @ByVal Stream getDefaultStream(@Cast("c10::DeviceIndex") byte device); + + public native void setCurrentStream(@Const @ByRef Stream stream); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MagicMethod.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MagicMethod.java index fa2c94a6d65..fb2d104e568 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MagicMethod.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MagicMethod.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MarginRankingLossImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MarginRankingLossImpl.java index d3b8366f568..cb6efc1900f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MarginRankingLossImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MarginRankingLossImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,7 +25,7 @@ /** Creates a criterion that measures the loss given * inputs :math:{@code x1}, :math:{@code x2}, two 1D mini-batch {@code Tensors}, * and a label 1D mini-batch tensor :math:{@code y} (containing 1 or -1). - * See https://pytorch.org/docs/master/nn.html#torch.nn.MarginRankingLoss to + * See https://pytorch.org/docs/main/nn.html#torch.nn.MarginRankingLoss to * learn about the exact behavior of this module. 
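The MTIAHooksInterface hunk above is one of the few functional additions in this batch of generated files: PyTorch 2.4 widens the accelerator-hooks surface, so the bindings gain deviceCount(), deviceSynchronize(), the current/exchange-device helpers, and the stream accessors, all taking c10::DeviceIndex as a Java byte. A minimal hedged sketch of how the new queries read from Java; obtaining the hooks instance is assumed to go through the usual accelerator-hooks lookup and is out of scope here:

import org.bytedeco.pytorch.MTIAHooksInterface;

public class MtiaHooksSketch {
    // On builds without an MTIA backend, these calls trip the FAIL_MTIAHOOKS_FUNC check above.
    static void describe(MTIAHooksInterface hooks) {
        if (hooks.hasMTIA()) {                        // existing query, unchanged
            byte count = hooks.deviceCount();         // new in these bindings
            byte current = hooks.getCurrentDevice();  // new in these bindings
            System.out.println(count + " MTIA device(s), current index " + current);
        }
    }
}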
* * See the documentation for {@code torch::nn::MarginRankingLossOptions} class to diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MarginRankingLossImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MarginRankingLossImplCloneable.java index 5210bd84cbb..b26c597d5d6 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MarginRankingLossImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MarginRankingLossImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class MarginRankingLossImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MarginRankingLossOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MarginRankingLossOptions.java index 5d5652fe12c..654be08c8c4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MarginRankingLossOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MarginRankingLossOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MatchTypeReturn.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MatchTypeReturn.java index f7d25ee4b62..c15dd4b78a7 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MatchTypeReturn.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MatchTypeReturn.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MatchedSchema.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MatchedSchema.java index 90a22bf0ee7..8709d2253c2 100644 --- 
a/pytorch/src/gen/java/org/bytedeco/pytorch/MatchedSchema.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MatchedSchema.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool1dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool1dImpl.java index 7bf93d33438..2c52eae37b1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool1dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool1dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ MaxPool1d ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies maxpool over a 1-D input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.MaxPool1d to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.MaxPool1d to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::MaxPool1dOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool1dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool1dImplBase.java index e4afe51fb26..bb8d29328b4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool1dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool1dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool1dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool1dImplCloneable.java index e17b2b98218..d59f858dacf 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool1dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool1dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class MaxPool1dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool1dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool1dOptions.java index e493d559404..9fb7bf1347d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool1dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool1dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool2dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool2dImpl.java index 5be87cfbcde..adee2b538c1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool2dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool2dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ MaxPool2d ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies maxpool over a 2-D input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.MaxPool2d to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.MaxPool2d to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::MaxPool2dOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool2dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool2dImplBase.java index 1d4de48238c..dca667faa53 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool2dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool2dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool2dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool2dImplCloneable.java index ccb503c260b..42bc8609a1f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool2dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool2dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class MaxPool2dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool2dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool2dOptions.java index 2bf9af0cfa1..ee52d25a749 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool2dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool2dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool3dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool3dImpl.java index eb14f2d9965..0638c6e03ea 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool3dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool3dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ MaxPool3d ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies maxpool over a 3-D input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.MaxPool3d to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.MaxPool3d to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::MaxPool3dOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool3dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool3dImplBase.java index 6f188ab9d93..a1e87cb42cd 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool3dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool3dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool3dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool3dImplCloneable.java index bf4f52396fe..67133dbf027 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool3dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool3dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class MaxPool3dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool3dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool3dOptions.java index 9bdefbbf38f..f45583f9686 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool3dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxPool3dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool1dFuncOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool1dFuncOptions.java index 6f684d8e905..daa8957ea20 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool1dFuncOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool1dFuncOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool1dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool1dImpl.java index 44b27cb1227..625105e6071 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool1dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool1dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ MaxUnpool1d ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies maxunpool over a 1-D input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.MaxUnpool1d to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.MaxUnpool1d to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::MaxUnpool1dOptions} class to learn @@ -46,7 +47,7 @@ public class MaxUnpool1dImpl extends MaxUnpool1dImplBase { public native @ByVal Tensor forward( @Const @ByRef Tensor input, @Const @ByRef Tensor indices, - @Const @ByRef(nullValue = "c10::optional >(c10::nullopt)") LongVectorOptional output_size); + @Const @ByRef(nullValue = "std::optional >(c10::nullopt)") LongVectorOptional output_size); public native @ByVal Tensor forward( @Const @ByRef Tensor input, @Const @ByRef Tensor indices); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool1dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool1dImplBase.java index 6f1072bc6ee..3ca94597664 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool1dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool1dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool1dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool1dImplCloneable.java index 63128464f52..d6bb267cf64 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool1dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool1dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class MaxUnpool1dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
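In the MaxUnpool1dImpl forward() hunk above (and its 2-D/3-D counterparts below), only the spelling of the C++ default for output_size changes from c10::optional to std::optional; the Java overloads are untouched. A hedged sketch of the unchanged call pattern, where input and indices are assumed to come from a matching MaxPool1d configured to return indices:

import org.bytedeco.pytorch.MaxUnpool1dImpl;
import org.bytedeco.pytorch.Tensor;

public class MaxUnpoolSketch {
    static Tensor unpool(MaxUnpool1dImpl unpool, Tensor input, Tensor indices) {
        // The optional output_size argument keeps its Java type (LongVectorOptional);
        // omitting it uses the default, now spelled std::optional on the C++ side.
        return unpool.forward(input, indices);
    }
}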
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool1dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool1dOptions.java index a0bfd805d5f..2f592034c60 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool1dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool1dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool2dFuncOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool2dFuncOptions.java index e1333038a13..edd3448204f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool2dFuncOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool2dFuncOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool2dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool2dImpl.java index d3c968e285f..6e7bc8c6d60 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool2dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool2dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ MaxUnpool2d ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies maxunpool over a 2-D input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.MaxUnpool2d to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.MaxUnpool2d to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::MaxUnpool2dOptions} class to learn @@ -46,7 +47,7 @@ public class MaxUnpool2dImpl extends MaxUnpool2dImplBase { public native @ByVal Tensor forward( @Const @ByRef Tensor input, @Const @ByRef Tensor indices, - @Const @ByRef(nullValue = "c10::optional >(c10::nullopt)") LongVectorOptional output_size); + @Const @ByRef(nullValue = "std::optional >(c10::nullopt)") LongVectorOptional output_size); public native @ByVal Tensor forward( @Const @ByRef Tensor input, @Const @ByRef Tensor indices); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool2dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool2dImplBase.java index f4250e755ec..a87fcef2469 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool2dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool2dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool2dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool2dImplCloneable.java index 9c97ee57290..8d334a0831f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool2dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool2dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class MaxUnpool2dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool2dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool2dOptions.java index 9d1c3a449dc..559ed95f6ea 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool2dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool2dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool3dFuncOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool3dFuncOptions.java index 283d9b7c1b3..6b3b2bf30b5 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool3dFuncOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool3dFuncOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool3dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool3dImpl.java index 7ee872cbbeb..4aff59aa3d0 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool3dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool3dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ MaxUnpool3d ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies maxunpool over a 3-D input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.MaxUnpool3d to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.MaxUnpool3d to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::MaxUnpool3dOptions} class to learn @@ -46,7 +47,7 @@ public class MaxUnpool3dImpl extends MaxUnpool3dImplBase { public native @ByVal Tensor forward( @Const @ByRef Tensor input, @Const @ByRef Tensor indices, - @Const @ByRef(nullValue = "c10::optional >(c10::nullopt)") LongVectorOptional output_size); + @Const @ByRef(nullValue = "std::optional >(c10::nullopt)") LongVectorOptional output_size); public native @ByVal Tensor forward( @Const @ByRef Tensor input, @Const @ByRef Tensor indices); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool3dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool3dImplBase.java index b0e6926b8ba..ef7e27893fa 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool3dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool3dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool3dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool3dImplCloneable.java index 7fc5ee12c9c..e78566b64f6 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool3dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool3dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class MaxUnpool3dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool3dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool3dOptions.java index f89ba13da46..f4e819d467b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool3dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MaxUnpool3dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MemoryFormatOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MemoryFormatOptional.java index 43da2339a42..bd90602d4bf 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MemoryFormatOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MemoryFormatOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class MemoryFormatOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
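The MemoryFormatOptional hunk below shows the pattern repeated for every optional wrapper in this update: the @Name annotation now maps std::optional instead of c10::optional, following PyTorch 2.4's migration, while the Java class names and behavior stay the same. A minimal sketch, assuming the usual JavaCPP optional surface (no-argument constructor for an empty optional and has_value()), which this change does not alter:

import org.bytedeco.pytorch.MemoryFormatOptional;

public class OptionalMappingSketch {
    public static void main(String[] args) {
        MemoryFormatOptional empty = new MemoryFormatOptional(); // assumed empty-optional constructor
        System.out.println(empty.has_value());                   // expected: false
    }
}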
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MemoryFormatType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MemoryFormatType.java index a59c412cd51..a1aa2ee99fb 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MemoryFormatType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MemoryFormatType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MemoryFormattEnumerationType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MemoryFormattEnumerationType.java index 37e28e4c732..4a84c19a5e5 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MemoryFormattEnumerationType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MemoryFormattEnumerationType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MemoryReportingInfoBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MemoryReportingInfoBase.java index 4a93add074d..7c03eddad2d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MemoryReportingInfoBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MemoryReportingInfoBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MetaBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MetaBase.java index 1acc4fc3b71..95c61374702 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MetaBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MetaBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static 
org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Method.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Method.java index c2d5a49764b..e4576254fe5 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Method.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Method.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -37,7 +38,7 @@ public class Method extends IMethod { public native @ByVal JitModule owner(); // the raw objectptr that owns this method, for when the method is owned by a // torchbind object. - public native @ByVal @Cast("torch::jit::ObjectPtr*") ObjPtr raw_owner(); + public native @IntrusivePtr("c10::ivalue::Object") @Cast({"", "c10::intrusive_ptr&"}) Obj raw_owner(); public native void run(@ByRef IValueVector stack); public native @ByVal @Name("operator ()") IValue apply( @@ -50,11 +51,11 @@ public class Method extends IMethod { // interpreter that executes ops inline, one by one, on caller's thread. A // model can utilize async op, i.e. `fork`, to launch an asynchronous task // which will be launched on provided `taskLauncher`. - public native @ByVal FuturePtr run_async( + public native @IntrusivePtr("c10::ivalue::Future") @Cast({"", "c10::intrusive_ptr&"}) Future run_async( @ByVal IValueVector stack, @Cast("const torch::jit::Kwargs*") @ByRef(nullValue = "torch::jit::Kwargs()") StringIValueMap kwargs, @ByVal(nullValue = "torch::jit::TaskLauncher(at::launch)") @Cast("torch::jit::TaskLauncher*") Pointer taskLauncher); - public native @ByVal FuturePtr run_async( + public native @IntrusivePtr("c10::ivalue::Future") @Cast({"", "c10::intrusive_ptr&"}) Future run_async( @ByVal IValueVector stack); public native @SharedPtr("torch::jit::Graph") @ByVal Graph graph(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MethodOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MethodOptional.java index 64196796a93..b9754cf0680 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MethodOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MethodOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class MethodOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
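The Method hunk above is caller-visible: run_async() now returns the intrusive-pointer-mapped Future class instead of the former FuturePtr holder, and raw_owner() likewise returns Obj. On the Java side the only source change is the declared type, as in this hedged sketch; obtaining the Method (for example via JitModule.get_method()) is assumed and out of scope:

import org.bytedeco.pytorch.Future;
import org.bytedeco.pytorch.IValueVector;
import org.bytedeco.pytorch.Method;

public class RunAsyncSketch {
    static Future launch(Method method, IValueVector stack) {
        // 1.5.10 / PyTorch 2.3.x bindings:  FuturePtr fut = method.run_async(stack);
        Future fut = method.run_async(stack); // 2.4.x: c10::ivalue::Future mapped via @IntrusivePtr
        return fut;                           // await or read the result through Future as before
    }
}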
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MethodValue.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MethodValue.java index fc464e4dd3f..683b189e3f3 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MethodValue.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MethodValue.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MishImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MishImpl.java index b05a929a1df..3b548ff6622 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MishImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MishImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Mish ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies mish over a given input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.Mish to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.Mish to learn * about the exact behavior of this module. */ @Namespace("torch::nn") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class MishImpl extends MishImplCloneable { diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MishImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MishImplCloneable.java index 313d70c96a5..6f1070846b0 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MishImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MishImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class MishImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MobileCode.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MobileCode.java index 3702b1f214d..532d5733dd7 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MobileCode.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MobileCode.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Module.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Module.java index 57b4f4903ca..f9d3898c5e2 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Module.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Module.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -115,7 +116,7 @@ public class Module extends Pointer { /// public native @SharedPtr("torch::nn::Module") @ByVal @Virtual(subclasses=false, method="clone") @Cast({"", "std::shared_ptr"}) @Const({false, false, true}) Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); /** Applies the {@code function} to the {@code Module} and recursively to every submodule. * The function must accept a {@code Module&}. 
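The base Module.clone() hunk above changes only the C++ spelling of the defaulted device argument; both Java overloads survive, so downstream code recompiles without edits. A minimal sketch, assuming the usual LinearImpl(long, long) constructor from these same bindings; clone() comes from the Cloneable pattern shown throughout this diff:

import org.bytedeco.pytorch.LinearImpl;
import org.bytedeco.pytorch.Module;

public class CloneDefaultSketch {
    public static void main(String[] args) {
        LinearImpl linear = new LinearImpl(4, 2); // assumed in_features/out_features constructor
        Module copy = linear.clone();             // no-device overload; C++ default is now std::optional
        System.out.println(copy);                 // an independent deep copy of the module
    }
}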
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ModuleDictImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ModuleDictImpl.java index 79bd8186836..3754010b657 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ModuleDictImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ModuleDictImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -131,7 +132,7 @@ public ModuleDictImpl( /** Special cloning function for {@code ModuleDict} because it does not use * {@code reset()}. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); /** {@code reset()} is empty for {@code ModuleDict}, since it does not have parameters of diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ModuleDictImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ModuleDictImplCloneable.java index 06c3324d321..b88e191a895 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ModuleDictImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ModuleDictImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -39,6 +40,6 @@ public class ModuleDictImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ModuleInstanceInfo.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ModuleInstanceInfo.java index cf7908b5b6a..83cca4df932 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ModuleInstanceInfo.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ModuleInstanceInfo.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ModuleInstanceInfoOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ModuleInstanceInfoOptional.java index 50d24ddc010..a38f82e9368 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ModuleInstanceInfoOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ModuleInstanceInfoOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class ModuleInstanceInfoOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ModuleListImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ModuleListImpl.java index c3371957dc4..d954a5fdc89 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ModuleListImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ModuleListImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -77,7 +78,7 @@ public class ModuleListImpl extends ModuleListImplCloneable { /** Special cloning function for {@code ModuleList} because it does not use * {@code reset()}. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); /** {@code reset()} is empty for {@code ModuleList}, since it does not have parameters of diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ModuleListImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ModuleListImplCloneable.java index c97573f06eb..ea458ccd03d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ModuleListImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ModuleListImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class ModuleListImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ModulePolicy.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ModulePolicy.java index bba22126f0d..532c7abbc06 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ModulePolicy.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ModulePolicy.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MultiLabelMarginLossImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MultiLabelMarginLossImpl.java index eac7e76e484..2aba992d0dd 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MultiLabelMarginLossImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MultiLabelMarginLossImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static 
org.bytedeco.pytorch.global.torch.*; @@ -25,7 +26,7 @@ * hinge loss (margin-based loss) between input :math:{@code x} (a 2D mini-batch * {@code Tensor}) and output :math:{@code y} (which is a 2D {@code Tensor} of target class * indices). See - * https://pytorch.org/docs/master/nn.html#torch.nn.MultiLabelMarginLoss to + * https://pytorch.org/docs/main/nn.html#torch.nn.MultiLabelMarginLoss to * learn about the exact behavior of this module. * * See the documentation for {@code torch::nn::MultiLabelMarginLossOptions} class to diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MultiLabelMarginLossImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MultiLabelMarginLossImplCloneable.java index 322b864e5b3..a7d869b062b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MultiLabelMarginLossImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MultiLabelMarginLossImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class MultiLabelMarginLossImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MultiLabelMarginLossOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MultiLabelMarginLossOptions.java index 3dedd8a1525..4ea278681d2 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MultiLabelMarginLossOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MultiLabelMarginLossOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MultiLabelSoftMarginLossImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MultiLabelSoftMarginLossImpl.java index 5dc30d04fca..a3e41f01905 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MultiLabelSoftMarginLossImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MultiLabelSoftMarginLossImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import 
static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,7 +25,7 @@ /** Creates a criterion that optimizes a multi-label one-versus-all * loss based on max-entropy, between input :math:{@code x} and target :math:{@code y} of * size :math:{@code (N, C)}. See - * https://pytorch.org/docs/master/nn.html#torch.nn.MultiLabelSoftMarginLoss to + * https://pytorch.org/docs/main/nn.html#torch.nn.MultiLabelSoftMarginLoss to * learn about the exact behavior of this module. * * See the documentation for {@code torch::nn::MultiLabelSoftMarginLossOptions} class diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MultiLabelSoftMarginLossImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MultiLabelSoftMarginLossImplCloneable.java index e71b9f2b738..5be143e85fc 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MultiLabelSoftMarginLossImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MultiLabelSoftMarginLossImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class MultiLabelSoftMarginLossImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MultiLabelSoftMarginLossOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MultiLabelSoftMarginLossOptions.java index d58f92e2c41..8fda1728056 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MultiLabelSoftMarginLossOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MultiLabelSoftMarginLossOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MultiMarginLossImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MultiMarginLossImpl.java index 1b4818b71ba..7dcaa8b8e46 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MultiMarginLossImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MultiMarginLossImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -25,7 +26,7 @@ * loss (margin-based loss) between input :math:{@code x} (a 2D mini-batch {@code Tensor}) * and output :math:{@code y} (which is a 1D tensor of target class indices, :math:{@code 0 * \leq y \leq \text{x.size}(1)-1}). See - * https://pytorch.org/docs/master/nn.html#torch.nn.MultiMarginLoss to learn + * https://pytorch.org/docs/main/nn.html#torch.nn.MultiMarginLoss to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::MultiMarginLossOptions} class to learn diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MultiMarginLossImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MultiMarginLossImplCloneable.java index 7eb0e9097c6..fdcf9100d01 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MultiMarginLossImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MultiMarginLossImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class MultiMarginLossImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MultiMarginLossOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MultiMarginLossOptions.java index 454e69b02b7..f93462a5318 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MultiMarginLossOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MultiMarginLossOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MultiheadAttentionForwardFuncOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MultiheadAttentionForwardFuncOptions.java index 3e978d73d58..aab9c623799 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MultiheadAttentionForwardFuncOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MultiheadAttentionForwardFuncOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MultiheadAttentionImpl.java 
b/pytorch/src/gen/java/org/bytedeco/pytorch/MultiheadAttentionImpl.java index 3d5886a7b29..fd6c28457c1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MultiheadAttentionImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MultiheadAttentionImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ MultiheadAttention ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies the MultiheadAttention function element-wise. - * See https://pytorch.org/docs/master/nn.html#torch.nn.MultiheadAttention + * See https://pytorch.org/docs/main/nn.html#torch.nn.MultiheadAttention * to learn about the exact behavior of this module. * * See the documentation for {@code torch::nn::MultiheadAttentionOptions} class to diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MultiheadAttentionImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MultiheadAttentionImplCloneable.java index 625b9aa2001..40c6e621608 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MultiheadAttentionImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MultiheadAttentionImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class MultiheadAttentionImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MultiheadAttentionOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MultiheadAttentionOptions.java index 56efe685281..9b6aac1c450 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MultiheadAttentionOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MultiheadAttentionOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/MzZipReaderIterWrapper.java b/pytorch/src/gen/java/org/bytedeco/pytorch/MzZipReaderIterWrapper.java index 6a6c612829b..3a5e0071d39 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/MzZipReaderIterWrapper.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/MzZipReaderIterWrapper.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/NCCLPreMulSumSupplement.java b/pytorch/src/gen/java/org/bytedeco/pytorch/NCCLPreMulSumSupplement.java new file mode 100644 index 00000000000..12746b52d89 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/NCCLPreMulSumSupplement.java @@ -0,0 +1,36 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +// Supplementary data specific to NCCL PREMUL_SUM +// The point of use in ProcessGroupNCCL knows how to unpack it. +@Namespace("c10d") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class NCCLPreMulSumSupplement extends _SupplementBase { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ + public NCCLPreMulSumSupplement(Pointer p) { super(p); } + + public native double double_factor(); public native NCCLPreMulSumSupplement double_factor(double setter); + public native @ByRef Tensor tensor_factor(); public native NCCLPreMulSumSupplement tensor_factor(Tensor setter); + public NCCLPreMulSumSupplement(double f) { super((Pointer)null); allocate(f); } + private native void allocate(double f); + public NCCLPreMulSumSupplement(@ByVal Tensor t) { super((Pointer)null); allocate(t); } + private native void allocate(@ByVal Tensor t); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/NLLLossImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/NLLLossImpl.java index 94915e4af24..28f84772776 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/NLLLossImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/NLLLossImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -22,7 +23,7 @@ /** The negative log likelihood loss. It is useful to train a classification * problem with {@code C} classes. - * See https://pytorch.org/docs/master/nn.html#torch.nn.NLLLoss to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.NLLLoss to learn * about the exact behavior of this module. * * See the documentation for {@code torch::nn::NLLLossOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/NLLLossImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/NLLLossImplCloneable.java index b49ff7957b2..af3a5bbef63 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/NLLLossImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/NLLLossImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class NLLLossImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/NLLLossOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/NLLLossOptions.java index 00874140452..4c24a79e1bc 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/NLLLossOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/NLLLossOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/NameMangler.java b/pytorch/src/gen/java/org/bytedeco/pytorch/NameMangler.java index a5e995488b3..c7507ddfef5 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/NameMangler.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/NameMangler.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/NamedIValue.java b/pytorch/src/gen/java/org/bytedeco/pytorch/NamedIValue.java index 6c711b910dd..c960645de6d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/NamedIValue.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/NamedIValue.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/NamedJitModule.java b/pytorch/src/gen/java/org/bytedeco/pytorch/NamedJitModule.java index 94cdeff700e..b7dae636612 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/NamedJitModule.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/NamedJitModule.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static 
org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/NamedTensor.java b/pytorch/src/gen/java/org/bytedeco/pytorch/NamedTensor.java index fb7bc81c397..b74bd34283c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/NamedTensor.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/NamedTensor.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/NamedTensorMeta.java b/pytorch/src/gen/java/org/bytedeco/pytorch/NamedTensorMeta.java index e45972725d1..7415f637d88 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/NamedTensorMeta.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/NamedTensorMeta.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/NamedTensorMetaInterface.java b/pytorch/src/gen/java/org/bytedeco/pytorch/NamedTensorMetaInterface.java index 99d4312ec0f..40318dec1b0 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/NamedTensorMetaInterface.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/NamedTensorMetaInterface.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace impl diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/NamedTupleConstructor.java b/pytorch/src/gen/java/org/bytedeco/pytorch/NamedTupleConstructor.java index 2bbe3395c5e..58e185edf1d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/NamedTupleConstructor.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/NamedTupleConstructor.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; 
import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/NamedType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/NamedType.java index 5fe564485b6..1bafa2562c2 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/NamedType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/NamedType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/NamedValue.java b/pytorch/src/gen/java/org/bytedeco/pytorch/NamedValue.java index 6d5f49773c2..06084717bcf 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/NamedValue.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/NamedValue.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/NamedValueArrayRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/NamedValueArrayRef.java index b404ed2d2dc..4df8eb688c5 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/NamedValueArrayRef.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/NamedValueArrayRef.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/NamedValueOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/NamedValueOptional.java index 276fe501dcd..46929dfbf3f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/NamedValueOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/NamedValueOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 
+13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class NamedValueOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/NamesMode.java b/pytorch/src/gen/java/org/bytedeco/pytorch/NamesMode.java index d180cd48b44..10769c29305 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/NamesMode.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/NamesMode.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/NativeResolver.java b/pytorch/src/gen/java/org/bytedeco/pytorch/NativeResolver.java index 65eb0601da9..5a54774f0a6 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/NativeResolver.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/NativeResolver.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/NestedTensorImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/NestedTensorImpl.java index 5ac5cdda20b..e7a3375ae50 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/NestedTensorImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/NestedTensorImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/NoGradGuard.java b/pytorch/src/gen/java/org/bytedeco/pytorch/NoGradGuard.java index 3205548b2dc..46bc03659a2 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/NoGradGuard.java +++ 
b/pytorch/src/gen/java/org/bytedeco/pytorch/NoGradGuard.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/NoNamesGuard.java b/pytorch/src/gen/java/org/bytedeco/pytorch/NoNamesGuard.java index 3fa62f8ceb4..ab28743a0df 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/NoNamesGuard.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/NoNamesGuard.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/NoTF32Guard.java b/pytorch/src/gen/java/org/bytedeco/pytorch/NoTF32Guard.java index a11a1f33570..dff2c96ba9d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/NoTF32Guard.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/NoTF32Guard.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/NoTarget.java b/pytorch/src/gen/java/org/bytedeco/pytorch/NoTarget.java index 194f99fabed..f378502c856 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/NoTarget.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/NoTarget.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Node.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Node.java index 1e5d82a6afc..bc7d6767ee9 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Node.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Node.java @@ -4,7 +4,6 @@ import 
org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/NodeCall.java b/pytorch/src/gen/java/org/bytedeco/pytorch/NodeCall.java new file mode 100644 index 00000000000..ba43409927f --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/NodeCall.java @@ -0,0 +1,41 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +@Namespace("torch::dynamo::autograd") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class NodeCall extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public NodeCall(Pointer p) { super(p); } + + public NodeCall(@Cast("uint32_t") int id_, @SharedPtr Node node_) { super((Pointer)null); allocate(id_, node_); } + private native void allocate(@Cast("uint32_t") int id_, @SharedPtr Node node_); + + public native void mark_output(int input_nr, int output_idx); + + public native @Cast("uint32_t") int id(); public native NodeCall id(int setter); + public native @SharedPtr Node node(); public native NodeCall node(Node setter); + public native @StdVector IntPair tensor_pre_hooks(); public native NodeCall tensor_pre_hooks(IntPair setter); + public native @StdVector IntPointer pre_hooks(); public native NodeCall pre_hooks(IntPointer setter); + public native @StdVector IntPointer post_hooks(); public native NodeCall post_hooks(IntPointer setter); + public native @StdVector IntPointer post_acc_grad_hooks(); public native NodeCall post_acc_grad_hooks(IntPointer setter); + public native @StdVector IntPair graph_output(); public native NodeCall graph_output(IntPair setter); + public native @Cast("bool") boolean needed(); public native NodeCall needed(boolean setter); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/NodeCalls.java b/pytorch/src/gen/java/org/bytedeco/pytorch/NodeCalls.java new file mode 100644 index 00000000000..4aff2913d13 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/NodeCalls.java @@ -0,0 +1,41 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static 
org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +@Namespace("torch::dynamo::autograd") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class NodeCalls extends NodeNodeCallMap { + static { Loader.load(); } + /** Default native constructor. */ + public NodeCalls() { super((Pointer)null); allocate(); } + /** Native array allocator. Access with {@link Pointer#position(long)}. */ + public NodeCalls(long size) { super((Pointer)null); allocateArray(size); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public NodeCalls(Pointer p) { super(p); } + private native void allocate(); + private native void allocateArray(long size); + @Override public NodeCalls position(long position) { + return (NodeCalls)super.position(position); + } + @Override public NodeCalls getPointer(long i) { + return new NodeCalls((Pointer)this).offsetAddress(i); + } + + public native @ByRef NodeCall lookup(@SharedPtr Node function); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/NodeNodeCallMap.java b/pytorch/src/gen/java/org/bytedeco/pytorch/NodeNodeCallMap.java new file mode 100644 index 00000000000..b403921e6d9 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/NodeNodeCallMap.java @@ -0,0 +1,47 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + +@Name("std::unordered_map") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class NodeNodeCallMap extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ + public NodeNodeCallMap(Pointer p) { super(p); } + public NodeNodeCallMap() { allocate(); } + private native void allocate(); + + + public boolean empty() { return size() == 0; } + public native long size(); + + @Index(function = "at") public native @ByRef NodeCall get(Node i); + + public native @ByVal Iterator begin(); + public native @ByVal Iterator end(); + @NoOffset @Name("iterator") public static class Iterator extends Pointer { + public Iterator(Pointer p) { super(p); } + public Iterator() { } + + public native @Name("operator ++") @ByRef Iterator increment(); + public native @Name("operator ==") boolean equals(@ByRef Iterator it); + public native @Name("operator *().first") @MemberGetter @Const Node first(); + public native @Name("operator *().second") @MemberGetter @ByRef @Const NodeCall second(); + } +} + diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/NodeSet.java b/pytorch/src/gen/java/org/bytedeco/pytorch/NodeSet.java index 55e7c49b566..c682726a96f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/NodeSet.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/NodeSet.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/NoneType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/NoneType.java index c0937efcabd..df55a0046fa 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/NoneType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/NoneType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/NoneTypePtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/NoneTypePtr.java index ae2e7b7df29..20cfa7084cc 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/NoneTypePtr.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/NoneTypePtr.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Nonlinearity.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Nonlinearity.java index 
3a2964bd9a6..e02b63bc0c0 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Nonlinearity.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Nonlinearity.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/NormalizeFuncOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/NormalizeFuncOptions.java index dcb2c86a67e..d0cb3fa5608 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/NormalizeFuncOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/NormalizeFuncOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/NotImplementedError.java b/pytorch/src/gen/java/org/bytedeco/pytorch/NotImplementedError.java deleted file mode 100644 index d20fe75ede2..00000000000 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/NotImplementedError.java +++ /dev/null @@ -1,29 +0,0 @@ -// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE - -package org.bytedeco.pytorch; - -import org.bytedeco.pytorch.Allocator; -import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; -import org.bytedeco.pytorch.Module; -import org.bytedeco.javacpp.annotation.Cast; -import java.nio.*; -import org.bytedeco.javacpp.*; -import org.bytedeco.javacpp.annotation.*; - -import static org.bytedeco.javacpp.presets.javacpp.*; -import static org.bytedeco.openblas.global.openblas_nolapack.*; -import static org.bytedeco.openblas.global.openblas.*; - -import static org.bytedeco.pytorch.global.torch.*; - - -// Used in ATen for functionality that is not implemented. These turn into -// NotImplementedError when they cross to Python. -@Namespace("c10") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class NotImplementedError extends Error { - static { Loader.load(); } - /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ - public NotImplementedError(Pointer p) { super(p); } - -} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/NumberType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/NumberType.java index f46065557be..63fadffbdf7 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/NumberType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/NumberType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/NumberTypePtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/NumberTypePtr.java index aa5f1d7c564..0523ffa03dc 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/NumberTypePtr.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/NumberTypePtr.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Object.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Obj.java similarity index 62% rename from pytorch/src/gen/java/org/bytedeco/pytorch/Object.java rename to pytorch/src/gen/java/org/bytedeco/pytorch/Obj.java index 365858e9bcc..d94099887f1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Object.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Obj.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,16 +13,18 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // User-defined object. @Name("c10::ivalue::Object") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class Object extends Pointer { +public class Obj extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ - public Object(Pointer p) { super(p); } + public Obj(Pointer p) { super(p); } // In general, class types hold a shared_ptr to its owning CompilationUnit, // so that its type and methods do not get deallocated while the class exists. @@ -31,21 +32,21 @@ public class Object extends Pointer { // inserting a constant object into a Graph would create a reference cycle if // that constant object held a shared_ptr to its CU. 
For these objects we // instatiate them with non-owning references to its CU - public Object(@ByVal WeakOrStrongTypePtr type, @Cast("size_t") long numSlots) { super((Pointer)null); allocate(type, numSlots); } - private native void allocate(@ByVal WeakOrStrongTypePtr type, @Cast("size_t") long numSlots); + public Obj(@ByVal WeakOrStrongTypePtr type, @Cast("size_t") long numSlots) { super((Pointer)null); allocate(type, numSlots); } + @IntrusivePtr @Name("c10::make_intrusive") private native void allocate(@ByVal WeakOrStrongTypePtr type, @Cast("size_t") long numSlots); - public Object(@ByVal StrongTypePtr type, @Cast("size_t") long numSlots) { super((Pointer)null); allocate(type, numSlots); } - private native void allocate(@ByVal StrongTypePtr type, @Cast("size_t") long numSlots); + public Obj(@ByVal StrongTypePtr type, @Cast("size_t") long numSlots) { super((Pointer)null); allocate(type, numSlots); } + @IntrusivePtr @Name("c10::make_intrusive") private native void allocate(@ByVal StrongTypePtr type, @Cast("size_t") long numSlots); - public static native @ByVal ObjPtr create( + public static native @IntrusivePtr("c10::ivalue::Object") @Cast({"", "c10::intrusive_ptr&"}) Obj create( @ByVal WeakOrStrongTypePtr type, @Cast("size_t") long numSlots); - public static native @ByVal ObjPtr create( + public static native @IntrusivePtr("c10::ivalue::Object") @Cast({"", "c10::intrusive_ptr&"}) Obj create( @ByVal StrongTypePtr type, @Cast("size_t") long numSlots); - public static native @ByVal ObjPtr create(@SharedPtr("c10::ClassType") @ByVal ClassType classType, @Cast("size_t") long numSlots); + public static native @IntrusivePtr("c10::ivalue::Object") @Cast({"", "c10::intrusive_ptr&"}) Obj create(@SharedPtr("c10::ClassType") @ByVal ClassType classType, @Cast("size_t") long numSlots); /** * Slot API. 
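The hunk above shows two related changes to the binding for `c10::ivalue::Object`: the Java class is renamed from `Object` to `Obj` (avoiding the clash with `java.lang.Object`), and its factory methods now return `Obj` directly through the new `@IntrusivePtr` mapping instead of the separate `ObjPtr` wrapper, which is deleted later in this diff. A minimal sketch of calling code written against the signatures visible in this hunk; the class and method names in the sketch are illustrative, and obtaining a `ClassType` or `WeakOrStrongTypePtr` is left to the caller.

```java
import org.bytedeco.pytorch.*;

public class ObjCreateSketch {
    // create() now hands back Obj directly (annotated @IntrusivePtr), so the
    // returned object is kept alive by its intrusive refcount without going
    // through the removed ObjPtr wrapper class.
    static Obj newInstance(ClassType classType, long numSlots) {
        return Obj.create(classType, numSlots);
    }

    // The WeakOrStrongTypePtr constructor covers the case described in the
    // comment above: constants embedded in a Graph hold a non-owning
    // reference to their CompilationUnit to avoid a reference cycle.
    static Obj newFromTypePtr(WeakOrStrongTypePtr type, long numSlots) {
        return new Obj(type, numSlots);
    }
}
```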
@@ -91,23 +92,23 @@ public class Object extends Pointer { public native @Const @ByRef IValueVector slots(); public native @SharedPtr("c10::ClassType") @ByVal ClassType type(); - public native @SharedPtr CompilationUnit compilation_unit(); + public native @SharedPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit compilation_unit(); - public native @ByVal ObjPtr copy_to_weak_compilation_ref(); + public native @IntrusivePtr("c10::ivalue::Object") @Cast({"", "c10::intrusive_ptr&"}) Obj copy_to_weak_compilation_ref(); public native void unsafe_make_weak_compilation_ref(); - public native @ByVal ObjPtr copy(); + public native @IntrusivePtr("c10::ivalue::Object") @Cast({"", "c10::intrusive_ptr&"}) Obj copy(); - public native @ByVal ObjPtr deepcopy( - @ByVal(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); - public native @ByVal ObjPtr deepcopy(); + public native @IntrusivePtr("c10::ivalue::Object") @Cast({"", "c10::intrusive_ptr&"}) Obj deepcopy( + @ByVal(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); + public native @IntrusivePtr("c10::ivalue::Object") @Cast({"", "c10::intrusive_ptr&"}) Obj deepcopy(); - public native @ByVal ObjPtr deepcopy( - @ByRef HashAliasedIValueMap memo, - @ByVal(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); - public native @ByVal ObjPtr deepcopy( - @ByRef HashAliasedIValueMap memo); + public native @IntrusivePtr("c10::ivalue::Object") @Cast({"", "c10::intrusive_ptr&"}) Obj deepcopy( + @ByRef HashIdentityIValueMap memo, + @ByVal(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); + public native @IntrusivePtr("c10::ivalue::Object") @Cast({"", "c10::intrusive_ptr&"}) Obj deepcopy( + @ByRef HashIdentityIValueMap memo); public native @Cast("bool") boolean is_weak_compilation_ref(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ObjPtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ObjPtr.java deleted file mode 100644 index 5e99b8a7a68..00000000000 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ObjPtr.java +++ /dev/null @@ -1,154 +0,0 @@ -// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE - -package org.bytedeco.pytorch; - -import org.bytedeco.pytorch.Allocator; -import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; -import org.bytedeco.pytorch.Module; -import org.bytedeco.javacpp.annotation.Cast; -import java.nio.*; -import org.bytedeco.javacpp.*; -import org.bytedeco.javacpp.annotation.*; - -import static org.bytedeco.javacpp.presets.javacpp.*; -import static org.bytedeco.openblas.global.openblas_nolapack.*; -import static org.bytedeco.openblas.global.openblas.*; - -import static org.bytedeco.pytorch.global.torch.*; - - -@Name("c10::intrusive_ptr") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class ObjPtr extends Pointer { - static { Loader.load(); } - /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ - public ObjPtr(Pointer p) { super(p); } - /** Native array allocator. Access with {@link Pointer#position(long)}. 
*/ - public ObjPtr(long size) { super((Pointer)null); allocateArray(size); } - private native void allocateArray(long size); - @Override public ObjPtr position(long position) { - return (ObjPtr)super.position(position); - } - @Override public ObjPtr getPointer(long i) { - return new ObjPtr((Pointer)this).offsetAddress(i); - } - - - public ObjPtr() { super((Pointer)null); allocate(); } - @NoException(true) private native void allocate(); - - public ObjPtr(@ByVal @Cast("std::nullptr_t*") PointerPointer arg0) { super((Pointer)null); allocate(arg0); } - @NoException(true) private native void allocate(@ByVal @Cast("std::nullptr_t*") PointerPointer arg0); - - // This constructor will not increase the ref counter for you. - // We use the tagged dispatch mechanism to explicitly mark this constructor - // to not increase the refcount - public ObjPtr(Object target, @ByVal DontIncreaseRefcount arg1) { super((Pointer)null); allocate(target, arg1); } - @NoException(true) private native void allocate(Object target, @ByVal DontIncreaseRefcount arg1); - - - - public ObjPtr(@ByRef(true) ObjPtr rhs) { super((Pointer)null); allocate(rhs); } - @NoException(true) private native void allocate(@ByRef(true) ObjPtr rhs); - - public native @ByRef @Name("operator =") @NoException(true) ObjPtr put(@ByRef(true) ObjPtr rhs); - - // Assignment is implemented using copy and swap. That's safe for self - // assignment. - // NOLINTNEXTLINE(bugprone-unhandled-self-assignment) - - public native @NoException(true) Object get(); - - public native @ByRef @Name("operator *") @NoException(true) Object multiply(); - - public native @Name("operator ->") @NoException(true) Object access(); - - public native @Cast("bool") @Name("operator bool") @NoException(true) boolean asBoolean(); - - public native @NoException(true) void reset(); - - public native @NoException(true) void swap(@ByRef ObjPtr rhs); - - // We do a lot of null-pointer checks in our code, good to have this be cheap. - public native @Cast("bool") @NoException(true) boolean defined(); - - public native @Cast("uint32_t") @NoException(true) int use_count(); - - public native @Cast("uint32_t") @NoException(true) int weak_use_count(); - - public native @Cast("bool") @NoException(true) boolean unique(); - - /** - * Returns an owning (!) pointer to the underlying object and makes the - * intrusive_ptr instance invalid. That means the refcount is not decreased. - * You *must* put the returned pointer back into a intrusive_ptr using - * intrusive_ptr::reclaim(ptr) to properly destruct it. - * This is helpful for C APIs. - */ - public native @NoException(true) Object release(); - - /** - * Takes an owning pointer to TTarget* and creates an intrusive_ptr that takes - * over ownership. That means the refcount is not increased. - * This is the counter-part to intrusive_ptr::release() and the pointer - * passed in *must* have been created using intrusive_ptr::release(). - */ - public static native @ByVal ObjPtr reclaim(Object owning_ptr); - - /** - * Takes an owning pointer to TTarget* and creates an intrusive_ptr - * representing a new reference, i.e. the raw pointer retains - * ownership. - */ - public static native @ByVal ObjPtr reclaim_copy(Object owning_ptr); - - /** - * Allocate a heap object with args and wrap it inside a intrusive_ptr and - * incref. This is a helper function to let make_intrusive() access private - * intrusive_ptr constructors. - */ - - /** - * Turn a new instance of TTarget (e.g., literally allocated - * using new TTarget(...) into an intrusive_ptr. 
If possible, - * use intrusive_ptr::make instead which statically guarantees - * that the allocation was done properly. - * - * At the moment, the only reason this method exists is because - * pybind11 holder types expect to be able to allocate in - * this way (because pybind11 handles the new allocation itself). - */ - public static native @ByVal ObjPtr unsafe_steal_from_new(Object raw_ptr); - - /** - * Turn an instance of TTarget that should not be reference counted - * (e.g., allocated into an arena with placement new) into an - * intrusive_ptr. This is gratuitously unsafe and should only be - * used if you can guarantee that the pointer will not escape and be - * refcounted as normal. - * - * {@code expected_decrefs} is a debugging parameter: it indicates the - * number of strong owners the intrusive_ptr_target in question is - * expected to get. In most use cases, this will likely be 1. - * - * The reason this method exists is for manually sharing - * StorageImpls across Tensors in the static runtime. It needs - * access to private intrusive_ptr members so that the refcounts can - * be initialized to custom values. - */ - public static native @ByVal ObjPtr unsafe_adapt_non_heap_allocated( - Object raw_ptr, - @Cast("uint32_t") int expected_decrefs); - - /** - * Turn a **non-owning raw pointer** to an intrusive_ptr. It is - * the moral equivalent of enable_shared_from_this on a shared pointer. - * - * This method is only valid for objects that are already live. If - * you are looking for the moral equivalent of unique_ptr(T*) - * constructor, see steal_from_new. - * - * TODO: https://github.com/pytorch/pytorch/issues/56482 - */ - public static native @ByVal ObjPtr unsafe_reclaim_from_nonowning(Object raw_ptr); -} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/OnnxfiBackendSystemError.java b/pytorch/src/gen/java/org/bytedeco/pytorch/OnnxfiBackendSystemError.java deleted file mode 100644 index 0f850c376dd..00000000000 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/OnnxfiBackendSystemError.java +++ /dev/null @@ -1,29 +0,0 @@ -// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE - -package org.bytedeco.pytorch; - -import org.bytedeco.pytorch.Allocator; -import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; -import org.bytedeco.pytorch.Module; -import org.bytedeco.javacpp.annotation.Cast; -import java.nio.*; -import org.bytedeco.javacpp.*; -import org.bytedeco.javacpp.annotation.*; - -import static org.bytedeco.javacpp.presets.javacpp.*; -import static org.bytedeco.openblas.global.openblas_nolapack.*; -import static org.bytedeco.openblas.global.openblas.*; - -import static org.bytedeco.pytorch.global.torch.*; - - -// Used in Onnxifi backend lowering. These turn into -// ExitException when they cross to Python. -@Namespace("c10") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class OnnxfiBackendSystemError extends Error { - static { Loader.load(); } - /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ - public OnnxfiBackendSystemError(Pointer p) { super(p); } - -} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/OpRegistrationListener.java b/pytorch/src/gen/java/org/bytedeco/pytorch/OpRegistrationListener.java index f5b427c3776..d2cef923ab4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/OpRegistrationListener.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/OpRegistrationListener.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/OperandInfo.java b/pytorch/src/gen/java/org/bytedeco/pytorch/OperandInfo.java index 2e02db50df5..12809e0a28b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/OperandInfo.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/OperandInfo.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace internal @@ -70,6 +71,17 @@ public class OperandInfo extends Pointer { public native @Cast("bool") boolean is_output(); public native OperandInfo is_output(boolean setter); + // will_resize is only for output tensor. + // 1) Functional call(like torch.add(self, other)): output tensor is + // undefined, and pytorch creates a new tensor by using common shape + // and computed stride in TensorIterator; + // 2) Inplace call(like torch.add_(self, other)): output tensor is same + // with input tensor, and can't to modify tensor's size and stride; + // 3) Op call with output(like torch.add(self, other, out = output)): + // output tensor is defined, but tensor shape maybe different with common + // shape. If tensor shape is not same with common shape, this output + // tensor will be resized by using common shape and computed stride in + // TensorIterator. Otherwise can't modify tensor's size and stride. 
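The comment block added above distinguishes the three calling conventions that determine whether `will_resize` applies to an output operand. A minimal sketch of those same three shapes through the Java presets; it assumes the usual generated overloads (`ones(long...)`, `empty(long...)`, and `add_out(Tensor, Tensor, Tensor)`) are present as in earlier releases, so treat it as illustrative rather than a verified snippet from this build.

```java
import org.bytedeco.pytorch.*;
import static org.bytedeco.pytorch.global.torch.*;

public class TensorIteratorOutputSketch {
    public static void main(String[] args) {
        Tensor a = ones(2, 2);
        Tensor b = ones(2, 2);

        // 1) Functional call: the output starts out undefined, so
        //    TensorIterator allocates it with the common shape and strides.
        Tensor sum = a.add(b);

        // 2) In-place call: the output is the input itself, so its size
        //    and strides cannot be modified by the iterator.
        a.add_(b);

        // 3) Out-variant call: the provided output is defined; if its shape
        //    differs from the common shape, will_resize triggers a resize to
        //    the common shape, otherwise size and strides are left alone.
        Tensor out = empty(0);
        add_out(out, a, b);
    }
}
```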
public native @Cast("bool") boolean will_resize(); public native OperandInfo will_resize(boolean setter); public native @Cast("bool") boolean is_read_write(); public native OperandInfo is_read_write(boolean setter); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Operation.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Operation.java index 7e14fdf7ee2..cc656b79f02 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Operation.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Operation.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Operator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Operator.java index d663dc8eb35..de65bc8d1cb 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Operator.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Operator.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/OperatorHandle.java b/pytorch/src/gen/java/org/bytedeco/pytorch/OperatorHandle.java index caf4c66c3fd..4e982e07e58 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/OperatorHandle.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/OperatorHandle.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -47,6 +48,9 @@ public class OperatorHandle extends Pointer { public native @Cast("bool") boolean hasKernelForDispatchKey(DispatchKey k); public native @Cast("bool") boolean hasKernelForDispatchKey(@Cast("c10::DispatchKey") short k); + public native @Cast("bool") boolean isKernelFallthroughKernel(DispatchKey k); + public native @Cast("bool") boolean isKernelFallthroughKernel(@Cast("c10::DispatchKey") short k); + public native @Cast("bool") boolean hasKernelForAnyDispatchKey(@ByVal DispatchKeySet k); public native @Cast("bool") boolean hasComputedKernelForDispatchKey(DispatchKey k); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/OperatorHandleOptional.java 
b/pytorch/src/gen/java/org/bytedeco/pytorch/OperatorHandleOptional.java index 862b0ef31f8..33f39a1229e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/OperatorHandleOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/OperatorHandleOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class OperatorHandleOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/OperatorKernel.java b/pytorch/src/gen/java/org/bytedeco/pytorch/OperatorKernel.java index 4b37ec7c434..cfd837b88d3 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/OperatorKernel.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/OperatorKernel.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/OperatorName.java b/pytorch/src/gen/java/org/bytedeco/pytorch/OperatorName.java index 02072e0385d..2f4266f1195 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/OperatorName.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/OperatorName.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/OperatorNameOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/OperatorNameOptional.java index 0c48f85a086..1585e6a8b8a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/OperatorNameOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/OperatorNameOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static 
org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class OperatorNameOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/OperatorSet.java b/pytorch/src/gen/java/org/bytedeco/pytorch/OperatorSet.java index f4429fe1ad0..c49d704ace2 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/OperatorSet.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/OperatorSet.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/OperatorVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/OperatorVector.java index fa8ce574213..e799c0e57de 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/OperatorVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/OperatorVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Optimizer.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Optimizer.java index a5465d88544..6e973b702b7 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Optimizer.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Optimizer.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableAdagradOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableAdagradOptions.java index ad3e66db5e4..7b90c2386f5 100644 --- 
a/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableAdagradOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableAdagradOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableAdagradParamState.java b/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableAdagradParamState.java index aa7d7d64b3c..bee58c474a1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableAdagradParamState.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableAdagradParamState.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableAdamOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableAdamOptions.java index deb1217cc45..5e2a7011ce3 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableAdamOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableAdamOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableAdamParamState.java b/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableAdamParamState.java index b10d2c9ee42..feb541a92a0 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableAdamParamState.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableAdamParamState.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import 
static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableAdamWOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableAdamWOptions.java index a3d5dbd23a4..e2557b4ade2 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableAdamWOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableAdamWOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableAdamWParamState.java b/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableAdamWParamState.java index 43c575c2015..94bc0030c39 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableAdamWParamState.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableAdamWParamState.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableLBFGSOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableLBFGSOptions.java index da7db452ea4..d2fe61ff44c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableLBFGSOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableLBFGSOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableLBFGSParamState.java b/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableLBFGSParamState.java index e24f54f598d..571250b316a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableLBFGSParamState.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableLBFGSParamState.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static 
org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableRMSpropOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableRMSpropOptions.java index 1c225e7da78..feb5bc6d276 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableRMSpropOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableRMSpropOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableRMSpropParamState.java b/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableRMSpropParamState.java index 6bead639efa..33f6bda5343 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableRMSpropParamState.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableRMSpropParamState.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableSGDOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableSGDOptions.java index 1a3eaf99a7b..654a0d35718 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableSGDOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableSGDOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableSGDParamState.java b/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableSGDParamState.java index 9e1c2c58814..46435926338 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableSGDParamState.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerCloneableSGDParamState.java @@ -4,7 +4,6 
@@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerOptions.java index 13f688f1b71..993b80eab2a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerParamGroup.java b/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerParamGroup.java index 2d695459a30..fc5c0095262 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerParamGroup.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerParamGroup.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerParamGroupVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerParamGroupVector.java index a195ef91e54..fffef85af2b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerParamGroupVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerParamGroupVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerParamState.java b/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerParamState.java index 6d1c1350059..327cbbb6dd0 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerParamState.java +++ 
b/pytorch/src/gen/java/org/bytedeco/pytorch/OptimizerParamState.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/OptionalDeviceGuard.java b/pytorch/src/gen/java/org/bytedeco/pytorch/OptionalDeviceGuard.java index fc259955adb..8c2c2a45155 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/OptionalDeviceGuard.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/OptionalDeviceGuard.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/OptionalStreamGuard.java b/pytorch/src/gen/java/org/bytedeco/pytorch/OptionalStreamGuard.java new file mode 100644 index 00000000000..70c99f2e9b5 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/OptionalStreamGuard.java @@ -0,0 +1,86 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +/** + * An OptionalStreamGuard is an RAII class that sets a device to some value on + * initialization, and resets the device to its original value on destruction. + * See OptionalDeviceGuard for more guidance on how to use this class. + */ +@Namespace("c10") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class OptionalStreamGuard extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public OptionalStreamGuard(Pointer p) { super(p); } + /** Native array allocator. Access with {@link Pointer#position(long)}. 
*/ + public OptionalStreamGuard(long size) { super((Pointer)null); allocateArray(size); } + private native void allocateArray(long size); + @Override public OptionalStreamGuard position(long position) { + return (OptionalStreamGuard)super.position(position); + } + @Override public OptionalStreamGuard getPointer(long i) { + return new OptionalStreamGuard((Pointer)this).offsetAddress(i); + } + + /** Create an uninitialized guard. */ + public OptionalStreamGuard() { super((Pointer)null); allocate(); } + private native void allocate(); + + /** Set the current device to the device associated with the passed stream, + * and set the current stream on that device to the passed stream. */ + public OptionalStreamGuard(@ByVal Stream stream) { super((Pointer)null); allocate(stream); } + private native void allocate(@ByVal Stream stream); + + /** Set the current device to the device associated with the passed stream, + * and set the current stream on that device to the passed stream, + * if the passed stream is not nullopt. */ + public OptionalStreamGuard(@ByVal StreamOptional stream_opt) { super((Pointer)null); allocate(stream_opt); } + private native void allocate(@ByVal StreamOptional stream_opt); + + /** Copy is disallowed */ + + + + // See Note [Move construction for RAII guards is tricky] + + + // See Note [Move assignment for RAII guards is tricky] + + + /** Resets the currently set stream to the original stream and + * the currently set device to the original device. Then, + * set the current device to the device associated with the passed stream, + * and set the current stream on that device to the passed stream. + * Initializes the guard if it was not previously initialized. */ + public native void reset_stream(@ByVal Stream stream); + + /** Returns the stream that was set at the time the guard was most recently + * initialized, or nullopt if the guard is uninitialized. */ + public native @ByVal StreamOptional original_stream(); + + /** Returns the most recent stream that was set using this stream guard, + * either from construction, or via reset_stream, if the guard is + * initialized, or nullopt if the guard is uninitialized. */ + public native @ByVal StreamOptional current_stream(); + + /** Restore the original device and stream, resetting this guard to + * uninitialized state. 
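The javadoc above describes `OptionalStreamGuard` as an RAII guard that restores the original device and stream when it is destroyed. On the Java side, JavaCPP's `Pointer` implements `AutoCloseable`, so the destructor can be tied to a try-with-resources block; the sketch below assumes that mapping, leaves obtaining a concrete `Stream` to the caller, and uses illustrative names throughout.

```java
import org.bytedeco.pytorch.*;

public class StreamGuardSketch {
    // Runs work with `stream` made current, then restores the original
    // device/stream when the guard is closed (its C++ destructor runs).
    static void runOnStream(Stream stream, Runnable work) {
        try (OptionalStreamGuard guard = new OptionalStreamGuard(stream)) {
            StreamOptional original = guard.original_stream(); // stream in effect before the guard
            work.run();
            // guard.reset_stream(anotherStream); // hypothetical: redirect the guard mid-flight
        }
    }
}
```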
*/ + public native void reset(); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/OptionalType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/OptionalType.java index 82a57d2e257..b5778f168a7 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/OptionalType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/OptionalType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/OutputArchive.java b/pytorch/src/gen/java/org/bytedeco/pytorch/OutputArchive.java index 001aab077c7..95d1757221d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/OutputArchive.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/OutputArchive.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace torch @@ -32,8 +33,8 @@ public class OutputArchive extends Pointer { return new OutputArchive((Pointer)this).offsetAddress(i); } - public OutputArchive(@SharedPtr CompilationUnit cu) { super((Pointer)null); allocate(cu); } - private native void allocate(@SharedPtr CompilationUnit cu); + public OutputArchive(@SharedPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit cu) { super((Pointer)null); allocate(cu); } + private native void allocate(@SharedPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit cu); public OutputArchive() { super((Pointer)null); allocate(); } private native void allocate(); @@ -46,7 +47,7 @@ public class OutputArchive extends Pointer { - public native @SharedPtr CompilationUnit compilation_unit(); + public native @SharedPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit compilation_unit(); /** Writes an {@code IValue} to the {@code OutputArchive}. 
*/ public native void write(@StdString BytePointer key, @Const @ByRef IValue ivalue); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PODLocalDispatchKeySet.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PODLocalDispatchKeySet.java index dff1e8e5f41..4651763b4fb 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PODLocalDispatchKeySet.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PODLocalDispatchKeySet.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PReLUImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PReLUImpl.java index 03028fde6b1..64e89fdc576 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PReLUImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PReLUImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ PReLU ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies the PReLU function element-wise. - * See https://pytorch.org/docs/master/nn.html#torch.nn.PReLU to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.PReLU to learn * about the exact behavior of this module. * * See the documentation for {@code torch::nn::PReLUOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PReLUImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PReLUImplCloneable.java index 1fe1edfa502..413fc341488 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PReLUImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PReLUImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class PReLUImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PReLUOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PReLUOptions.java index 44b2e6d028c..a013cf52561 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PReLUOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PReLUOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PackedSequence.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PackedSequence.java index 95fe12a8584..d59f2ddcd34 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PackedSequence.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PackedSequence.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PadFuncOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PadFuncOptions.java index 53348dc7aa3..7490348186e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PadFuncOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PadFuncOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PaddingMode.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PaddingMode.java index bb8dd0e1dd1..d704520ee3e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PaddingMode.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PaddingMode.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static 
org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PairwiseDistanceImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PairwiseDistanceImpl.java index 0749d9b5525..aa39db59217 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PairwiseDistanceImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PairwiseDistanceImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -22,7 +23,7 @@ /** Returns the batchwise pairwise distance between vectors :math:{@code v_1}, * :math:{@code v_2} using the p-norm. - * See https://pytorch.org/docs/master/nn.html#torch.nn.PairwiseDistance to + * See https://pytorch.org/docs/main/nn.html#torch.nn.PairwiseDistance to * learn about the exact behavior of this module. * * See the documentation for {@code torch::nn::PairwiseDistanceOptions} class to diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PairwiseDistanceImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PairwiseDistanceImplCloneable.java index 574eb297a8d..3d4e899ffd4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PairwiseDistanceImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PairwiseDistanceImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class PairwiseDistanceImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PairwiseDistanceOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PairwiseDistanceOptions.java index 1b393c4f8e7..a62493d81af 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PairwiseDistanceOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PairwiseDistanceOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Param.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Param.java index ea980cf366f..98a35fb7ab7 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Param.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Param.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,8 +25,8 @@ public class Param extends TreeView { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public Param(Pointer p) { super(p); } - public Param(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public Param(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public static native @ByVal Param create( @Const @ByRef SourceRange range, @Const @ByRef Ident ident, diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ParamList.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ParamList.java index fc71decfceb..42ed7d7326e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ParamList.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ParamList.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -25,8 +26,8 @@ public class ParamList extends TreeView { public ParamList(Pointer p) { super(p); } - public ParamList(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public ParamList(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @ByVal @Cast("torch::jit::List::iterator*") ParamListIterator begin(); public native @ByVal @Cast("torch::jit::List::iterator*") ParamListIterator end(); public native @Cast("bool") boolean empty(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ParamListIterator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ParamListIterator.java index 41e53b57fc0..326bd1dd6aa 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ParamListIterator.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ParamListIterator.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,8 +25,8 @@ public class ParamListIterator extends Pointer { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public ParamListIterator(Pointer p) { super(p); } - public ParamListIterator(@ByVal @Cast("torch::jit::TreeList::const_iterator*") TreeRef it) { super((Pointer)null); allocate(it); } - private native void allocate(@ByVal @Cast("torch::jit::TreeList::const_iterator*") TreeRef it); + public ParamListIterator(@ByVal @Cast("torch::jit::TreeList::const_iterator*") Tree it) { super((Pointer)null); allocate(it); } + private native void allocate(@ByVal @Cast("torch::jit::TreeList::const_iterator*") Tree it); public native @Cast("bool") @Name("operator !=") boolean notEquals(@Const @ByRef ParamListIterator rhs); public native @Cast("bool") @Name("operator ==") boolean equals(@Const @ByRef ParamListIterator rhs); public native @ByVal @Name("operator *") Param multiply(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ParameterDictImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ParameterDictImpl.java index c0a10faa889..e8af905c0eb 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ParameterDictImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ParameterDictImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ParameterDictImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ParameterDictImplCloneable.java index 933a91bec93..12ef18c59c1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ParameterDictImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ParameterDictImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class ParameterDictImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ParameterListImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ParameterListImpl.java index 8db377e6ffa..4b0a927d464 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ParameterListImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ParameterListImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ParameterListImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ParameterListImplCloneable.java index b60913f11ad..736b35f3946 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ParameterListImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ParameterListImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class ParameterListImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ParameterPolicy.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ParameterPolicy.java index e6d40679f47..b309f6665f0 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ParameterPolicy.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ParameterPolicy.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Pass.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Pass.java index 9a9472aa0b0..5e1f68c2a2b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Pass.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Pass.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,7 +25,7 @@ public class Pass extends Stmt { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public Pass(Pointer p) { super(p); } - public Pass(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public Pass(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public static native @ByVal Pass create(@Const @ByRef SourceRange range); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Pickler.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Pickler.java index 0bbb1c43dc9..cada66973ad 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Pickler.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Pickler.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PixelShuffleImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PixelShuffleImpl.java index 15559a0c629..6831fdd373a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PixelShuffleImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PixelShuffleImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,7 +25,7 @@ /** Rearranges elements in a tensor of shape :math:{@code (*, C \times r^2, H, W)} * to a tensor of shape :math:{@code (*, C, H \times r, W \times r)}, where r is an * upscale factor. See - * https://pytorch.org/docs/master/nn.html#torch.nn.PixelShuffle to learn about + * https://pytorch.org/docs/main/nn.html#torch.nn.PixelShuffle to learn about * the exact behavior of this module. 
* * See the documentation for {@code torch::nn::PixelShuffleOptions} class to learn diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PixelShuffleImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PixelShuffleImplCloneable.java index a92efd00b13..845136d230d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PixelShuffleImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PixelShuffleImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class PixelShuffleImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PixelShuffleOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PixelShuffleOptions.java index c13fc643205..e0cfade43c9 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PixelShuffleOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PixelShuffleOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PixelUnshuffleImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PixelUnshuffleImpl.java index 5872807b44d..8db424b09e4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PixelUnshuffleImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PixelUnshuffleImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -23,7 +24,7 @@ /** Reverses the PixelShuffle operation by rearranging elements in a tensor of * shape :math:{@code (*, C, H \times r, W \times r)} to a tensor of shape :math:{@code (*, * C \times r^2, H, W)}, where r is a downscale factor. 
See - * https://pytorch.org/docs/master/nn.html#torch.nn.PixelUnshuffle to learn + * https://pytorch.org/docs/main/nn.html#torch.nn.PixelUnshuffle to learn * about the exact behavior of this module. * * See the documentation for {@code torch::nn::PixelUnshuffleOptions} class to learn diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PixelUnshuffleImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PixelUnshuffleImplCloneable.java index 4cbbb13342c..96468042436 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PixelUnshuffleImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PixelUnshuffleImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class PixelUnshuffleImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PixelUnshuffleOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PixelUnshuffleOptions.java index 57313a33b26..0304e438185 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PixelUnshuffleOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PixelUnshuffleOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PlacementDeleteContext.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PlacementDeleteContext.java index 0ce81f3f9ff..34302ad45e3 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PlacementDeleteContext.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PlacementDeleteContext.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git 
a/pytorch/src/gen/java/org/bytedeco/pytorch/PointerPair.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PointerPair.java index 56e96e03043..940f83bdaf1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PointerPair.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PointerPair.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PointerPairOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PointerPairOptional.java index 09e453e1574..e55489b8736 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PointerPairOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PointerPairOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class PointerPairOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PoissonNLLLossImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PoissonNLLLossImpl.java index 984c1441432..b95adb73bc2 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PoissonNLLLossImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PoissonNLLLossImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -22,7 +23,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Negative log likelihood loss with Poisson distribution of target. - * See https://pytorch.org/docs/master/nn.html#torch.nn.PoissonNLLLoss to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.PoissonNLLLoss to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::PoissonNLLLossOptions} class to learn diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PoissonNLLLossImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PoissonNLLLossImplCloneable.java index 30d27fcf1dc..15e8c4e0c8c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PoissonNLLLossImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PoissonNLLLossImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class PoissonNLLLossImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PoissonNLLLossOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PoissonNLLLossOptions.java index 0731f34639f..4d5727403c8 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PoissonNLLLossOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PoissonNLLLossOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PostAccumulateGradHook.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PostAccumulateGradHook.java index 2e4e9de3363..69cfe774f7b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PostAccumulateGradHook.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PostAccumulateGradHook.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PrefixStore.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PrefixStore.java new file mode 100644 index 00000000000..cbcb83a43a0 --- /dev/null +++ 
b/pytorch/src/gen/java/org/bytedeco/pytorch/PrefixStore.java @@ -0,0 +1,83 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +@Namespace("c10d") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class PrefixStore extends Store { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public PrefixStore(Pointer p) { super(p); } + + public PrefixStore(@StdString BytePointer prefix, @IntrusivePtr("c10d::Store") @Cast({"", "c10::intrusive_ptr&"}) Store store) { super((Pointer)null); allocate(prefix, store); } + private native void allocate(@StdString BytePointer prefix, @IntrusivePtr("c10d::Store") @Cast({"", "c10::intrusive_ptr&"}) Store store); + public PrefixStore(@StdString String prefix, @IntrusivePtr("c10d::Store") @Cast({"", "c10::intrusive_ptr&"}) Store store) { super((Pointer)null); allocate(prefix, store); } + private native void allocate(@StdString String prefix, @IntrusivePtr("c10d::Store") @Cast({"", "c10::intrusive_ptr&"}) Store store); + public native void set(@StdString BytePointer key, @Cast("const std::vector*") @ByRef ByteVector value); + public native void set(@StdString String key, @Cast("const std::vector*") @ByRef ByteVector value); + public native @ByVal @Cast("std::vector*") ByteVector compareSet( + @StdString BytePointer key, + @Cast("const std::vector*") @ByRef ByteVector expectedValue, + @Cast("const std::vector*") @ByRef ByteVector desiredValue); + public native @ByVal @Cast("std::vector*") ByteVector compareSet( + @StdString String key, + @Cast("const std::vector*") @ByRef ByteVector expectedValue, + @Cast("const std::vector*") @ByRef ByteVector desiredValue); + + public native @ByVal @Cast("std::vector*") ByteVector get(@StdString BytePointer key); + public native @ByVal @Cast("std::vector*") ByteVector get(@StdString String key); + + public native @Cast("int64_t") long add(@StdString BytePointer key, @Cast("int64_t") long value); + public native @Cast("int64_t") long add(@StdString String key, @Cast("int64_t") long value); + + public native @Cast("bool") boolean deleteKey(@StdString BytePointer key); + public native @Cast("bool") boolean deleteKey(@StdString String key); + + public native @Cast("int64_t") long getNumKeys(); + + public native @Cast("bool") boolean check(@Const @ByRef StringVector keys); + + public native @Name("wait") void _wait(@Const @ByRef StringVector keys); + + public native @Name("wait") void _wait( + @Const @ByRef StringVector keys, + @Const @ByRef Milliseconds timeout); + + public native @Const @ByRef @NoException(true) Milliseconds getTimeout(); + + public native void setTimeout(@Const @ByRef Milliseconds timeout); + + public native void append(@StdString BytePointer key, @Cast("const std::vector*") @ByRef ByteVector value); + public native void append(@StdString String key, @Cast("const std::vector*") @ByRef ByteVector 
value); + + public native @Cast("std::vector*") @StdVector ByteVector multiGet( + @Const @ByRef StringVector keys); + + public native void multiSet( + @Const @ByRef StringVector keys, + @Cast("std::vector*") @StdVector ByteVector values); + + // Returns true if this store support append, multiGet and multiSet + public native @Cast("bool") boolean hasExtendedApi(); + + public native @IntrusivePtr("c10d::Store") @Cast({"", "c10::intrusive_ptr&"}) Store getUnderlyingStore(); + + // Recursively to fetch the store before layers of wrapping with PrefixStore. + public native @IntrusivePtr("c10d::Store") @Cast({"", "c10::intrusive_ptr&"}) Store getUnderlyingNonPrefixStore(); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PrintValue.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PrintValue.java index bee670314e7..0562b450908 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PrintValue.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PrintValue.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PrivateUse1HooksArgs.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PrivateUse1HooksArgs.java index 43fba130eee..3a1c3b3ee1d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PrivateUse1HooksArgs.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PrivateUse1HooksArgs.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PrivateUse1HooksInterface.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PrivateUse1HooksInterface.java index a5c4e290442..1ddeed881cd 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PrivateUse1HooksInterface.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PrivateUse1HooksInterface.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ProcessGroup.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ProcessGroup.java new file mode 100644 index 
00000000000..14699eddba4 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ProcessGroup.java @@ -0,0 +1,353 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +// ProcessGroup is a base class that captures collective and point to +// point communication in a fixed set of processes. +// +// The functions specified in the class below describe the API alone; +// implementations are provided in subclasses. +// +// Every function that performs I/O is executed asynchronously by a +// thread pool owned by the ProcessGroup (by default). They return an +// object that can be used to wait for completion or error. +// +// The ProcessGroup can instantiate subgroups with fewer or an equal +// number of members. Implementations must take care that multiple +// process groups can be used in parallel and synchronize accordingly. +// +// The ProcessGroup assumes a fixed set of processes. If the set +// changes, existing instances must be destructed and instantiation +// and initialization must start from scratch. For members of the +// process group to find each other (referred to as rendezvous from +// hereon) +// +@Namespace("c10d") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class ProcessGroup extends CustomClassHolder { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public ProcessGroup(Pointer p) { super(p); } + + // ProcessGroup Options is a base struct that defines the basic options + // when constructing a ProcessGroup. Each ProcessGroup subclass should + // extend this struct and define its options if it wants to provide more + // config options (beyond basic ones defined here) to end user. + @NoOffset public static class Options extends CustomClassHolder { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ + public Options(Pointer p) { super(p); } + + public Options( + @StdString BytePointer backend, + @ByVal(nullValue = "std::chrono::milliseconds(kProcessGroupDefaultTimeout)") Milliseconds timeout) { super((Pointer)null); allocate(backend, timeout); } + @IntrusivePtr @Name("c10::make_intrusive") private native void allocate( + @StdString BytePointer backend, + @ByVal(nullValue = "std::chrono::milliseconds(kProcessGroupDefaultTimeout)") Milliseconds timeout); + public Options( + @StdString BytePointer backend) { super((Pointer)null); allocate(backend); } + @IntrusivePtr @Name("c10::make_intrusive") private native void allocate( + @StdString BytePointer backend); + public Options( + @StdString String backend, + @ByVal(nullValue = "std::chrono::milliseconds(kProcessGroupDefaultTimeout)") Milliseconds timeout) { super((Pointer)null); allocate(backend, timeout); } + @IntrusivePtr @Name("c10::make_intrusive") private native void allocate( + @StdString String backend, + @ByVal(nullValue = "std::chrono::milliseconds(kProcessGroupDefaultTimeout)") Milliseconds timeout); + public Options( + @StdString String backend) { super((Pointer)null); allocate(backend); } + @IntrusivePtr @Name("c10::make_intrusive") private native void allocate( + @StdString String backend); + + public native @ByRef Milliseconds timeout(); public native Options timeout(Milliseconds setter); + + // backend name + // NOLINTNEXTLINE(cppcoreguidelines-avoid-const-or-ref-data-members) + @MemberGetter public native @StdString BytePointer backend(); + } + + public enum BackendType { + UNDEFINED((byte)(0)), + GLOO((byte)(1)), + NCCL((byte)(2)), + UCC((byte)(3)), + MPI((byte)(4)), + CUSTOM((byte)(5)); + + public final byte value; + private BackendType(byte v) { this.value = v; } + private BackendType(BackendType e) { this.value = e.value; } + public BackendType intern() { for (BackendType e : values()) if (e.value == value) return e; return this; } + @Override public String toString() { return intern().name(); } + } + + // Not used, set for backwards compatibility and only used for TypeDef in + // Ops.cpp + public ProcessGroup(int rank, int size) { super((Pointer)null); allocate(rank, size); } + @IntrusivePtr @Name("c10::make_intrusive") private native void allocate(int rank, int size); + + public ProcessGroup( + @IntrusivePtr("c10d::Store") @Cast({"", "c10::intrusive_ptr&"}) Store store, + int rank, + int size, + @IntrusivePtr("c10d::ProcessGroup::Options") @Cast({"", "c10::intrusive_ptr&"}) Options options) { super((Pointer)null); allocate(store, rank, size, options); } + @IntrusivePtr @Name("c10::make_intrusive") private native void allocate( + @IntrusivePtr("c10d::Store") @Cast({"", "c10::intrusive_ptr&"}) Store store, + int rank, + int size, + @IntrusivePtr("c10d::ProcessGroup::Options") @Cast({"", "c10::intrusive_ptr&"}) Options options); + + public native int getRank(); + + public native int getSize(); + + // Returns an unique opaque ID of this process group object. + public native @Cast("int64_t") long getID(); + + // Returns an unique opaque ID of a backend for the specific backend type + // that can correlate with this process group's collectives. 
+ public native @Cast("int64_t") long getBackendID(BackendType backend_type); + public native @Cast("int64_t") long getBackendID(@Cast("c10d::ProcessGroup::BackendType") byte backend_type); + + public native @StdString BytePointer getBackendName(); + + public native BackendType getBackendType(); + + public native void startCoalescing(DeviceType deviceType); + public native void startCoalescing(@Cast("c10::DeviceType") byte deviceType); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work endCoalescing(DeviceType deviceType); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work endCoalescing(@Cast("c10::DeviceType") byte deviceType); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work broadcast( + @ByRef TensorVector tensors, + @Const @ByRef(nullValue = "c10d::BroadcastOptions()") BroadcastOptions opts); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work broadcast( + @ByRef TensorVector tensors); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work allreduce( + @ByRef TensorVector tensors, + @Const @ByRef(nullValue = "c10d::AllreduceOptions()") AllreduceOptions opts); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work allreduce( + @ByRef TensorVector tensors); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work allreduce_coalesced( + @ByRef TensorVector tensors, + @Const @ByRef(nullValue = "c10d::AllreduceCoalescedOptions()") AllreduceCoalescedOptions opts); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work allreduce_coalesced( + @ByRef TensorVector tensors); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work reduce( + @ByRef TensorVector tensors, + @Const @ByRef(nullValue = "c10d::ReduceOptions()") ReduceOptions opts); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work reduce( + @ByRef TensorVector tensors); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work allgather( + @StdVector TensorVector outputTensors, + @ByRef TensorVector inputTensors, + @Const @ByRef(nullValue = "c10d::AllgatherOptions()") AllgatherOptions opts); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work allgather( + @StdVector TensorVector outputTensors, + @ByRef TensorVector inputTensors); + + // Gathers a single tensor inputBuffer into a single buffer outputBuffer that + // is interpreted as a contiguous collection of size inputBuffer * WORLD_SIZE. + // For implementers of ProcessGroup API and advanced users only. + // Note: this function will be deprecated in near future. + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work _allgather_base( + @ByRef Tensor outputBuffer, + @ByRef Tensor inputBuffer, + @Const @ByRef(nullValue = "c10d::AllgatherOptions()") AllgatherOptions opts); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work _allgather_base( + @ByRef Tensor outputBuffer, + @ByRef Tensor inputBuffer); + + // This function is deprecated and will be moved out of ProcessGroup to comms: + // * do not add dependencies on this function, + // * do not implement it in your ProcessGroup, implement _allgather_base + // instead. 
+ public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work allgather_coalesced( + @StdVector TensorVector outputTensorLists, + @ByRef TensorVector inputTensors, + @Const @ByRef(nullValue = "c10d::AllgatherOptions()") AllgatherOptions opts); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work allgather_coalesced( + @StdVector TensorVector outputTensorLists, + @ByRef TensorVector inputTensors); + + // This function is a coalesced version of `allgather_into_tensor` (currently + // still named as `_allgather_base`). Each tensor in the vector corresponds to + // an input/output of one `allgather_into_tensor` operation. + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work allgather_into_tensor_coalesced( + @ByRef TensorVector outputTensors, + @ByRef TensorVector inputTensors, + @Const @ByRef(nullValue = "c10d::AllgatherOptions()") AllgatherOptions opts); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work allgather_into_tensor_coalesced( + @ByRef TensorVector outputTensors, + @ByRef TensorVector inputTensors); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work gather( + @StdVector TensorVector outputTensors, + @ByRef TensorVector inputTensors, + @Const @ByRef(nullValue = "c10d::GatherOptions()") GatherOptions opts); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work gather( + @StdVector TensorVector outputTensors, + @ByRef TensorVector inputTensors); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work scatter( + @ByRef TensorVector outputTensors, + @StdVector TensorVector inputTensors, + @Const @ByRef(nullValue = "c10d::ScatterOptions()") ScatterOptions opts); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work scatter( + @ByRef TensorVector outputTensors, + @StdVector TensorVector inputTensors); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work reduce_scatter( + @ByRef TensorVector outputTensors, + @StdVector TensorVector inputTensors, + @Const @ByRef(nullValue = "c10d::ReduceScatterOptions()") ReduceScatterOptions opts); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work reduce_scatter( + @ByRef TensorVector outputTensors, + @StdVector TensorVector inputTensors); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work _reduce_scatter_base( + @ByRef Tensor outputBuffer, + @ByRef Tensor inputBuffer, + @Const @ByRef(nullValue = "c10d::ReduceScatterOptions()") ReduceScatterOptions opts); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work _reduce_scatter_base( + @ByRef Tensor outputBuffer, + @ByRef Tensor inputBuffer); + + // This function is a coalesced version of `reduce_scatter_tensor` (currently + // still named as `_reduce_scatter_base`). Each tensor in the vector + // corresponds to an input/output of one `reduce_scatter_tensor` operation. 
+ public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work reduce_scatter_tensor_coalesced( + @ByRef TensorVector outputTensors, + @ByRef TensorVector inputTensors, + @Const @ByRef(nullValue = "c10d::ReduceScatterOptions()") ReduceScatterOptions opts); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work reduce_scatter_tensor_coalesced( + @ByRef TensorVector outputTensors, + @ByRef TensorVector inputTensors); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work alltoall_base( + @ByRef Tensor outputBuffer, + @ByRef Tensor inputBuffer, + @Cast("std::vector*") @ByRef LongVector outputSplitSizes, + @Cast("std::vector*") @ByRef LongVector inputSplitSizes, + @Const @ByRef(nullValue = "c10d::AllToAllOptions()") AllToAllOptions opts); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work alltoall_base( + @ByRef Tensor outputBuffer, + @ByRef Tensor inputBuffer, + @Cast("std::vector*") @ByRef LongVector outputSplitSizes, + @Cast("std::vector*") @ByRef LongVector inputSplitSizes); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work alltoall( + @ByRef TensorVector outputTensors, + @ByRef TensorVector inputTensors, + @Const @ByRef(nullValue = "c10d::AllToAllOptions()") AllToAllOptions opts); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work alltoall( + @ByRef TensorVector outputTensors, + @ByRef TensorVector inputTensors); + + public native void monitoredBarrier( + @Const @ByRef BarrierOptions opts, + @Cast("bool") boolean wait_all_ranks/*=false*/); + public native void monitoredBarrier( + @Const @ByRef BarrierOptions opts); + + // Agrees on an initial sequence number for the whole group by having rank 0 + // create it and broadcast it to other ranks using the store. Only implemented + // for GLOO and NCCL backends currently. + public native void setSequenceNumberForGroup(); + + // Retrieves the current sequence number for the whole group, which should be + // in sync. If the returned number is not consistent across the group, it + // may indicate that there is some sort of collective desynchronization. 
+ public native @Cast("uint64_t") long getSequenceNumberForGroup(); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work send( + @ByRef TensorVector tensors, + int dstRank, + int tag); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work recv( + @ByRef TensorVector tensors, + int srcRank, + int tag); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work recvAnysource( + @ByRef TensorVector tensors, + int tag); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work barrier( + @Const @ByRef(nullValue = "c10d::BarrierOptions()") BarrierOptions opts); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work barrier(); + + public native @IntrusivePtr("c10d::ProcessGroup::Options") @Cast({"", "c10::intrusive_ptr&"}) Options getOptions(); + + public native @Cast("bool") boolean hasBackends(); + + public native void setBackend( + DeviceType deviceType, + BackendType backendType, + @Const @ByRef DistributedBackendOptional backend); + public native void setBackend( + @Cast("c10::DeviceType") byte deviceType, + @Cast("c10d::ProcessGroup::BackendType") byte backendType, + @Const @ByRef DistributedBackendOptional backend); + + public native @IntrusivePtr("c10d::Backend") @Cast({"", "c10::intrusive_ptr&"}) DistributedBackend getDefaultBackend(); + + public native @IntrusivePtr("c10d::Backend") @Cast({"", "c10::intrusive_ptr&"}) DistributedBackend getBackend(DeviceType deviceType); + public native @IntrusivePtr("c10d::Backend") @Cast({"", "c10::intrusive_ptr&"}) DistributedBackend getBackend(@Cast("c10::DeviceType") byte deviceType); + + public native @IntrusivePtr("c10d::Backend") @Cast({"", "c10::intrusive_ptr&"}) DistributedBackend getBackend(BackendType backendType); + + // Return device types supported by this ProcessGroup. + // Note: the return type is `Device` rather than `DeviceType` for the purpose + // of easy comparison at Python level. The `Device` will have default index + // (-1). + public native @StdVector Device getDeviceTypes(); + + public native void registerOnCompletionHook( + @ByRef(true) WorkInfoConsumer hook); + + public native void waitForPendingWorks(); + + public native @Cast("bool") boolean hasHooks(); + + public native @StdString BytePointer getGroupName(); + public native void setGroupName(@StdString BytePointer name); + public native void setGroupName(@StdString String name); + public native @StdString BytePointer getGroupDesc(); + public native void setGroupDesc(@StdString BytePointer name); + public native void setGroupDesc(@StdString String name); + public native void enableCollectivesTiming(); + + public native void release_resources(); + + // ProcessGroups optionally can be "bound" to a specific device. + // Currently this is only for nccl and allows for some opt-in + // optimizations such as automatic use of ncclCommSplit. The device + // is specified in `init_process_group` and eventually makes it + // here and then down into the actual backend instances. 
+ public native @ByVal DeviceOptional getBoundDeviceId(); + + public native void setBoundDeviceId(@ByVal DeviceOptional device); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DistNetworkError.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ProcessGroupCppCommHookInterface.java similarity index 54% rename from pytorch/src/gen/java/org/bytedeco/pytorch/DistNetworkError.java rename to pytorch/src/gen/java/org/bytedeco/pytorch/ProcessGroupCppCommHookInterface.java index 0e71b470845..3c168378478 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DistNetworkError.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ProcessGroupCppCommHookInterface.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,16 +13,20 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; + // namespace detail - -// Used for errors originating from the TCP/IP stack and not from collective -// libraries. These turn into DistNetworkError when they cross into Python. -@Namespace("c10") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class DistNetworkError extends DistError { +// This CppCommHook interface only requires implementing runHook method that +// potentially uses a state. +@Name("c10d::CppCommHookInterface >") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class ProcessGroupCppCommHookInterface extends CommHookInterface { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ - public DistNetworkError(Pointer p) { super(p); } + public ProcessGroupCppCommHookInterface(Pointer p) { super(p); } + + public native @ByVal Tensor parseHookResult(@Const @ByRef IValue result); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ProcessGroupGloo.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ProcessGroupGloo.java new file mode 100644 index 00000000000..ddaa8456620 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ProcessGroupGloo.java @@ -0,0 +1,439 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +// ProcessGroupGloo implements Gloo bindings for c10d. +// +// All functions on this class are expected to be called in the same +// order across processes in the group. This is the only way that we +// can guarantee to match up the same calls across processes. For +// multi-threaded usage of process groups, you can use consider using +// multiple process group instances. 
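The new c10d bindings above can be exercised directly from Java. Below is a minimal sketch, assuming a Store, a ProcessGroup, and a populated TensorVector are obtained elsewhere (for example via the ProcessGroupGloo subclass that follows); it uses only class and method signatures declared in this diff, and the DistributedSketch class itself is illustrative rather than part of the generated sources.

import org.bytedeco.pytorch.PrefixStore;
import org.bytedeco.pytorch.ProcessGroup;
import org.bytedeco.pytorch.Store;
import org.bytedeco.pytorch.TensorVector;
import org.bytedeco.pytorch.Work;

public class DistributedSketch {
    // Scope every key of an existing c10d store under a per-job prefix,
    // using the PrefixStore(String, Store) constructor declared above.
    static Store scopedStore(Store base, String jobId) {
        return new PrefixStore(jobId + "/", base);
    }

    // Allreduce a batch of tensors across all ranks with the default
    // AllreduceOptions (SUM in c10d) and block until the collective
    // completes on this rank; c10d::Work::wait() is mapped to _wait()
    // in these bindings, as with SendWork and RecvWork above.
    static void allreduceSum(ProcessGroup pg, TensorVector tensors) {
        System.out.printf("rank %d of %d%n", pg.getRank(), pg.getSize());
        Work work = pg.allreduce(tensors);
        work._wait();
    }
}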
+// +// The Gloo algorithms that this class calls into are cached by their +// signature (see description of AlgorithmKey above). This cache works +// as follows: every function call instantiates an AlgorithmKey and +// looks in the cache for existing entries. If there is one, it is +// removed from the cache and returned to the caller. If there are +// none, a new entry is created and returned. If an entry was created +// before, but is still in use, the call will block and wait until the +// entry is returned to the cache. +// +// In the future, we hope to extend this to allow multiple entries per +// key, to enable parallelism for a single key. The number of entries +// per key must always be identical for all processes. This maximum +// number can be automatically tuned, but only if we let a single +// process take charge, and have it broadcast the limits. +// +@Namespace("c10d") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class ProcessGroupGloo extends DistributedBackend { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public ProcessGroupGloo(Pointer p) { super(p); } + + // AsyncWork is the Gloo specific superclass for asynchronous work items. + // We can split asynchronous work into 3 phases: + // 1) Sanity checks and prepare input (e.g. memcpy) + // 2) Run operation on background thread + // 3) Synchronize with completion on foreground thread + // + // There is state to be shared between these 3 phases and all of this state + // is captured in the AsyncWork class and its derivatives. + // + // Note: while we are porting operations to use new style collectives, there + // is a split between operations using the existing caching approach and + // operations using the new AsyncWork base class. Over time we will port + // all operations and perform needed cleanup. + // + // FIXME: This probably should be called WorkGloo since the work is executed + // in sync mode by a background thread. + @NoOffset public static class AsyncWork extends Work { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public AsyncWork(Pointer p) { super(p); } + + + public static native void execute(@IntrusivePtr("c10d::ProcessGroupGloo::AsyncWork") @Cast({"", "c10::intrusive_ptr&"}) AsyncWork work); + + public native void run(); + + public native @ByVal TensorVector result(); + + public native @IntrusivePtr("c10::ivalue::Future") @Cast({"", "c10::intrusive_ptr&"}) Future getFuture(); + public native @Cast("uint64_t") long getSequencenumber(); + } + + // Wrap c10d store as Gloo store + @NoOffset public static class GlooStore extends Store { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ + public GlooStore(Pointer p) { super(p); } + + public GlooStore(@IntrusivePtr("c10d::Store") @Cast({"", "c10::intrusive_ptr&"}) Store store) { super((Pointer)null); allocate(store); } + private native void allocate(@IntrusivePtr("c10d::Store") @Cast({"", "c10::intrusive_ptr&"}) Store store); + + public native void setUint(@StdString BytePointer key, @Cast("const std::vector*") @ByRef ByteVector value); + public native void setUint(@StdString String key, @Cast("const std::vector*") @ByRef ByteVector value); + + public native void set(@StdString BytePointer key, @Cast("const std::vector*") @ByRef ByteVector value); + public native void set(@StdString String key, @Cast("const std::vector*") @ByRef ByteVector value); + + public native @ByVal @Cast("std::vector*") ByteVector getUint(@StdString BytePointer key); + public native @ByVal @Cast("std::vector*") ByteVector getUint(@StdString String key); + + public native @ByVal @Cast("std::vector*") ByteVector get(@StdString BytePointer key); + public native @ByVal @Cast("std::vector*") ByteVector get(@StdString String key); + + public native @Name("wait") void _wait(@Const @ByRef StringVector keys); + + public native @Name("wait") void _wait( + @Const @ByRef StringVector keys, + @Const @ByRef Milliseconds timeout); + +// #ifdef GLOO_STORE_HAS_STORE_V2 + public native @Cast("bool") boolean has_v2_support(); + + public native @Cast("std::vector*") @StdVector ByteVector multi_get( + @Const @ByRef StringVector keys); + + public native void multi_set( + @Const @ByRef StringVector keys, + @Cast("std::vector*") @StdVector ByteVector values); + + public native void append(@StdString BytePointer key, @Cast("const std::vector*") @ByRef ByteVector value); + public native void append(@StdString String key, @Cast("const std::vector*") @ByRef ByteVector value); + + public native @Cast("int64_t") long add(@StdString BytePointer key, @Cast("int64_t") long value); + public native @Cast("int64_t") long add(@StdString String key, @Cast("int64_t") long value); + } + + // For send and recv operations there is no need to pass them to the + // thread pool as they are entirely completed by the device thread. + // This work object is used to synchronize completion of the send or + // recv operation. It keeps a reference to the tensor it is + // operating on to prevent it from being deallocated while the + // operation is still in flight. + @NoOffset public static class SendWork extends Work { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public SendWork(Pointer p) { super(p); } + + public SendWork( + @ByRef Tensor tensor, + @UniquePtr org.bytedeco.pytorch.gloo.UnboundBuffer buffer, + @Cast("uint64_t") long seq) { super((Pointer)null); allocate(tensor, buffer, seq); } + private native void allocate( + @ByRef Tensor tensor, + @UniquePtr org.bytedeco.pytorch.gloo.UnboundBuffer buffer, + @Cast("uint64_t") long seq); + + public native @Cast("bool") @Name("wait") boolean _wait(@ByVal(nullValue = "std::chrono::milliseconds(kNoTimeout)") Milliseconds timeout); + public native @Cast("bool") @Name("wait") boolean _wait(); + + public native void abort(); + + public native @Cast("uint64_t") long getSequencenumber(); + } + + @NoOffset public static class RecvWork extends Work { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ + public RecvWork(Pointer p) { super(p); } + + public RecvWork( + @ByRef Tensor tensor, + @UniquePtr org.bytedeco.pytorch.gloo.UnboundBuffer buffer, + OpType opType, + @Cast("uint64_t") long seq, + @Cast("const char*") BytePointer profilingTitle/*=nullptr*/) { super((Pointer)null); allocate(tensor, buffer, opType, seq, profilingTitle); } + private native void allocate( + @ByRef Tensor tensor, + @UniquePtr org.bytedeco.pytorch.gloo.UnboundBuffer buffer, + OpType opType, + @Cast("uint64_t") long seq, + @Cast("const char*") BytePointer profilingTitle/*=nullptr*/); + public RecvWork( + @ByRef Tensor tensor, + @UniquePtr org.bytedeco.pytorch.gloo.UnboundBuffer buffer, + OpType opType, + @Cast("uint64_t") long seq) { super((Pointer)null); allocate(tensor, buffer, opType, seq); } + private native void allocate( + @ByRef Tensor tensor, + @UniquePtr org.bytedeco.pytorch.gloo.UnboundBuffer buffer, + OpType opType, + @Cast("uint64_t") long seq); + public RecvWork( + @ByRef Tensor tensor, + @UniquePtr org.bytedeco.pytorch.gloo.UnboundBuffer buffer, + @Cast("c10d::OpType") byte opType, + @Cast("uint64_t") long seq, + String profilingTitle/*=nullptr*/) { super((Pointer)null); allocate(tensor, buffer, opType, seq, profilingTitle); } + private native void allocate( + @ByRef Tensor tensor, + @UniquePtr org.bytedeco.pytorch.gloo.UnboundBuffer buffer, + @Cast("c10d::OpType") byte opType, + @Cast("uint64_t") long seq, + String profilingTitle/*=nullptr*/); + public RecvWork( + @ByRef Tensor tensor, + @UniquePtr org.bytedeco.pytorch.gloo.UnboundBuffer buffer, + @Cast("c10d::OpType") byte opType, + @Cast("uint64_t") long seq) { super((Pointer)null); allocate(tensor, buffer, opType, seq); } + private native void allocate( + @ByRef Tensor tensor, + @UniquePtr org.bytedeco.pytorch.gloo.UnboundBuffer buffer, + @Cast("c10d::OpType") byte opType, + @Cast("uint64_t") long seq); + + public native int sourceRank(); + + public native @Cast("bool") @Name("wait") boolean _wait(@ByVal(nullValue = "std::chrono::milliseconds(kNoTimeout)") Milliseconds timeout); + public native @Cast("bool") @Name("wait") boolean _wait(); + + public native void abort(); + + public native @Cast("uint64_t") long getSequencenumber(); + } + + @NoOffset public static class Options extends DistributedBackend.Options { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ + public Options(Pointer p) { super(p); } + + public Options( + @ByVal(nullValue = "std::chrono::milliseconds(kBackendDefaultTimeout)") Milliseconds timeout) { super((Pointer)null); allocate(timeout); } + @IntrusivePtr @Name("c10::make_intrusive") private native void allocate( + @ByVal(nullValue = "std::chrono::milliseconds(kBackendDefaultTimeout)") Milliseconds timeout); + public Options() { super((Pointer)null); allocate(); } + @IntrusivePtr @Name("c10::make_intrusive") private native void allocate(); + + // return intrusive_ptr of the object + public static native @IntrusivePtr("c10d::ProcessGroupGloo::Options") @Cast({"", "c10::intrusive_ptr&"}) Options create( + @ByVal(nullValue = "std::chrono::milliseconds(kBackendDefaultTimeout)") Milliseconds timeout); + public static native @IntrusivePtr("c10d::ProcessGroupGloo::Options") @Cast({"", "c10::intrusive_ptr&"}) Options create(); + + public native @ByRef GlooDeviceVector devices(); public native Options devices(GlooDeviceVector setter); + public native int threads(); public native Options threads(int setter); + } + + public native @StdString BytePointer getBackendName(); + + // Helper functions to create a new device object. + // They are static functions on this class to keep them logically + // separate from the rest of the code base (e.g. torch/csrc/distributed). + + // Create new device instance for specific interface. + public static native @SharedPtr @ByVal org.bytedeco.pytorch.gloo.Device createDeviceForInterface( + @StdString BytePointer interface_name); + public static native @SharedPtr @ByVal org.bytedeco.pytorch.gloo.Device createDeviceForInterface( + @StdString String interface_name); + + // Create new device instance for specific hostname or address. + public static native @SharedPtr @ByVal org.bytedeco.pytorch.gloo.Device createDeviceForHostname( + @StdString BytePointer hostname); + public static native @SharedPtr @ByVal org.bytedeco.pytorch.gloo.Device createDeviceForHostname( + @StdString String hostname); + + // Create new device instance. + // It tries to resolve this machine's hostname and bind to that address. + // If that fails (i.e. the hostname doesn't resolve to an address), it + // falls back to binding to the loopback address. + public static native @SharedPtr @ByVal org.bytedeco.pytorch.gloo.Device createDefaultDevice(); + + // Create ProcessGroupGloo instance. 
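Purely as an illustration of how the factory methods above fit together, here is an editorial sketch (not part of the generated file). `store`, `rank`, and `size` are assumed to come from the application, for example a c10d store shared by all ranks, and the single-element `GlooDeviceVector` constructor is assumed to follow the usual JavaCPP vector convenience pattern.

// ---- illustrative sketch (editorial, not part of this patch) ----
import org.bytedeco.pytorch.*;

public class GlooInit {
    static ProcessGroupGloo createGroup(Store store, int rank, int size) {
        // Resolve this host's name, falling back to the loopback address.
        org.bytedeco.pytorch.gloo.Device device = ProcessGroupGloo.createDefaultDevice();

        ProcessGroupGloo.Options opts = ProcessGroupGloo.Options.create();
        opts.devices(new GlooDeviceVector(device)); // at least one device is expected
        opts.threads(2);                            // background worker threads

        return new ProcessGroupGloo(store, rank, size, opts);
    }
}
// ---- end sketch ----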
+ + + public ProcessGroupGloo( + @IntrusivePtr("c10d::Store") @Cast({"", "c10::intrusive_ptr&"}) Store store, + int rank, + int size, + @IntrusivePtr("c10d::ProcessGroupGloo::Options") @Cast({"", "c10::intrusive_ptr&"}) Options options/*=c10d::ProcessGroupGloo::Options::create()*/) { super((Pointer)null); allocate(store, rank, size, options); } + @IntrusivePtr @Name("c10::make_intrusive") private native void allocate( + @IntrusivePtr("c10d::Store") @Cast({"", "c10::intrusive_ptr&"}) Store store, + int rank, + int size, + @IntrusivePtr("c10d::ProcessGroupGloo::Options") @Cast({"", "c10::intrusive_ptr&"}) Options options/*=c10d::ProcessGroupGloo::Options::create()*/); + public ProcessGroupGloo( + @IntrusivePtr("c10d::Store") @Cast({"", "c10::intrusive_ptr&"}) Store store, + int rank, + int size) { super((Pointer)null); allocate(store, rank, size); } + @IntrusivePtr @Name("c10::make_intrusive") private native void allocate( + @IntrusivePtr("c10d::Store") @Cast({"", "c10::intrusive_ptr&"}) Store store, + int rank, + int size); + + public native @IntrusivePtr("c10d::ProcessGroupGloo::Options") @Cast({"", "c10::intrusive_ptr&"}) Options getOptions(); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work broadcast( + @ByRef TensorVector tensors, + @Const @ByRef(nullValue = "c10d::BroadcastOptions()") BroadcastOptions opts); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work broadcast( + @ByRef TensorVector tensors); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work allreduce( + @ByRef TensorVector tensors, + @Const @ByRef(nullValue = "c10d::AllreduceOptions()") AllreduceOptions opts); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work allreduce( + @ByRef TensorVector tensors); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work allreduce_sparse( + @ByRef TensorVector tensors, + @Const @ByRef(nullValue = "c10d::AllreduceOptions()") AllreduceOptions opts); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work allreduce_sparse( + @ByRef TensorVector tensors); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work allreduce_coalesced( + @ByRef TensorVector tensors, + @Const @ByRef(nullValue = "c10d::AllreduceCoalescedOptions()") AllreduceCoalescedOptions opts); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work allreduce_coalesced( + @ByRef TensorVector tensors); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work reduce( + @ByRef TensorVector tensors, + @Const @ByRef(nullValue = "c10d::ReduceOptions()") ReduceOptions opts); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work reduce( + @ByRef TensorVector tensors); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work _reduce_scatter_base( + @ByRef Tensor outputTensor, + @ByRef Tensor inputTensor, + @Const @ByRef(nullValue = "c10d::ReduceScatterOptions()") ReduceScatterOptions opts); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work _reduce_scatter_base( + @ByRef Tensor outputTensor, + @ByRef Tensor inputTensor); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work _allgather_base( + @ByRef Tensor output_tensor, + @ByRef Tensor input_tensor, + @Const @ByRef(nullValue = "c10d::AllgatherOptions()") AllgatherOptions opts); + public 
native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work _allgather_base( + @ByRef Tensor output_tensor, + @ByRef Tensor input_tensor); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work allgather( + @StdVector TensorVector outputs, + @ByRef TensorVector inputs, + @Const @ByRef(nullValue = "c10d::AllgatherOptions()") AllgatherOptions opts); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work allgather( + @StdVector TensorVector outputs, + @ByRef TensorVector inputs); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work allgather_coalesced( + @StdVector TensorVector output_lists, + @ByRef TensorVector input_list, + @Const @ByRef(nullValue = "c10d::AllgatherOptions()") AllgatherOptions opts); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work allgather_coalesced( + @StdVector TensorVector output_lists, + @ByRef TensorVector input_list); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work allgather_into_tensor_coalesced( + @ByRef TensorVector outputs, + @ByRef TensorVector inputs, + @Const @ByRef(nullValue = "c10d::AllgatherOptions()") AllgatherOptions opts); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work allgather_into_tensor_coalesced( + @ByRef TensorVector outputs, + @ByRef TensorVector inputs); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work gather( + @StdVector TensorVector outputs, + @ByRef TensorVector inputs, + @Const @ByRef(nullValue = "c10d::GatherOptions()") GatherOptions opts); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work gather( + @StdVector TensorVector outputs, + @ByRef TensorVector inputs); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work scatter( + @ByRef TensorVector outputs, + @StdVector TensorVector inputs, + @Const @ByRef(nullValue = "c10d::ScatterOptions()") ScatterOptions opts); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work scatter( + @ByRef TensorVector outputs, + @StdVector TensorVector inputs); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work reduce_scatter( + @ByRef TensorVector outputs, + @StdVector TensorVector inputs, + @Const @ByRef(nullValue = "c10d::ReduceScatterOptions()") ReduceScatterOptions opts); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work reduce_scatter( + @ByRef TensorVector outputs, + @StdVector TensorVector inputs); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work reduce_scatter_tensor_coalesced( + @ByRef TensorVector outputTensors, + @ByRef TensorVector inputTensors, + @Const @ByRef(nullValue = "c10d::ReduceScatterOptions()") ReduceScatterOptions opts); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work reduce_scatter_tensor_coalesced( + @ByRef TensorVector outputTensors, + @ByRef TensorVector inputTensors); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work alltoall_base( + @ByRef Tensor outputTensor, + @ByRef Tensor inputTensor, + @Cast("std::vector*") @ByRef LongVector outputCounts, + @Cast("std::vector*") @ByRef LongVector inputCounts, + @Const @ByRef(nullValue = "c10d::AllToAllOptions()") AllToAllOptions opts); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work 
alltoall_base( + @ByRef Tensor outputTensor, + @ByRef Tensor inputTensor, + @Cast("std::vector*") @ByRef LongVector outputCounts, + @Cast("std::vector*") @ByRef LongVector inputCounts); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work send( + @ByRef TensorVector tensors, + int dstRank, + int tag); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work recv( + @ByRef TensorVector tensors, + int srcRank, + int tag); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work recvAnysource( + @ByRef TensorVector tensors, + int tag); + + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work barrier( + @Const @ByRef(nullValue = "c10d::BarrierOptions()") BarrierOptions opts); + public native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work barrier(); + + public native void enableCollectivesTiming(); + + public native @UniquePtr org.bytedeco.pytorch.gloo.Store _getStore(); + + // Similar to barrier(), but blocks rank 0 until all other ranks have + // acknowledged that they are alive (through send/recv from rank 0). Rank 0 + // is able to report all failed ranks if waitAllRanks = true, otherwise + // reports the first rank it detected as failed. + public native void monitoredBarrier( + @Const @ByRef(nullValue = "c10d::BarrierOptions()") BarrierOptions opts, + @Cast("bool") boolean waitAllRanks/*=false*/); + public native void monitoredBarrier(); + + // Agrees on an initial sequence number for the whole group by having rank 0 + // create it and broadcast it to other ranks using the store. + public native void setSequenceNumberForGroup(); + + // Retrieves the current sequence number for the whole group, which should be + // in sync. If the returned number is not consistent across the group, it + // may indicate that there is some sort of collective desynchronization. 
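A short editorial sketch (not part of the generated file) of how the monitored barrier and group sequence number described here might be used as a liveness and desynchronization check; the default-constructed `BarrierOptions` and the `RuntimeException` raised for a failed rank are assumptions about the bindings rather than guarantees.

// ---- illustrative sketch (editorial, not part of this patch) ----
import org.bytedeco.pytorch.*;

public class GlooHealthCheck {
    static void check(ProcessGroupGloo pg) {
        try {
            // With waitAllRanks = true, rank 0 reports every unresponsive rank
            // instead of only the first one it detects.
            pg.monitoredBarrier(new BarrierOptions(), /*waitAllRanks=*/ true);
        } catch (RuntimeException e) {
            System.err.println("monitored barrier failed: " + e.getMessage());
        }

        // Rank 0 creates the initial sequence number and shares it via the
        // store; diverging values later hint at mismatched collective calls.
        pg.setSequenceNumberForGroup();
        long seq = pg.getSequenceNumberForGroup();
        System.out.println("group sequence number: " + seq);
    }
}
// ---- end sketch ----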
+ public native @Cast("uint64_t") long getSequenceNumberForGroup(); + + public native int getNumThreads(); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ProfileIValueOp.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ProfileIValueOp.java index 2830a978b8b..85fb4b26b9e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ProfileIValueOp.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ProfileIValueOp.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ProfilerConfig.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ProfilerConfig.java index e5bef484936..afc301a4d69 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ProfilerConfig.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ProfilerConfig.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Property.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Property.java index 1e658418524..1d9f55c297a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Property.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Property.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -26,8 +27,8 @@ public class Property extends TreeView { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public Property(Pointer p) { super(p); } - public Property(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public Property(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @ByVal Ident name(); public native @ByVal Def getter(); public native @ByVal DefMaybe setter(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PropertyList.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PropertyList.java index 7de4d3eb0ba..cec4f8b13db 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PropertyList.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PropertyList.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -25,8 +26,8 @@ public class PropertyList extends TreeView { public PropertyList(Pointer p) { super(p); } - public PropertyList(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public PropertyList(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @ByVal @Cast("torch::jit::List::iterator*") PropertyListIterator begin(); public native @ByVal @Cast("torch::jit::List::iterator*") PropertyListIterator end(); public native @Cast("bool") boolean empty(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PropertyListIterator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PropertyListIterator.java index 74af287e720..87b6657e1ae 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PropertyListIterator.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PropertyListIterator.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,8 +25,8 @@ public class PropertyListIterator extends Pointer { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public PropertyListIterator(Pointer p) { super(p); } - public PropertyListIterator(@ByVal @Cast("torch::jit::TreeList::const_iterator*") TreeRef it) { super((Pointer)null); allocate(it); } - private native void allocate(@ByVal @Cast("torch::jit::TreeList::const_iterator*") TreeRef it); + public PropertyListIterator(@ByVal @Cast("torch::jit::TreeList::const_iterator*") Tree it) { super((Pointer)null); allocate(it); } + private native void allocate(@ByVal @Cast("torch::jit::TreeList::const_iterator*") Tree it); public native @Cast("bool") @Name("operator !=") boolean notEquals(@Const @ByRef PropertyListIterator rhs); public native @Cast("bool") @Name("operator ==") boolean equals(@Const @ByRef PropertyListIterator rhs); public native @ByVal @Name("operator *") Property multiply(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PropertyListMaybe.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PropertyListMaybe.java index 286d9661af2..1060abc8e61 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PropertyListMaybe.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PropertyListMaybe.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,8 +25,8 @@ public class PropertyListMaybe extends TreeView { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public PropertyListMaybe(Pointer p) { super(p); } - public PropertyListMaybe(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public PropertyListMaybe(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); /* implicit */ public PropertyListMaybe(@Const @ByRef PropertyList tree) { super((Pointer)null); allocate(tree); } private native void allocate(@Const @ByRef PropertyList tree); public native @Cast("bool") boolean present(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PropertyVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PropertyVector.java index 83f91b11ca8..8375ccee67f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PropertyVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PropertyVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PyInterpreter.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PyInterpreter.java index e0fc645caf0..5929c340bd5 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PyInterpreter.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PyInterpreter.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PyInterpreterVTable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PyInterpreterVTable.java index ffdc0777002..5c679ffc74b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PyInterpreterVTable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PyInterpreterVTable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -123,7 +124,7 @@ public class PyInterpreterVTable extends Pointer { // Perform a detach by deferring to the __torch_dispatch__ implementation of // detach, which will also arrange for the 
PyObject to get copied in this // situation - public native @ByVal TensorImplPtr detach( + public native @IntrusivePtr("c10::TensorImpl") @Cast({"", "c10::intrusive_ptr&"}) TensorImpl detach( @Const TensorImpl self); // Invoke the Python boxed fallback dispatch to go back into Python @@ -139,11 +140,15 @@ public class PyInterpreterVTable extends Pointer { public native void python_op_registration_trampoline( @Const @ByRef OperatorHandle op, DispatchKey arg1, - IValueVector stack); + @ByVal DispatchKeySet keyset, + IValueVector stack, + @Cast("bool") boolean with_keyset); public native void python_op_registration_trampoline( @Const @ByRef OperatorHandle op, @Cast("c10::DispatchKey") short arg1, - IValueVector stack); + @ByVal DispatchKeySet keyset, + IValueVector stack, + @Cast("bool") boolean with_keyset); public native void throw_abstract_impl_not_imported_error( @StdString BytePointer opname, @@ -174,16 +179,68 @@ public native void python_dispatcher( public native @ByVal SymIntArrayRef sym_strides(@Const TensorImpl self); public native @ByVal SymInt sym_storage_offset(@Const TensorImpl self); - public native void trace_gpu_event_creation(@Cast("uintptr_t") long event); - public native void trace_gpu_event_deletion(@Cast("uintptr_t") long event); - public native void trace_gpu_event_record(@Cast("uintptr_t") long event, @Cast("uintptr_t") long stream); - public native void trace_gpu_event_wait(@Cast("uintptr_t") long event, @Cast("uintptr_t") long stream); - public native void trace_gpu_memory_allocation(@Cast("uintptr_t") long ptr); - public native void trace_gpu_memory_deallocation(@Cast("uintptr_t") long ptr); - public native void trace_gpu_stream_creation(@Cast("uintptr_t") long stream); - public native void trace_gpu_device_synchronization(); - public native void trace_gpu_stream_synchronization(@Cast("uintptr_t") long stream); - public native void trace_gpu_event_synchronization(@Cast("uintptr_t") long event); + public native void trace_gpu_event_creation( + DeviceType device_type, + @Cast("uintptr_t") long event); + public native void trace_gpu_event_creation( + @Cast("c10::DeviceType") byte device_type, + @Cast("uintptr_t") long event); + public native void trace_gpu_event_deletion( + DeviceType device_type, + @Cast("uintptr_t") long event); + public native void trace_gpu_event_deletion( + @Cast("c10::DeviceType") byte device_type, + @Cast("uintptr_t") long event); + public native void trace_gpu_event_record( + DeviceType device_type, + @Cast("uintptr_t") long event, + @Cast("uintptr_t") long stream); + public native void trace_gpu_event_record( + @Cast("c10::DeviceType") byte device_type, + @Cast("uintptr_t") long event, + @Cast("uintptr_t") long stream); + public native void trace_gpu_event_wait( + DeviceType device_type, + @Cast("uintptr_t") long event, + @Cast("uintptr_t") long stream); + public native void trace_gpu_event_wait( + @Cast("c10::DeviceType") byte device_type, + @Cast("uintptr_t") long event, + @Cast("uintptr_t") long stream); + public native void trace_gpu_memory_allocation( + DeviceType device_type, + @Cast("uintptr_t") long ptr); + public native void trace_gpu_memory_allocation( + @Cast("c10::DeviceType") byte device_type, + @Cast("uintptr_t") long ptr); + public native void trace_gpu_memory_deallocation( + DeviceType device_type, + @Cast("uintptr_t") long ptr); + public native void trace_gpu_memory_deallocation( + @Cast("c10::DeviceType") byte device_type, + @Cast("uintptr_t") long ptr); + public native void trace_gpu_stream_creation( + DeviceType 
device_type, + @Cast("uintptr_t") long stream); + public native void trace_gpu_stream_creation( + @Cast("c10::DeviceType") byte device_type, + @Cast("uintptr_t") long stream); + public native void trace_gpu_device_synchronization( + DeviceType device_type); + public native void trace_gpu_device_synchronization( + @Cast("c10::DeviceType") byte device_type); + public native void trace_gpu_stream_synchronization( + DeviceType device_type, + @Cast("uintptr_t") long stream); + public native void trace_gpu_stream_synchronization( + @Cast("c10::DeviceType") byte device_type, + @Cast("uintptr_t") long stream); + public native void trace_gpu_event_synchronization( + DeviceType device_type, + @Cast("uintptr_t") long event); + public native void trace_gpu_event_synchronization( + @Cast("c10::DeviceType") byte device_type, + @Cast("uintptr_t") long event); public native void reset_backward_hooks(@Const TensorImpl self); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PyObjectHolder.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PyObjectHolder.java index f7716d9b2b5..d3a22121bd7 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PyObjectHolder.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PyObjectHolder.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -29,7 +30,7 @@ public class PyObjectHolder extends Pointer { public native @Cast("PyObject*") Pointer getPyObject(); public native @ByVal InferredType tryToInferType(); - public native @ByVal IValue toIValue(@Const @ByRef Type.TypePtr type, @ByVal(nullValue = "c10::optional(c10::nullopt)") IntOptional N); + public native @ByVal IValue toIValue(@Const @ByRef Type.TypePtr type, @ByVal(nullValue = "std::optional(c10::nullopt)") IntOptional N); public native @ByVal IValue toIValue(@Const @ByRef Type.TypePtr type); public native @StdString BytePointer toStr(); public native @ByVal TensorVector extractTensors(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PyObjectHolderPtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PyObjectHolderPtr.java deleted file mode 100644 index 3c6e5c71da5..00000000000 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PyObjectHolderPtr.java +++ /dev/null @@ -1,154 +0,0 @@ -// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE - -package org.bytedeco.pytorch; - -import org.bytedeco.pytorch.Allocator; -import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; -import org.bytedeco.pytorch.Module; -import org.bytedeco.javacpp.annotation.Cast; -import java.nio.*; -import org.bytedeco.javacpp.*; -import org.bytedeco.javacpp.annotation.*; - -import static org.bytedeco.javacpp.presets.javacpp.*; -import static org.bytedeco.openblas.global.openblas_nolapack.*; -import static org.bytedeco.openblas.global.openblas.*; - -import static org.bytedeco.pytorch.global.torch.*; - - -@Name("c10::intrusive_ptr") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class PyObjectHolderPtr extends Pointer { - static { Loader.load(); } - /** 
Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ - public PyObjectHolderPtr(Pointer p) { super(p); } - /** Native array allocator. Access with {@link Pointer#position(long)}. */ - public PyObjectHolderPtr(long size) { super((Pointer)null); allocateArray(size); } - private native void allocateArray(long size); - @Override public PyObjectHolderPtr position(long position) { - return (PyObjectHolderPtr)super.position(position); - } - @Override public PyObjectHolderPtr getPointer(long i) { - return new PyObjectHolderPtr((Pointer)this).offsetAddress(i); - } - - - public PyObjectHolderPtr() { super((Pointer)null); allocate(); } - @NoException(true) private native void allocate(); - - public PyObjectHolderPtr(@ByVal @Cast("std::nullptr_t*") PointerPointer arg0) { super((Pointer)null); allocate(arg0); } - @NoException(true) private native void allocate(@ByVal @Cast("std::nullptr_t*") PointerPointer arg0); - - // This constructor will not increase the ref counter for you. - // We use the tagged dispatch mechanism to explicitly mark this constructor - // to not increase the refcount - public PyObjectHolderPtr(PyObjectHolder target, @ByVal DontIncreaseRefcount arg1) { super((Pointer)null); allocate(target, arg1); } - @NoException(true) private native void allocate(PyObjectHolder target, @ByVal DontIncreaseRefcount arg1); - - - - public PyObjectHolderPtr(@ByRef(true) PyObjectHolderPtr rhs) { super((Pointer)null); allocate(rhs); } - @NoException(true) private native void allocate(@ByRef(true) PyObjectHolderPtr rhs); - - public native @ByRef @Name("operator =") @NoException(true) PyObjectHolderPtr put(@ByRef(true) PyObjectHolderPtr rhs); - - // Assignment is implemented using copy and swap. That's safe for self - // assignment. - // NOLINTNEXTLINE(bugprone-unhandled-self-assignment) - - public native @NoException(true) PyObjectHolder get(); - - public native @ByRef @Name("operator *") @NoException(true) PyObjectHolder multiply(); - - public native @Name("operator ->") @NoException(true) PyObjectHolder access(); - - public native @Cast("bool") @Name("operator bool") @NoException(true) boolean asBoolean(); - - public native @NoException(true) void reset(); - - public native @NoException(true) void swap(@ByRef PyObjectHolderPtr rhs); - - // We do a lot of null-pointer checks in our code, good to have this be cheap. - public native @Cast("bool") @NoException(true) boolean defined(); - - public native @Cast("uint32_t") @NoException(true) int use_count(); - - public native @Cast("uint32_t") @NoException(true) int weak_use_count(); - - public native @Cast("bool") @NoException(true) boolean unique(); - - /** - * Returns an owning (!) pointer to the underlying object and makes the - * intrusive_ptr instance invalid. That means the refcount is not decreased. - * You *must* put the returned pointer back into a intrusive_ptr using - * intrusive_ptr::reclaim(ptr) to properly destruct it. - * This is helpful for C APIs. - */ - public native @NoException(true) PyObjectHolder release(); - - /** - * Takes an owning pointer to TTarget* and creates an intrusive_ptr that takes - * over ownership. That means the refcount is not increased. - * This is the counter-part to intrusive_ptr::release() and the pointer - * passed in *must* have been created using intrusive_ptr::release(). - */ - public static native @ByVal PyObjectHolderPtr reclaim(PyObjectHolder owning_ptr); - - /** - * Takes an owning pointer to TTarget* and creates an intrusive_ptr - * representing a new reference, i.e. 
the raw pointer retains - * ownership. - */ - public static native @ByVal PyObjectHolderPtr reclaim_copy(PyObjectHolder owning_ptr); - - /** - * Allocate a heap object with args and wrap it inside a intrusive_ptr and - * incref. This is a helper function to let make_intrusive() access private - * intrusive_ptr constructors. - */ - - /** - * Turn a new instance of TTarget (e.g., literally allocated - * using new TTarget(...) into an intrusive_ptr. If possible, - * use intrusive_ptr::make instead which statically guarantees - * that the allocation was done properly. - * - * At the moment, the only reason this method exists is because - * pybind11 holder types expect to be able to allocate in - * this way (because pybind11 handles the new allocation itself). - */ - public static native @ByVal PyObjectHolderPtr unsafe_steal_from_new(PyObjectHolder raw_ptr); - - /** - * Turn an instance of TTarget that should not be reference counted - * (e.g., allocated into an arena with placement new) into an - * intrusive_ptr. This is gratuitously unsafe and should only be - * used if you can guarantee that the pointer will not escape and be - * refcounted as normal. - * - * {@code expected_decrefs} is a debugging parameter: it indicates the - * number of strong owners the intrusive_ptr_target in question is - * expected to get. In most use cases, this will likely be 1. - * - * The reason this method exists is for manually sharing - * StorageImpls across Tensors in the static runtime. It needs - * access to private intrusive_ptr members so that the refcounts can - * be initialized to custom values. - */ - public static native @ByVal PyObjectHolderPtr unsafe_adapt_non_heap_allocated( - PyObjectHolder raw_ptr, - @Cast("uint32_t") int expected_decrefs); - - /** - * Turn a **non-owning raw pointer** to an intrusive_ptr. It is - * the moral equivalent of enable_shared_from_this on a shared pointer. - * - * This method is only valid for objects that are already live. If - * you are looking for the moral equivalent of unique_ptr(T*) - * constructor, see steal_from_new. 
- * - * TODO: https://github.com/pytorch/pytorch/issues/56482 - */ - public static native @ByVal PyObjectHolderPtr unsafe_reclaim_from_nonowning(PyObjectHolder raw_ptr); -} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PyObjectType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PyObjectType.java index 982f30e59fc..b158e9680c8 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PyObjectType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PyObjectType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PyObjectTypePtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PyObjectTypePtr.java index 236e94e29f8..5c2d0fd3d83 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PyObjectTypePtr.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PyObjectTypePtr.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PyObject_TorchDispatchMode.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PyObject_TorchDispatchMode.java new file mode 100644 index 00000000000..603f1d5f35b --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PyObject_TorchDispatchMode.java @@ -0,0 +1,36 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +// A newtype wrapper around SafePyObject for type safety when a python object +// represents a specific type. Note that `T` is only used as a tag and isn't +// actually used for any true purpose. +@Name("c10::SafePyObjectT") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class PyObject_TorchDispatchMode extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ + public PyObject_TorchDispatchMode(Pointer p) { super(p); } + + public PyObject_TorchDispatchMode(@Cast("PyObject*") Pointer data, PyInterpreter pyinterpreter) { super((Pointer)null); allocate(data, pyinterpreter); } + private native void allocate(@Cast("PyObject*") Pointer data, PyInterpreter pyinterpreter); + + + +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PyObject_TorchDispatchModeOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PyObject_TorchDispatchModeOptional.java new file mode 100644 index 00000000000..ec41f490085 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PyObject_TorchDispatchModeOptional.java @@ -0,0 +1,36 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + +@NoOffset @Name("std::optional > >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class PyObject_TorchDispatchModeOptional extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public PyObject_TorchDispatchModeOptional(Pointer p) { super(p); } + public PyObject_TorchDispatchModeOptional(PyObject_TorchDispatchMode value) { this(); put(value); } + public PyObject_TorchDispatchModeOptional() { allocate(); } + private native void allocate(); + public native @Name("operator =") @ByRef PyObject_TorchDispatchModeOptional put(@ByRef PyObject_TorchDispatchModeOptional x); + + public native boolean has_value(); + public native void reset(); + public native @Name("value") @SharedPtr("c10::SafePyObjectT") PyObject_TorchDispatchMode get(); + @ValueSetter public native PyObject_TorchDispatchModeOptional put(@SharedPtr("c10::SafePyObjectT") PyObject_TorchDispatchMode value); +} + diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PyTorchStreamReader.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PyTorchStreamReader.java index 33519a27a09..6da53a21b22 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PyTorchStreamReader.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PyTorchStreamReader.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PythonDispatcherTLS.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PythonDispatcherTLS.java index ea87033da3b..5896600e1ed 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PythonDispatcherTLS.java +++ 
b/pytorch/src/gen/java/org/bytedeco/pytorch/PythonDispatcherTLS.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PythonOp.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PythonOp.java index b387d8eb4cb..fe369d8eeba 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PythonOp.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PythonOp.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/PythonTorchFunctionTLS.java b/pytorch/src/gen/java/org/bytedeco/pytorch/PythonTorchFunctionTLS.java index 33e525e06ff..f7b575a06d2 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/PythonTorchFunctionTLS.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/PythonTorchFunctionTLS.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/QEngineVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/QEngineVector.java index c0de4d12594..bd23fd67d35 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/QEngineVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/QEngineVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/QSchemeType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/QSchemeType.java index de1db939eac..3a23f184c2a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/QSchemeType.java +++ 
b/pytorch/src/gen/java/org/bytedeco/pytorch/QSchemeType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/QSchemeTypePtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/QSchemeTypePtr.java index 728ced32907..7c7bada4d49 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/QSchemeTypePtr.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/QSchemeTypePtr.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/QTensorImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/QTensorImpl.java index e57e5a8668e..1c45b4baf42 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/QTensorImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/QTensorImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/QualifiedName.java b/pytorch/src/gen/java/org/bytedeco/pytorch/QualifiedName.java index 52516bf8eb0..6560613f1f3 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/QualifiedName.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/QualifiedName.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/QualifiedNameOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/QualifiedNameOptional.java index ccb808c235e..c751af06bee 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/QualifiedNameOptional.java +++ 
b/pytorch/src/gen/java/org/bytedeco/pytorch/QualifiedNameOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class QualifiedNameOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Quantizer.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Quantizer.java index af4d2184621..df22782da48 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Quantizer.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Quantizer.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -48,10 +49,11 @@ public class Quantizer extends Pointer { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ public Quantizer(Pointer p) { super(p); } + // NOLINTNEXTLINE(cppcoreguidelines-avoid-const-or-ref-data-members) @MemberGetter public native ScalarType scalar_type_(); // Copied from torch/csrc/jit/ir/scope.h - public native @ByVal QuantizerPtr intrusive_from_this(); + public native @IntrusivePtr("at::Quantizer") @Cast({"", "c10::intrusive_ptr&"}) Quantizer intrusive_from_this(); /** * Each concrete Quantizer type should have a unique QScheme type. @@ -78,5 +80,5 @@ public class Quantizer extends Pointer { /** * Compare against {@code other} for equality. 
*/ - public native @Cast("bool") boolean equalTo(@ByVal QuantizerPtr other); + public native @Cast("bool") boolean equalTo(@IntrusivePtr("at::Quantizer") @Cast({"", "c10::intrusive_ptr&"}) Quantizer other); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/QuantizerPtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/QuantizerPtr.java deleted file mode 100644 index fa1f8539ca3..00000000000 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/QuantizerPtr.java +++ /dev/null @@ -1,154 +0,0 @@ -// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE - -package org.bytedeco.pytorch; - -import org.bytedeco.pytorch.Allocator; -import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; -import org.bytedeco.pytorch.Module; -import org.bytedeco.javacpp.annotation.Cast; -import java.nio.*; -import org.bytedeco.javacpp.*; -import org.bytedeco.javacpp.annotation.*; - -import static org.bytedeco.javacpp.presets.javacpp.*; -import static org.bytedeco.openblas.global.openblas_nolapack.*; -import static org.bytedeco.openblas.global.openblas.*; - -import static org.bytedeco.pytorch.global.torch.*; - // namespace detail - -@Name("c10::intrusive_ptr") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class QuantizerPtr extends Pointer { - static { Loader.load(); } - /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ - public QuantizerPtr(Pointer p) { super(p); } - /** Native array allocator. Access with {@link Pointer#position(long)}. */ - public QuantizerPtr(long size) { super((Pointer)null); allocateArray(size); } - private native void allocateArray(long size); - @Override public QuantizerPtr position(long position) { - return (QuantizerPtr)super.position(position); - } - @Override public QuantizerPtr getPointer(long i) { - return new QuantizerPtr((Pointer)this).offsetAddress(i); - } - - - public QuantizerPtr() { super((Pointer)null); allocate(); } - @NoException(true) private native void allocate(); - - public QuantizerPtr(@ByVal @Cast("std::nullptr_t*") PointerPointer arg0) { super((Pointer)null); allocate(arg0); } - @NoException(true) private native void allocate(@ByVal @Cast("std::nullptr_t*") PointerPointer arg0); - - // This constructor will not increase the ref counter for you. - // We use the tagged dispatch mechanism to explicitly mark this constructor - // to not increase the refcount - public QuantizerPtr(Quantizer target, @ByVal DontIncreaseRefcount arg1) { super((Pointer)null); allocate(target, arg1); } - @NoException(true) private native void allocate(Quantizer target, @ByVal DontIncreaseRefcount arg1); - - - - public QuantizerPtr(@ByRef(true) QuantizerPtr rhs) { super((Pointer)null); allocate(rhs); } - @NoException(true) private native void allocate(@ByRef(true) QuantizerPtr rhs); - - public native @ByRef @Name("operator =") @NoException(true) QuantizerPtr put(@ByRef(true) QuantizerPtr rhs); - - // Assignment is implemented using copy and swap. That's safe for self - // assignment. 
- // NOLINTNEXTLINE(bugprone-unhandled-self-assignment) - - public native @NoException(true) Quantizer get(); - - public native @ByRef @Name("operator *") @NoException(true) Quantizer multiply(); - - public native @Name("operator ->") @NoException(true) Quantizer access(); - - public native @Cast("bool") @Name("operator bool") @NoException(true) boolean asBoolean(); - - public native @NoException(true) void reset(); - - public native @NoException(true) void swap(@ByRef QuantizerPtr rhs); - - // We do a lot of null-pointer checks in our code, good to have this be cheap. - public native @Cast("bool") @NoException(true) boolean defined(); - - public native @Cast("uint32_t") @NoException(true) int use_count(); - - public native @Cast("uint32_t") @NoException(true) int weak_use_count(); - - public native @Cast("bool") @NoException(true) boolean unique(); - - /** - * Returns an owning (!) pointer to the underlying object and makes the - * intrusive_ptr instance invalid. That means the refcount is not decreased. - * You *must* put the returned pointer back into a intrusive_ptr using - * intrusive_ptr::reclaim(ptr) to properly destruct it. - * This is helpful for C APIs. - */ - public native @NoException(true) Quantizer release(); - - /** - * Takes an owning pointer to TTarget* and creates an intrusive_ptr that takes - * over ownership. That means the refcount is not increased. - * This is the counter-part to intrusive_ptr::release() and the pointer - * passed in *must* have been created using intrusive_ptr::release(). - */ - public static native @ByVal QuantizerPtr reclaim(Quantizer owning_ptr); - - /** - * Takes an owning pointer to TTarget* and creates an intrusive_ptr - * representing a new reference, i.e. the raw pointer retains - * ownership. - */ - public static native @ByVal QuantizerPtr reclaim_copy(Quantizer owning_ptr); - - /** - * Allocate a heap object with args and wrap it inside a intrusive_ptr and - * incref. This is a helper function to let make_intrusive() access private - * intrusive_ptr constructors. - */ - - /** - * Turn a new instance of TTarget (e.g., literally allocated - * using new TTarget(...) into an intrusive_ptr. If possible, - * use intrusive_ptr::make instead which statically guarantees - * that the allocation was done properly. - * - * At the moment, the only reason this method exists is because - * pybind11 holder types expect to be able to allocate in - * this way (because pybind11 handles the new allocation itself). - */ - public static native @ByVal QuantizerPtr unsafe_steal_from_new(Quantizer raw_ptr); - - /** - * Turn an instance of TTarget that should not be reference counted - * (e.g., allocated into an arena with placement new) into an - * intrusive_ptr. This is gratuitously unsafe and should only be - * used if you can guarantee that the pointer will not escape and be - * refcounted as normal. - * - * {@code expected_decrefs} is a debugging parameter: it indicates the - * number of strong owners the intrusive_ptr_target in question is - * expected to get. In most use cases, this will likely be 1. - * - * The reason this method exists is for manually sharing - * StorageImpls across Tensors in the static runtime. It needs - * access to private intrusive_ptr members so that the refcounts can - * be initialized to custom values. - */ - public static native @ByVal QuantizerPtr unsafe_adapt_non_heap_allocated( - Quantizer raw_ptr, - @Cast("uint32_t") int expected_decrefs); - - /** - * Turn a **non-owning raw pointer** to an intrusive_ptr. 
It is - * the moral equivalent of enable_shared_from_this on a shared pointer. - * - * This method is only valid for objects that are already live. If - * you are looking for the moral equivalent of unique_ptr(T*) - * constructor, see steal_from_new. - * - * TODO: https://github.com/pytorch/pytorch/issues/56482 - */ - public static native @ByVal QuantizerPtr unsafe_reclaim_from_nonowning(Quantizer raw_ptr); -} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/QuantizerType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/QuantizerType.java index b9b86e2bd23..79a6fb11fb1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/QuantizerType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/QuantizerType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/QuantizerTypePtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/QuantizerTypePtr.java index d0827cf7ecf..152429f49b1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/QuantizerTypePtr.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/QuantizerTypePtr.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/RMSprop.java b/pytorch/src/gen/java/org/bytedeco/pytorch/RMSprop.java index f7fa66c1fe9..fb39feade4e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/RMSprop.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/RMSprop.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/RMSpropOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/RMSpropOptions.java index bcca215615b..cb2531958c5 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/RMSpropOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/RMSpropOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import 
org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace torch diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/RMSpropParamState.java b/pytorch/src/gen/java/org/bytedeco/pytorch/RMSpropParamState.java index 9f73ca865de..af42e90fab6 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/RMSpropParamState.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/RMSpropParamState.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/RNNBaseMode.java b/pytorch/src/gen/java/org/bytedeco/pytorch/RNNBaseMode.java index 749d618fb5f..00ea6beff5f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/RNNBaseMode.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/RNNBaseMode.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/RNNCellImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/RNNCellImpl.java index cd339533abd..2c8e52b7234 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/RNNCellImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/RNNCellImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace detail @@ -22,7 +23,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** An Elman RNN cell with tanh or ReLU non-linearity. - * See https://pytorch.org/docs/master/nn.html#torch.nn.RNNCell to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.RNNCell to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::RNNCellOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/RNNCellImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/RNNCellImplBase.java index d583b23f38c..b22092ba16b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/RNNCellImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/RNNCellImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/RNNCellImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/RNNCellImplCloneable.java index 3522281f3a9..a58e1c1f3e9 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/RNNCellImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/RNNCellImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class RNNCellImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/RNNCellOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/RNNCellOptions.java index 482658321fd..d3a04bda289 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/RNNCellOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/RNNCellOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace detail diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/RNNCellOptionsBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/RNNCellOptionsBase.java index 08fd5763686..4e1019cf99c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/RNNCellOptionsBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/RNNCellOptionsBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/RNNImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/RNNImpl.java index 05783f8520a..702775cccf3 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/RNNImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/RNNImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace detail @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ RNN ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** A multi-layer Elman RNN module with Tanh or ReLU activation. - * See https://pytorch.org/docs/master/generated/torch.nn.RNN.html to learn + * See https://pytorch.org/docs/main/generated/torch.nn.RNN.html to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::RNNOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/RNNImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/RNNImplBase.java index bce83d6acf2..8eb3aa2350b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/RNNImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/RNNImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/RNNImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/RNNImplCloneable.java index 776ee61e974..c017a7ddf13 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/RNNImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/RNNImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class RNNImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/RNNNonlinearity.java b/pytorch/src/gen/java/org/bytedeco/pytorch/RNNNonlinearity.java index e48de79a201..bdc907438cb 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/RNNNonlinearity.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/RNNNonlinearity.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/RNNOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/RNNOptions.java index 8081ee7e0ad..c4e256e2061 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/RNNOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/RNNOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace detail diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/RNNOptionsBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/RNNOptionsBase.java index 73a37d4191c..09929e6b500 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/RNNOptionsBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/RNNOptionsBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ROCmBackwardPassGuard.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ROCmBackwardPassGuard.java index be8c1517090..fbdc76a18d8 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ROCmBackwardPassGuard.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ROCmBackwardPassGuard.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ 
import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/RReLUFuncOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/RReLUFuncOptions.java index 7e6ed3820a7..08fcd746644 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/RReLUFuncOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/RReLUFuncOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/RReLUImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/RReLUImpl.java index 62a8fd6ea67..4469701b13e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/RReLUImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/RReLUImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ RReLU ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies the RReLU function element-wise. - * See https://pytorch.org/docs/master/nn.html#torch.nn.RReLU to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.RReLU to learn * about the exact behavior of this module. * * See the documentation for {@code torch::nn::RReLUOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/RReLUImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/RReLUImplCloneable.java index 608e5eeb68c..d161895294d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/RReLUImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/RReLUImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class RReLUImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/RReLUOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/RReLUOptions.java index fe86aa3baf1..b5e96aa6bd3 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/RReLUOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/RReLUOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/RRefInterface.java b/pytorch/src/gen/java/org/bytedeco/pytorch/RRefInterface.java index d43af897b95..aa5700f01d9 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/RRefInterface.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/RRefInterface.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/RRefInterfacePtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/RRefInterfacePtr.java deleted file mode 100644 index 4a230f04d3c..00000000000 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/RRefInterfacePtr.java +++ /dev/null @@ -1,154 +0,0 @@ -// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE - -package org.bytedeco.pytorch; - -import org.bytedeco.pytorch.Allocator; -import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; -import org.bytedeco.pytorch.Module; -import org.bytedeco.javacpp.annotation.Cast; -import java.nio.*; -import org.bytedeco.javacpp.*; -import org.bytedeco.javacpp.annotation.*; - -import static org.bytedeco.javacpp.presets.javacpp.*; -import static org.bytedeco.openblas.global.openblas_nolapack.*; -import static org.bytedeco.openblas.global.openblas.*; - -import static org.bytedeco.pytorch.global.torch.*; - - -@Name("c10::intrusive_ptr") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class RRefInterfacePtr extends Pointer { - static { Loader.load(); } - /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ - public RRefInterfacePtr(Pointer p) { super(p); } - /** Native array allocator. Access with {@link Pointer#position(long)}. 
*/ - public RRefInterfacePtr(long size) { super((Pointer)null); allocateArray(size); } - private native void allocateArray(long size); - @Override public RRefInterfacePtr position(long position) { - return (RRefInterfacePtr)super.position(position); - } - @Override public RRefInterfacePtr getPointer(long i) { - return new RRefInterfacePtr((Pointer)this).offsetAddress(i); - } - - - public RRefInterfacePtr() { super((Pointer)null); allocate(); } - @NoException(true) private native void allocate(); - - public RRefInterfacePtr(@ByVal @Cast("std::nullptr_t*") PointerPointer arg0) { super((Pointer)null); allocate(arg0); } - @NoException(true) private native void allocate(@ByVal @Cast("std::nullptr_t*") PointerPointer arg0); - - // This constructor will not increase the ref counter for you. - // We use the tagged dispatch mechanism to explicitly mark this constructor - // to not increase the refcount - public RRefInterfacePtr(RRefInterface target, @ByVal DontIncreaseRefcount arg1) { super((Pointer)null); allocate(target, arg1); } - @NoException(true) private native void allocate(RRefInterface target, @ByVal DontIncreaseRefcount arg1); - - - - public RRefInterfacePtr(@ByRef(true) RRefInterfacePtr rhs) { super((Pointer)null); allocate(rhs); } - @NoException(true) private native void allocate(@ByRef(true) RRefInterfacePtr rhs); - - public native @ByRef @Name("operator =") @NoException(true) RRefInterfacePtr put(@ByRef(true) RRefInterfacePtr rhs); - - // Assignment is implemented using copy and swap. That's safe for self - // assignment. - // NOLINTNEXTLINE(bugprone-unhandled-self-assignment) - - public native @NoException(true) RRefInterface get(); - - public native @ByRef @Name("operator *") @NoException(true) RRefInterface multiply(); - - public native @Name("operator ->") @NoException(true) RRefInterface access(); - - public native @Cast("bool") @Name("operator bool") @NoException(true) boolean asBoolean(); - - public native @NoException(true) void reset(); - - public native @NoException(true) void swap(@ByRef RRefInterfacePtr rhs); - - // We do a lot of null-pointer checks in our code, good to have this be cheap. - public native @Cast("bool") @NoException(true) boolean defined(); - - public native @Cast("uint32_t") @NoException(true) int use_count(); - - public native @Cast("uint32_t") @NoException(true) int weak_use_count(); - - public native @Cast("bool") @NoException(true) boolean unique(); - - /** - * Returns an owning (!) pointer to the underlying object and makes the - * intrusive_ptr instance invalid. That means the refcount is not decreased. - * You *must* put the returned pointer back into a intrusive_ptr using - * intrusive_ptr::reclaim(ptr) to properly destruct it. - * This is helpful for C APIs. - */ - public native @NoException(true) RRefInterface release(); - - /** - * Takes an owning pointer to TTarget* and creates an intrusive_ptr that takes - * over ownership. That means the refcount is not increased. - * This is the counter-part to intrusive_ptr::release() and the pointer - * passed in *must* have been created using intrusive_ptr::release(). - */ - public static native @ByVal RRefInterfacePtr reclaim(RRefInterface owning_ptr); - - /** - * Takes an owning pointer to TTarget* and creates an intrusive_ptr - * representing a new reference, i.e. the raw pointer retains - * ownership. - */ - public static native @ByVal RRefInterfacePtr reclaim_copy(RRefInterface owning_ptr); - - /** - * Allocate a heap object with args and wrap it inside a intrusive_ptr and - * incref. 
This is a helper function to let make_intrusive() access private - * intrusive_ptr constructors. - */ - - /** - * Turn a new instance of TTarget (e.g., literally allocated - * using new TTarget(...) into an intrusive_ptr. If possible, - * use intrusive_ptr::make instead which statically guarantees - * that the allocation was done properly. - * - * At the moment, the only reason this method exists is because - * pybind11 holder types expect to be able to allocate in - * this way (because pybind11 handles the new allocation itself). - */ - public static native @ByVal RRefInterfacePtr unsafe_steal_from_new(RRefInterface raw_ptr); - - /** - * Turn an instance of TTarget that should not be reference counted - * (e.g., allocated into an arena with placement new) into an - * intrusive_ptr. This is gratuitously unsafe and should only be - * used if you can guarantee that the pointer will not escape and be - * refcounted as normal. - * - * {@code expected_decrefs} is a debugging parameter: it indicates the - * number of strong owners the intrusive_ptr_target in question is - * expected to get. In most use cases, this will likely be 1. - * - * The reason this method exists is for manually sharing - * StorageImpls across Tensors in the static runtime. It needs - * access to private intrusive_ptr members so that the refcounts can - * be initialized to custom values. - */ - public static native @ByVal RRefInterfacePtr unsafe_adapt_non_heap_allocated( - RRefInterface raw_ptr, - @Cast("uint32_t") int expected_decrefs); - - /** - * Turn a **non-owning raw pointer** to an intrusive_ptr. It is - * the moral equivalent of enable_shared_from_this on a shared pointer. - * - * This method is only valid for objects that are already live. If - * you are looking for the moral equivalent of unique_ptr(T*) - * constructor, see steal_from_new. 
- * - * TODO: https://github.com/pytorch/pytorch/issues/56482 - */ - public static native @ByVal RRefInterfacePtr unsafe_reclaim_from_nonowning(RRefInterface raw_ptr); -} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/RRefSingleElementType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/RRefSingleElementType.java index 88a29c8ba2f..90f2a324816 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/RRefSingleElementType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/RRefSingleElementType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/RRefType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/RRefType.java index 530d169fedb..6231933d816 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/RRefType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/RRefType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Raise.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Raise.java index dd027cacb29..3e08ecd1340 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Raise.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Raise.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,8 +25,8 @@ public class Raise extends Stmt { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public Raise(Pointer p) { super(p); } - public Raise(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public Raise(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @ByVal Expr expr(); public static native @ByVal Raise create(@Const @ByRef SourceRange range, @Const @ByRef Expr expr); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/RandomSampler.java b/pytorch/src/gen/java/org/bytedeco/pytorch/RandomSampler.java index 1e5d1839f17..f0c11a6e65a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/RandomSampler.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/RandomSampler.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -36,7 +37,7 @@ public class RandomSampler extends Sampler { private native void allocate(@Cast("int64_t") long size); /** Resets the {@code RandomSampler} to a new set of indices. */ - public native void reset(@ByVal(nullValue = "c10::optional(c10::nullopt)") SizeTOptional new_size); + public native void reset(@ByVal(nullValue = "std::optional(c10::nullopt)") SizeTOptional new_size); public native void reset(); /** Returns the next batch of indices. 
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/RangeValue.java b/pytorch/src/gen/java/org/bytedeco/pytorch/RangeValue.java index 9738f81a748..5e25ad746b2 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/RangeValue.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/RangeValue.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -29,12 +30,12 @@ public RangeValue( @Const @ByRef SourceRange loc, @ByRef GraphFunction m, @ByVal ValueVector input, - @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional static_len) { super((Pointer)null); allocate(loc, m, input, static_len); } + @ByVal(nullValue = "std::optional(c10::nullopt)") LongOptional static_len) { super((Pointer)null); allocate(loc, m, input, static_len); } private native void allocate( @Const @ByRef SourceRange loc, @ByRef GraphFunction m, @ByVal ValueVector input, - @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional static_len); + @ByVal(nullValue = "std::optional(c10::nullopt)") LongOptional static_len); public RangeValue( @Const @ByRef SourceRange loc, @ByRef GraphFunction m, diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ReLU6Impl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ReLU6Impl.java index c6e998fd3b1..8f00e473124 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ReLU6Impl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ReLU6Impl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ReLU6 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies the ReLU6 function element-wise. - * See https://pytorch.org/docs/master/nn.html#torch.nn.ReLU6 to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.ReLU6 to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::ReLU6Options} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ReLU6ImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ReLU6ImplCloneable.java index 3869b5543be..ba3a217f6c7 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ReLU6ImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ReLU6ImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class ReLU6ImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ReLU6Options.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ReLU6Options.java index bd301894d27..26a163ff1d0 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ReLU6Options.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ReLU6Options.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ReLUImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ReLUImpl.java index 611e4434ba2..f6cd6ff2f86 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ReLUImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ReLUImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ReLU ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies the ReLU function element-wise. - * See https://pytorch.org/docs/master/nn.html#torch.nn.ReLU to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.ReLU to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::ReLUOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ReLUImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ReLUImplCloneable.java index 97c8c8e0155..f497b150897 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ReLUImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ReLUImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class ReLUImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ReLUOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ReLUOptions.java index 4629f8fcfc6..86e6d0356f0 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ReLUOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ReLUOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ReadAdapterInterface.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ReadAdapterInterface.java index 6ea233b1063..cf9385f8527 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ReadAdapterInterface.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ReadAdapterInterface.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ReadAdapterInterfaceVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ReadAdapterInterfaceVector.java index 08c6e30df71..bbf6b51e04c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ReadAdapterInterfaceVector.java +++ 
b/pytorch/src/gen/java/org/bytedeco/pytorch/ReadAdapterInterfaceVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/RecordFunction.java b/pytorch/src/gen/java/org/bytedeco/pytorch/RecordFunction.java index ee9a588789a..51d3d67dd1d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/RecordFunction.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/RecordFunction.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -57,6 +58,8 @@ public class RecordFunction extends Pointer { public native @ByVal IValueArrayRef inputs(); + public native @ByVal StringIValueMap kwinputs(); + public native @Const @ByRef IValueVector outputs(); public native void setOutputs(@ByRef(true) IValueVector outputs); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/RecordFunctionCallbacksEntry.java b/pytorch/src/gen/java/org/bytedeco/pytorch/RecordFunctionCallbacksEntry.java index a82b1646554..bf7286a93ca 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/RecordFunctionCallbacksEntry.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/RecordFunctionCallbacksEntry.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/RecordFunctionGuard.java b/pytorch/src/gen/java/org/bytedeco/pytorch/RecordFunctionGuard.java index 877c9c96092..8c69be05197 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/RecordFunctionGuard.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/RecordFunctionGuard.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import 
static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/RecordFunctionHandleIntList.java b/pytorch/src/gen/java/org/bytedeco/pytorch/RecordFunctionHandleIntList.java index 929fc1c030e..69153adbe2b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/RecordFunctionHandleIntList.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/RecordFunctionHandleIntList.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/RecordFunctionHandleIntPair.java b/pytorch/src/gen/java/org/bytedeco/pytorch/RecordFunctionHandleIntPair.java index 0f381b8882f..6ea21bcf37e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/RecordFunctionHandleIntPair.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/RecordFunctionHandleIntPair.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/RecordFunctionTLS.java b/pytorch/src/gen/java/org/bytedeco/pytorch/RecordFunctionTLS.java index 04ca80241ee..f06393a580d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/RecordFunctionTLS.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/RecordFunctionTLS.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ReduceLROnPlateauScheduler.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ReduceLROnPlateauScheduler.java new file mode 100644 index 00000000000..9f94f0c3bc3 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ReduceLROnPlateauScheduler.java @@ -0,0 +1,182 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + 
+import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +@Namespace("torch::optim") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class ReduceLROnPlateauScheduler extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public ReduceLROnPlateauScheduler(Pointer p) { super(p); } + + public enum SchedulerMode { min(0), max(1); + + public final int value; + private SchedulerMode(int v) { this.value = v; } + private SchedulerMode(SchedulerMode e) { this.value = e.value; } + public SchedulerMode intern() { for (SchedulerMode e : values()) if (e.value == value) return e; return this; } + @Override public String toString() { return intern().name(); } + } + public enum ThresholdMode { rel(0), abs(1); + + public final int value; + private ThresholdMode(int v) { this.value = v; } + private ThresholdMode(ThresholdMode e) { this.value = e.value; } + public ThresholdMode intern() { for (ThresholdMode e : values()) if (e.value == value) return e; return this; } + @Override public String toString() { return intern().name(); } + } + public ReduceLROnPlateauScheduler( + @ByRef Optimizer optimizer, + SchedulerMode mode/*=torch::optim::ReduceLROnPlateauScheduler::min*/, + float factor/*=0.1*/, + int patience/*=10*/, + double threshold/*=1e-4*/, + ThresholdMode threshold_mode/*=torch::optim::ReduceLROnPlateauScheduler::rel*/, + int cooldown/*=0*/, + @StdVector FloatPointer min_lr/*=std::vector()*/, + double eps/*=1e-8*/, + @Cast("bool") boolean verbose/*=false*/) { super((Pointer)null); allocate(optimizer, mode, factor, patience, threshold, threshold_mode, cooldown, min_lr, eps, verbose); } + private native void allocate( + @ByRef Optimizer optimizer, + SchedulerMode mode/*=torch::optim::ReduceLROnPlateauScheduler::min*/, + float factor/*=0.1*/, + int patience/*=10*/, + double threshold/*=1e-4*/, + ThresholdMode threshold_mode/*=torch::optim::ReduceLROnPlateauScheduler::rel*/, + int cooldown/*=0*/, + @StdVector FloatPointer min_lr/*=std::vector()*/, + double eps/*=1e-8*/, + @Cast("bool") boolean verbose/*=false*/); + public ReduceLROnPlateauScheduler( + @ByRef Optimizer optimizer) { super((Pointer)null); allocate(optimizer); } + private native void allocate( + @ByRef Optimizer optimizer); + public ReduceLROnPlateauScheduler( + @ByRef Optimizer optimizer, + @Cast("torch::optim::ReduceLROnPlateauScheduler::SchedulerMode") int mode/*=torch::optim::ReduceLROnPlateauScheduler::min*/, + float factor/*=0.1*/, + int patience/*=10*/, + double threshold/*=1e-4*/, + @Cast("torch::optim::ReduceLROnPlateauScheduler::ThresholdMode") int threshold_mode/*=torch::optim::ReduceLROnPlateauScheduler::rel*/, + int cooldown/*=0*/, + @StdVector FloatBuffer min_lr/*=std::vector()*/, + double eps/*=1e-8*/, + @Cast("bool") boolean verbose/*=false*/) { super((Pointer)null); allocate(optimizer, mode, factor, patience, threshold, threshold_mode, cooldown, min_lr, eps, verbose); } + private native void allocate( + @ByRef Optimizer optimizer, + @Cast("torch::optim::ReduceLROnPlateauScheduler::SchedulerMode") int mode/*=torch::optim::ReduceLROnPlateauScheduler::min*/, + float factor/*=0.1*/, + int patience/*=10*/, + double threshold/*=1e-4*/, + 
@Cast("torch::optim::ReduceLROnPlateauScheduler::ThresholdMode") int threshold_mode/*=torch::optim::ReduceLROnPlateauScheduler::rel*/, + int cooldown/*=0*/, + @StdVector FloatBuffer min_lr/*=std::vector()*/, + double eps/*=1e-8*/, + @Cast("bool") boolean verbose/*=false*/); + public ReduceLROnPlateauScheduler( + @ByRef Optimizer optimizer, + SchedulerMode mode/*=torch::optim::ReduceLROnPlateauScheduler::min*/, + float factor/*=0.1*/, + int patience/*=10*/, + double threshold/*=1e-4*/, + ThresholdMode threshold_mode/*=torch::optim::ReduceLROnPlateauScheduler::rel*/, + int cooldown/*=0*/, + @StdVector float[] min_lr/*=std::vector()*/, + double eps/*=1e-8*/, + @Cast("bool") boolean verbose/*=false*/) { super((Pointer)null); allocate(optimizer, mode, factor, patience, threshold, threshold_mode, cooldown, min_lr, eps, verbose); } + private native void allocate( + @ByRef Optimizer optimizer, + SchedulerMode mode/*=torch::optim::ReduceLROnPlateauScheduler::min*/, + float factor/*=0.1*/, + int patience/*=10*/, + double threshold/*=1e-4*/, + ThresholdMode threshold_mode/*=torch::optim::ReduceLROnPlateauScheduler::rel*/, + int cooldown/*=0*/, + @StdVector float[] min_lr/*=std::vector()*/, + double eps/*=1e-8*/, + @Cast("bool") boolean verbose/*=false*/); + public ReduceLROnPlateauScheduler( + @ByRef Optimizer optimizer, + @Cast("torch::optim::ReduceLROnPlateauScheduler::SchedulerMode") int mode/*=torch::optim::ReduceLROnPlateauScheduler::min*/, + float factor/*=0.1*/, + int patience/*=10*/, + double threshold/*=1e-4*/, + @Cast("torch::optim::ReduceLROnPlateauScheduler::ThresholdMode") int threshold_mode/*=torch::optim::ReduceLROnPlateauScheduler::rel*/, + int cooldown/*=0*/, + @StdVector FloatPointer min_lr/*=std::vector()*/, + double eps/*=1e-8*/, + @Cast("bool") boolean verbose/*=false*/) { super((Pointer)null); allocate(optimizer, mode, factor, patience, threshold, threshold_mode, cooldown, min_lr, eps, verbose); } + private native void allocate( + @ByRef Optimizer optimizer, + @Cast("torch::optim::ReduceLROnPlateauScheduler::SchedulerMode") int mode/*=torch::optim::ReduceLROnPlateauScheduler::min*/, + float factor/*=0.1*/, + int patience/*=10*/, + double threshold/*=1e-4*/, + @Cast("torch::optim::ReduceLROnPlateauScheduler::ThresholdMode") int threshold_mode/*=torch::optim::ReduceLROnPlateauScheduler::rel*/, + int cooldown/*=0*/, + @StdVector FloatPointer min_lr/*=std::vector()*/, + double eps/*=1e-8*/, + @Cast("bool") boolean verbose/*=false*/); + public ReduceLROnPlateauScheduler( + @ByRef Optimizer optimizer, + SchedulerMode mode/*=torch::optim::ReduceLROnPlateauScheduler::min*/, + float factor/*=0.1*/, + int patience/*=10*/, + double threshold/*=1e-4*/, + ThresholdMode threshold_mode/*=torch::optim::ReduceLROnPlateauScheduler::rel*/, + int cooldown/*=0*/, + @StdVector FloatBuffer min_lr/*=std::vector()*/, + double eps/*=1e-8*/, + @Cast("bool") boolean verbose/*=false*/) { super((Pointer)null); allocate(optimizer, mode, factor, patience, threshold, threshold_mode, cooldown, min_lr, eps, verbose); } + private native void allocate( + @ByRef Optimizer optimizer, + SchedulerMode mode/*=torch::optim::ReduceLROnPlateauScheduler::min*/, + float factor/*=0.1*/, + int patience/*=10*/, + double threshold/*=1e-4*/, + ThresholdMode threshold_mode/*=torch::optim::ReduceLROnPlateauScheduler::rel*/, + int cooldown/*=0*/, + @StdVector FloatBuffer min_lr/*=std::vector()*/, + double eps/*=1e-8*/, + @Cast("bool") boolean verbose/*=false*/); + public ReduceLROnPlateauScheduler( + @ByRef Optimizer optimizer, + 
@Cast("torch::optim::ReduceLROnPlateauScheduler::SchedulerMode") int mode/*=torch::optim::ReduceLROnPlateauScheduler::min*/, + float factor/*=0.1*/, + int patience/*=10*/, + double threshold/*=1e-4*/, + @Cast("torch::optim::ReduceLROnPlateauScheduler::ThresholdMode") int threshold_mode/*=torch::optim::ReduceLROnPlateauScheduler::rel*/, + int cooldown/*=0*/, + @StdVector float[] min_lr/*=std::vector()*/, + double eps/*=1e-8*/, + @Cast("bool") boolean verbose/*=false*/) { super((Pointer)null); allocate(optimizer, mode, factor, patience, threshold, threshold_mode, cooldown, min_lr, eps, verbose); } + private native void allocate( + @ByRef Optimizer optimizer, + @Cast("torch::optim::ReduceLROnPlateauScheduler::SchedulerMode") int mode/*=torch::optim::ReduceLROnPlateauScheduler::min*/, + float factor/*=0.1*/, + int patience/*=10*/, + double threshold/*=1e-4*/, + @Cast("torch::optim::ReduceLROnPlateauScheduler::ThresholdMode") int threshold_mode/*=torch::optim::ReduceLROnPlateauScheduler::rel*/, + int cooldown/*=0*/, + @StdVector float[] min_lr/*=std::vector()*/, + double eps/*=1e-8*/, + @Cast("bool") boolean verbose/*=false*/); + + public native void step(float metric); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ReduceOp.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ReduceOp.java new file mode 100644 index 00000000000..33564e5c685 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ReduceOp.java @@ -0,0 +1,104 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +// Other ReduceOps that need different supplementary data can also +// derive from _SupplementBase. +@Namespace("c10d") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class ReduceOp extends CustomClassHolder { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public ReduceOp(Pointer p) { super(p); } + /** Native array allocator. Access with {@link Pointer#position(long)}. */ + public ReduceOp(long size) { super((Pointer)null); allocateArray(size); } + private native void allocateArray(long size); + @Override public ReduceOp position(long position) { + return (ReduceOp)super.position(position); + } + @Override public ReduceOp getPointer(long i) { + return new ReduceOp((Pointer)this).offsetAddress(i); + } + + // note(crcrpar): RedOpType could be defined outside of `ReduceOp` + public enum RedOpType { + SUM((byte)(0)), + AVG((byte)(1)), + PRODUCT((byte)(2)), + MIN((byte)(3)), + MAX((byte)(4)), + BAND((byte)(5)), // Bitwise AND + BOR((byte)(6)), // Bitwise OR + BXOR((byte)(7)), // Bitwise XOR + PREMUL_SUM((byte)(8)), // Multiply by a user-supplied constant before summing. 
+ UNUSED((byte)(9)); + + public final byte value; + private RedOpType(byte v) { this.value = v; } + private RedOpType(RedOpType e) { this.value = e.value; } + public RedOpType intern() { for (RedOpType e : values()) if (e.value == value) return e; return this; } + @Override public String toString() { return intern().name(); } + } + + public ReduceOp() { super((Pointer)null); allocate(); } + private native void allocate(); + + public ReduceOp(RedOpType op) { super((Pointer)null); allocate(op); } + private native void allocate(RedOpType op); + public ReduceOp(@Cast("c10d::ReduceOp::RedOpType") byte op) { super((Pointer)null); allocate(op); } + private native void allocate(@Cast("c10d::ReduceOp::RedOpType") byte op); + + public ReduceOp( + RedOpType op, + @IntrusivePtr("c10d::_SupplementBase") @Cast({"", "c10::intrusive_ptr&"}) _SupplementBase optional_supplement) { super((Pointer)null); allocate(op, optional_supplement); } + private native void allocate( + RedOpType op, + @IntrusivePtr("c10d::_SupplementBase") @Cast({"", "c10::intrusive_ptr&"}) _SupplementBase optional_supplement); + public ReduceOp( + @Cast("c10d::ReduceOp::RedOpType") byte op, + @IntrusivePtr("c10d::_SupplementBase") @Cast({"", "c10::intrusive_ptr&"}) _SupplementBase optional_supplement) { super((Pointer)null); allocate(op, optional_supplement); } + private native void allocate( + @Cast("c10d::ReduceOp::RedOpType") byte op, + @IntrusivePtr("c10d::_SupplementBase") @Cast({"", "c10::intrusive_ptr&"}) _SupplementBase optional_supplement); + + // The heap resource supplement_, if it exists, is managed by a + // c10::intrusive_ptr, so constructors and operator= can be simple + public ReduceOp(@Const @ByRef ReduceOp other) { super((Pointer)null); allocate(other); } + private native void allocate(@Const @ByRef ReduceOp other); + public native @ByRef @Name("operator =") ReduceOp put(@Const @ByRef ReduceOp other); + + public native @Name("operator c10d::ReduceOp::RedOpType") RedOpType asRedOpType(); + + public native @Cast("bool") @Name("operator ==") boolean equals(@Cast("const std::uint8_t") byte other); + + public native @Cast("bool") @Name("operator ==") boolean equals(RedOpType other); + + // todo(crcrpar): Handle `RedOpType::PREMUL_SUM` with its scaling factor. + public native @Cast("bool") @Name("operator ==") boolean equals(@Const @ByRef ReduceOp other); + + public native RedOpType op_(); public native ReduceOp op_(RedOpType setter); + // supplement_ is "type-erased" storage for optional supplementary + // data the op might need. + // The point of use will know the derived type supplement_ really is, + // and downcast its pointer to extract the data as the needed type(s). + // Right now, only PREMUL_SUM needs supplementary data, but the same + // mechanism could extend to support other nontrivial reduce ops with + // different supplementary payloads. 
+ public native @IntrusivePtr("c10d::_SupplementBase") @Cast({"", "c10::intrusive_ptr&"}) _SupplementBase supplement_(); public native ReduceOp supplement_(_SupplementBase setter); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ReduceOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ReduceOptions.java new file mode 100644 index 00000000000..a138dfdf37e --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ReduceOptions.java @@ -0,0 +1,44 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +@Namespace("c10d") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class ReduceOptions extends Pointer { + static { Loader.load(); } + /** Default native constructor. */ + public ReduceOptions() { super((Pointer)null); allocate(); } + /** Native array allocator. Access with {@link Pointer#position(long)}. */ + public ReduceOptions(long size) { super((Pointer)null); allocateArray(size); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public ReduceOptions(Pointer p) { super(p); } + private native void allocate(); + private native void allocateArray(long size); + @Override public ReduceOptions position(long position) { + return (ReduceOptions)super.position(position); + } + @Override public ReduceOptions getPointer(long i) { + return new ReduceOptions((Pointer)this).offsetAddress(i); + } + + public native @ByRef @NoOffset ReduceOp reduceOp(); public native ReduceOptions reduceOp(ReduceOp setter); + public native @Cast("int64_t") @NoOffset long rootRank(); public native ReduceOptions rootRank(long setter); + public native @Cast("int64_t") @NoOffset long rootTensor(); public native ReduceOptions rootTensor(long setter); + public native @ByRef @NoOffset Milliseconds timeout(); public native ReduceOptions timeout(Milliseconds setter); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ReduceScatterOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ReduceScatterOptions.java new file mode 100644 index 00000000000..1219a135283 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ReduceScatterOptions.java @@ -0,0 +1,43 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +@Namespace("c10d") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) 
+public class ReduceScatterOptions extends Pointer { + static { Loader.load(); } + /** Default native constructor. */ + public ReduceScatterOptions() { super((Pointer)null); allocate(); } + /** Native array allocator. Access with {@link Pointer#position(long)}. */ + public ReduceScatterOptions(long size) { super((Pointer)null); allocateArray(size); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public ReduceScatterOptions(Pointer p) { super(p); } + private native void allocate(); + private native void allocateArray(long size); + @Override public ReduceScatterOptions position(long position) { + return (ReduceScatterOptions)super.position(position); + } + @Override public ReduceScatterOptions getPointer(long i) { + return new ReduceScatterOptions((Pointer)this).offsetAddress(i); + } + + public native @ByRef @NoOffset ReduceOp reduceOp(); public native ReduceScatterOptions reduceOp(ReduceOp setter); + public native @ByRef @NoOffset Milliseconds timeout(); public native ReduceScatterOptions timeout(Milliseconds setter); + public native @Cast("bool") @NoOffset boolean asyncOp(); public native ReduceScatterOptions asyncOp(boolean setter); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Reducer.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Reducer.java new file mode 100644 index 00000000000..fe1e0b83a18 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Reducer.java @@ -0,0 +1,186 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +@Namespace("c10d") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class Reducer extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public Reducer(Pointer p) { super(p); } + + // The constructor takes a list of variables (i.e. parameters) for this + // process's single model replica (as DDP assumes single-process + // single-device). The bucket assignment for this reducer, `bucket_indices`, + // is specified as a list of buckets, each of which is specified as a list of + // indices into the bucket's `variables` list. 
+ public Reducer( + @ByVal TensorVector params, + @ByVal SizeTVectorVector bucket_indices, + @Cast("const std::vector*") @ByRef SizeTVector per_bucket_size_limits, + @IntrusivePtr("c10d::ProcessGroup") @Cast({"", "c10::intrusive_ptr&"}) ProcessGroup process_group, + @ByVal BoolVector expect_sparse_gradients, + @Cast("int64_t") long bucket_bytes_cap, + @Cast("bool") boolean find_unused_parameters, + @Cast("bool") boolean gradient_as_bucket_view, + @ByVal SizeTStringMap param_names, + @Cast("int64_t") long first_bucket_bytes_cap) { super((Pointer)null); allocate(params, bucket_indices, per_bucket_size_limits, process_group, expect_sparse_gradients, bucket_bytes_cap, find_unused_parameters, gradient_as_bucket_view, param_names, first_bucket_bytes_cap); } + private native void allocate( + @ByVal TensorVector params, + @ByVal SizeTVectorVector bucket_indices, + @Cast("const std::vector*") @ByRef SizeTVector per_bucket_size_limits, + @IntrusivePtr("c10d::ProcessGroup") @Cast({"", "c10::intrusive_ptr&"}) ProcessGroup process_group, + @ByVal BoolVector expect_sparse_gradients, + @Cast("int64_t") long bucket_bytes_cap, + @Cast("bool") boolean find_unused_parameters, + @Cast("bool") boolean gradient_as_bucket_view, + @ByVal SizeTStringMap param_names, + @Cast("int64_t") long first_bucket_bytes_cap); + + // To (re-)initialize bucket assignment, pass a list of buckets, each of + // which is specified by a list of indices in the bucket's `variables` list. + // This function performs validation that the variables within a bucket + // all live on the same device and have the same dimensionality. + public native void initialize_buckets(@ByVal SizeTVectorVector bucket_indices); + + public native void autograd_hook(@Cast("size_t") long index); + + // This function is called when the forward function has produced an output, + // and the user wishes to reduce gradients in the backwards pass. + // If they don't, and wish to accumulate gradients before reducing them, + // a call to this function can simply be omitted. + public native void prepare_for_backward(@Const @ByRef TensorVector outputs); + + // Called at the beginning of forward() inside DistributedDataParallel, + // right now it captures the starting time of forward in each iteration. + public native void prepare_for_forward(); + + // Returns the relative time in nanoseconds when gradients were ready, + // with respect to the time `prepare_for_backward` was called. The + // vector is for parameters for a single model replica. + public native @ByVal @Cast("std::vector*") LongVector get_backward_stats(); + + // Registers a hook to the reducer. The hook is `CommHookInterface` + // type to allow both Python and CPP hooks. This function can only + // be called once before calling backward. + // Cannot combine with the call of `register_builtin_comm_hook`. + public native void register_comm_hook(@UniquePtr CommHookInterface iface); + + // Registers a built-in C++ comm hook to the reducer. This function can only + // be called once before calling backward. + // Cannot combine with the call of `register_comm_hook`. + public native void register_builtin_comm_hook(BuiltinCommHookType comm_hook_type); + public native void register_builtin_comm_hook(@Cast("c10d::BuiltinCommHookType") byte comm_hook_type); + + // Informs reducer that optimizer is running in backward, so gradients + // don't need to be copied from buckets as the optimizer would've already + // been applied. 
+ public native void set_optimizer_in_backward(); + + // Runs allreduce or installed communication hook given GradBucket instance. + public native @IntrusivePtr("c10::ivalue::Future") @Cast({"", "c10::intrusive_ptr&"}) Future run_comm_hook( + @ByRef GradBucket grad_bucket); + + // Runs default allreduce hook. + public native @IntrusivePtr("c10::ivalue::Future") @Cast({"", "c10::intrusive_ptr&"}) Future run_allreduce_hook( + @ByRef GradBucket grad_bucket); + + // Returns gradient buckets in sequential order of buckets_. This is the order + // in which buckets are reduced across processes. If return_zero_tensors=true, + // will return zero tensors of the same shape instead of the true tensors. + public native @StdVector GradBucket get_grad_buckets( + @Cast("bool") boolean return_zero_tensors/*=true*/); + public native @StdVector GradBucket get_grad_buckets(); + + // Rebuild buckets based on rebuilt_params_ and rebuilt_param_indices_ + // according to when tensors received grads in the backward pass. + // TODO this function makes broadcast communication call and + // could be overlapped with next forward() call, thus + // it could be async. Will make it async when rebuilding buckets for + // find_unused_parameters = true case, as we could rebuild buckets more than + // once for find_unused_parameters = true case, where subgraphs are trained + // and parameter indices order may change more frequently. + // For find_unused_parameters = false case, buckets are only rebuilt once, + // the performance cost is negligible. Returns true if the buckets were + // rebuilt. + public native @Cast("bool") boolean rebuild_buckets(); + + public native void setSparseMetadata(@ByRef StringTensorMap metadata); + + // Install futures that should be awaited at end of backwards. Currently these + // are only used by user-defined custom buffer reduction hooks, but can be + // generalized to any user-originating futures that need to be awaited. + public native void install_futures(@ByVal FutureList futs); + + // Returns true if we should rebuild buckets, else false. We only rebuild + // buckets once after the first iteration and never rebuild them if + // find_unused_parameters_. + public native @Cast("bool") boolean should_rebuild_buckets(); + + // Pushes all parameters to be rebuilt. + public native void push_rebuilt_params_for_all_indices(); + + // Creates and sets ForwardPassWorkHandle given a Work and the + // corresponding tensor being reduced. + public native void set_forward_pass_work_handle( + @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work forwardPassWorkHandle, + @Cast("bool") boolean useStaticWorldSize); + + // Retrieve on-device tensors used to track locally unused parameters. It is + // a tensor where index i = 1 if the Variable with that index has been used. + public native @ByVal Tensor get_local_used_map_on_device(); + + // An function for users to set sample_rate of collecting + // runtime stats. The time stats will be recorded for the + // first 10 iterations, after 10 iterations time stats will be + // recorded once every "sample_rate" training iterations. + public native void set_ddp_runtime_logging_sample_rate(int sample_rate); + + // Specify the training graph is static. + public native void set_static_graph(); + + // Delay all reduce to be after all gradients' calculation is complete. + public native void delay_all_reduce(); + + public native void set_mixed_precision_param_dtype(ScalarType dtype); + + // Weak reference to associated DDP logger. 
The reference is weak to avoid + // refcycle between reducer and logger. + public native void set_logger(@WeakPtr("c10d::Logger") @ByVal Logger logger); + + // When graph is not explicitly set by user as static and has unused + // parameters, this will return whether the graph has been static until the + // current iteration, which means unused params set has not changed. + public native @Cast("bool") boolean ddp_graph_static(); + + // Removes autograd hooks registered by the Reducer on the model parameters. + public native void remove_autograd_hooks(); + + // Checks whether or not the reducer has finalized the current backward + // iteration. + public native void check_finalized(); + + // Updates the underlying process group used by DDP with the new process + // group. + public native void update_process_group( + @IntrusivePtr("c10d::ProcessGroup") @Cast({"", "c10::intrusive_ptr&"}) ProcessGroup new_process_group); + + // Resets reducer state. + public native void reset_state(); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad1dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad1dImpl.java index 270f2240625..f7135f3bb55 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad1dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad1dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -22,7 +23,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies ReflectionPad over a 1-D input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.ReflectionPad1d to + * See https://pytorch.org/docs/main/nn.html#torch.nn.ReflectionPad1d to * learn about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::ReflectionPad1dOptions} class to learn diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad1dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad1dImplBase.java index db124763911..e6ab50bd25d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad1dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad1dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad1dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad1dImplCloneable.java index fa912b87928..d3404b69dc1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad1dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad1dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class ReflectionPad1dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad1dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad1dOptions.java index 625d1b1a865..9304a618b82 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad1dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad1dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad2dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad2dImpl.java index ae1160fe8ae..d28d080f235 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad2dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad2dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -22,7 +23,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies ReflectionPad over a 2-D input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.ReflectionPad2d to + * See https://pytorch.org/docs/main/nn.html#torch.nn.ReflectionPad2d to * learn about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::ReflectionPad2dOptions} class to learn diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad2dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad2dImplBase.java index 898e7143f99..c4b2be98a35 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad2dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad2dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad2dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad2dImplCloneable.java index 27f9c65e64b..68c7ea4604c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad2dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad2dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class ReflectionPad2dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad2dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad2dOptions.java index fe3298d8b54..f2c5dde89fe 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad2dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad2dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad3dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad3dImpl.java index 7d5e9807ad8..11adca848ed 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad3dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad3dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -22,7 +23,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies ReflectionPad over a 3-D input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.ReflectionPad3d to + * See https://pytorch.org/docs/main/nn.html#torch.nn.ReflectionPad3d to * learn about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::ReflectionPad3dOptions} class to learn diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad3dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad3dImplBase.java index fe6fb57060b..ba8f7811b63 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad3dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad3dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad3dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad3dImplCloneable.java index e42b10962e3..f98ba50e909 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad3dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad3dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class ReflectionPad3dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad3dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad3dOptions.java index 7a24110f13c..2f1fe07ef9b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad3dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ReflectionPad3dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/RegisterOperators.java b/pytorch/src/gen/java/org/bytedeco/pytorch/RegisterOperators.java index 725c5b19fcd..3f0decde2d7 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/RegisterOperators.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/RegisterOperators.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/RegistrationHandleRAII.java b/pytorch/src/gen/java/org/bytedeco/pytorch/RegistrationHandleRAII.java index b55abb4b3cd..92440a7bed6 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/RegistrationHandleRAII.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/RegistrationHandleRAII.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad1dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad1dImpl.java index 63fc3e48554..d7c85447207 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad1dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad1dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import 
org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -22,7 +23,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies ReplicationPad over a 1-D input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.ReplicationPad1d to + * See https://pytorch.org/docs/main/nn.html#torch.nn.ReplicationPad1d to * learn about the exact behavior of this module. * * See the documentation for {@code torch::nn::ReplicationPad1dOptions} class to diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad1dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad1dImplBase.java index bc52ab36db2..92092e000a2 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad1dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad1dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad1dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad1dImplCloneable.java index f472e1a83d8..3114010887c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad1dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad1dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class ReplicationPad1dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad1dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad1dOptions.java index 5346f421828..353a2f0f216 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad1dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad1dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad2dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad2dImpl.java index 30640e3f68f..a51935d49f4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad2dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad2dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -22,7 +23,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies ReplicationPad over a 2-D input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.ReplicationPad2d to + * See https://pytorch.org/docs/main/nn.html#torch.nn.ReplicationPad2d to * learn about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::ReplicationPad2dOptions} class to diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad2dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad2dImplBase.java index d25e4f54982..73ea9f96f54 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad2dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad2dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad2dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad2dImplCloneable.java index 2ba10fe5f5d..9adfa826174 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad2dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad2dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class ReplicationPad2dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad2dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad2dOptions.java index a741c82a25c..ebab478e57a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad2dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad2dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad3dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad3dImpl.java index 360f499b0cc..da0c4d4325a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad3dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad3dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -22,7 +23,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies ReplicationPad over a 3-D input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.ReplicationPad3d to + * See https://pytorch.org/docs/main/nn.html#torch.nn.ReplicationPad3d to * learn about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::ReplicationPad3dOptions} class to diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad3dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad3dImplBase.java index 062cdc1f134..248f542b2e6 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad3dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad3dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad3dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad3dImplCloneable.java index 0e9beedb2b6..a98a2adecd0 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad3dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad3dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class ReplicationPad3dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad3dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad3dOptions.java index 96968098c38..a2893757e23 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad3dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ReplicationPad3dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Resolver.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Resolver.java index 3ee73638c20..d4b79ca4bcd 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Resolver.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Resolver.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ResolverVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ResolverVector.java index 0207a91aa71..7a77f9bed35 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ResolverVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ResolverVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Result.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Result.java index e767b6e3fe0..9fa6e82e40c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Result.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Result.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; 
import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Return.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Return.java index 6ba6c2019f7..42cd48a9ce3 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Return.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Return.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,8 +25,8 @@ public class Return extends Stmt { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ public Return(Pointer p) { super(p); } - public Return(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public Return(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @ByVal Expr expr(); public static native @ByVal Return create(@Const @ByRef SourceRange range, @Const @ByRef Expr value); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SELUImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SELUImpl.java index db2f93a4c7a..36cbc3009ad 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SELUImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SELUImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ SELU ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies the selu function element-wise. - * See https://pytorch.org/docs/master/nn.html#torch.nn.SELU to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.SELU to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::SELUOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SELUImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SELUImplCloneable.java index 548d481eb01..09427906ab1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SELUImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SELUImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class SELUImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SELUOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SELUOptions.java index be137b3e09c..b8b0694fe39 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SELUOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SELUOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SGD.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SGD.java index 3e3a716c706..ad5773eb520 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SGD.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SGD.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SGDOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SGDOptions.java index 9ea4c47e1b9..f8e79e35655 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SGDOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SGDOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import 
org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace torch diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SGDParamState.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SGDParamState.java index a505bf28feb..a1231cd09fb 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SGDParamState.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SGDParamState.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SafePyHandle.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SafePyHandle.java index f32f01de6a9..983041409a5 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SafePyHandle.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SafePyHandle.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SafePyObject.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SafePyObject.java index da81c93a472..9e7d278fede 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SafePyObject.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SafePyObject.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SafePyObjectOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SafePyObjectOptional.java index d768083f69d..66d36214001 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SafePyObjectOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SafePyObjectOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import 
org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class SafePyObjectOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Sampler.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Sampler.java index c8b54ed7856..9f3bfa416ca 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Sampler.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Sampler.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SavedTensorDefaultHooks.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SavedTensorDefaultHooks.java index f83fe7f1bf4..b7b9f8d1b0b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SavedTensorDefaultHooks.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SavedTensorDefaultHooks.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace impl diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SavedTensorDefaultHooksTLS.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SavedTensorDefaultHooksTLS.java index c7b05f12613..b95d1ed973c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SavedTensorDefaultHooksTLS.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SavedTensorDefaultHooksTLS.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static 
org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SavedVariableHooks.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SavedVariableHooks.java index d2e499a431f..74d4fad4a44 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SavedVariableHooks.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SavedVariableHooks.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Scalar.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Scalar.java index ff1e5629149..7b6221bf1c9 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Scalar.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Scalar.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ScalarArrayRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ScalarArrayRef.java index 2ef6a5b2833..83ef5718ee2 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ScalarArrayRef.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ScalarArrayRef.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ScalarOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ScalarOptional.java index 8f023768391..157bd77707b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ScalarOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ScalarOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; 
-@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class ScalarOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ScalarTypeArrayRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ScalarTypeArrayRef.java index d6a59213441..468266c6b43 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ScalarTypeArrayRef.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ScalarTypeArrayRef.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ScalarTypeEnumerationType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ScalarTypeEnumerationType.java index 4fcf4aabcd0..4e51fd235aa 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ScalarTypeEnumerationType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ScalarTypeEnumerationType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ScalarTypeOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ScalarTypeOptional.java index e3983376987..a6301cd5f82 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ScalarTypeOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ScalarTypeOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class ScalarTypeOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ScalarTypeType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ScalarTypeType.java index 5d433664e07..998a2128bee 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ScalarTypeType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ScalarTypeType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ScalarTypeTypePtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ScalarTypeTypePtr.java index 5469c18274f..0509f602a49 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ScalarTypeTypePtr.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ScalarTypeTypePtr.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ScalarTypeVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ScalarTypeVector.java index ad0b5fd656d..e346a579d7b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ScalarTypeVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ScalarTypeVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ScatterOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ScatterOptions.java new file mode 100644 index 00000000000..9a051f1ae53 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ScatterOptions.java @@ -0,0 +1,43 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static 
org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +@Namespace("c10d") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class ScatterOptions extends Pointer { + static { Loader.load(); } + /** Default native constructor. */ + public ScatterOptions() { super((Pointer)null); allocate(); } + /** Native array allocator. Access with {@link Pointer#position(long)}. */ + public ScatterOptions(long size) { super((Pointer)null); allocateArray(size); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public ScatterOptions(Pointer p) { super(p); } + private native void allocate(); + private native void allocateArray(long size); + @Override public ScatterOptions position(long position) { + return (ScatterOptions)super.position(position); + } + @Override public ScatterOptions getPointer(long i) { + return new ScatterOptions((Pointer)this).offsetAddress(i); + } + + public native @Cast("int64_t") long rootRank(); public native ScatterOptions rootRank(long setter); + public native @ByRef Milliseconds timeout(); public native ScatterOptions timeout(Milliseconds setter); + public native @Cast("bool") boolean asyncOp(); public native ScatterOptions asyncOp(boolean setter); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SchemaArgument.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SchemaArgument.java index 2f37d5b84d5..08e2afce904 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SchemaArgument.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SchemaArgument.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SchemaInfo.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SchemaInfo.java index c5bfa2631e5..76e4ae652d8 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SchemaInfo.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SchemaInfo.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Scope.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Scope.java index 2c67df9774f..d43b27c9e74 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Scope.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Scope.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ 
import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ScopeOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ScopeOptional.java index 4d679bc4ca2..b5556ed50cb 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ScopeOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ScopeOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class ScopeOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ScriptTypeParser.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ScriptTypeParser.java index ef6e282f18e..986a1a4a3b3 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ScriptTypeParser.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ScriptTypeParser.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -46,7 +47,7 @@ public class ScriptTypeParser extends Pointer { public native @ByVal Type.TypePtr parseTypeFromExpr(@Const @ByRef Expr expr); - public native @ByVal @Cast("c10::optional >*") T_TypePtrLong_TOptional parseBroadcastList( + public native @ByVal @Cast("std::optional >*") T_TypePtrLong_TOptional parseBroadcastList( @Const @ByRef Expr expr); public native @ByVal Type.TypePtr parseType(@StdString BytePointer str); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Select.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Select.java index ee3ea869004..f20f0a817b9 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Select.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Select.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static 
org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,8 +25,8 @@ public class Select extends Expr { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ public Select(Pointer p) { super(p); } - public Select(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public Select(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @ByVal Expr value(); public native @ByVal Ident selector(); public static native @ByVal Select create( diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Self.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Self.java index 71af41a423b..7c2a24257b1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Self.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Self.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SequentialImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SequentialImpl.java index 0dd66bed09f..536d39cb974 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SequentialImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SequentialImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -111,7 +112,7 @@ public SequentialImpl( /** Special cloning function for {@code Sequential} because it does not use * {@code reset()}. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); /** {@code reset()} is empty for {@code Sequential}, since it does not have parameters of @@ -163,9 +164,9 @@ public SequentialImpl( public native @ByVal Tensor forward(@Const @ByRef Tensor input1, @Const @ByRef Tensor input2, @Const @ByRef Tensor input3, @Const @ByRef Tensor input4); public native @ByVal Tensor forward(@Const @ByRef Tensor input1, @Const @ByRef Tensor input2, @Const @ByRef Tensor input3, @Const @ByRef Tensor input4, @Const @ByRef Tensor input5, @Const @ByRef Tensor input6); public native @ByVal Tensor forward(@Const @ByRef Tensor input1, @Const @ByRef Tensor input2, @Const @ByRef Tensor input3, @Const @ByRef Tensor input4, @Const @ByRef Tensor input5, @Const @ByRef Tensor input6, @Const @ByRef Tensor input7, @Const @ByRef Tensor input8); - public native @ByVal Tensor forward(@Const @ByRef Tensor input, @ByRef(nullValue = "c10::optional(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... output_size); - public native @ByVal Tensor forward(@Const @ByRef Tensor input, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") LongArrayRefOptional output_size); - public native @ByVal Tensor forward(@Const @ByRef Tensor input, @Const @ByRef Tensor indices, @Const @ByRef(nullValue = "c10::optional >(c10::nullopt)") LongVectorOptional output_size); + public native @ByVal Tensor forward(@Const @ByRef Tensor input, @ByRef(nullValue = "std::optional(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... 
output_size); + public native @ByVal Tensor forward(@Const @ByRef Tensor input, @Const @ByRef(nullValue = "std::optional(c10::nullopt)") LongArrayRefOptional output_size); + public native @ByVal Tensor forward(@Const @ByRef Tensor input, @Const @ByRef Tensor indices, @Const @ByRef(nullValue = "std::optional >(c10::nullopt)") LongVectorOptional output_size); public native @ByVal @Name("forward>>") T_TensorT_TensorTensor_T_T forwardT_TensorT_TensorTensor_T_T(@Const @ByRef Tensor input); public native @ByVal @Name("forward>>") T_TensorT_TensorTensor_T_T forwardT_TensorT_TensorTensor_T_T(@Const @ByRef Tensor input, @ByVal(nullValue = "torch::optional >{}") T_TensorTensor_TOptional hx_opt); public native @ByVal @Name("forward>") T_TensorTensor_T forwardT_TensorTensor_T(@Const @ByRef Tensor input); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SequentialImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SequentialImplCloneable.java index f87ed1c64d5..ea1fd2e8c4a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SequentialImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SequentialImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class SequentialImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SequentialSampler.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SequentialSampler.java index ca93ad89981..5f025a4ccdf 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SequentialSampler.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SequentialSampler.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -31,7 +32,7 @@ public class SequentialSampler extends Sampler { private native void allocate(@Cast("size_t") long size); /** Resets the {@code SequentialSampler} to zero. */ - public native void reset(@ByVal(nullValue = "c10::optional(c10::nullopt)") SizeTOptional new_size); + public native void reset(@ByVal(nullValue = "std::optional(c10::nullopt)") SizeTOptional new_size); public native void reset(); /** Returns the next batch of indices. 
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ShapeSymbol.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ShapeSymbol.java index 91687445d26..c30e3acfb8c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ShapeSymbol.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ShapeSymbol.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ShapeSymbolVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ShapeSymbolVector.java index 6bdaafcba7d..d9717926ad9 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ShapeSymbolVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ShapeSymbolVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ShapeSymbolVectorOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ShapeSymbolVectorOptional.java index de110f1f7d9..e1a22884ae7 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ShapeSymbolVectorOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ShapeSymbolVectorOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class ShapeSymbolVectorOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SharedClassTypeVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SharedClassTypeVector.java index 8da367f19d3..d6f2e45afd6 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SharedClassTypeVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SharedClassTypeVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SharedModuleVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SharedModuleVector.java index 21d6dc1b331..0f048bcba0c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SharedModuleVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SharedModuleVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SharedParserData.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SharedParserData.java index 7736cac8459..4863521207d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SharedParserData.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SharedParserData.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SharedSugaredValueVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SharedSugaredValueVector.java index a665207a62a..4798ca00c49 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SharedSugaredValueVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SharedSugaredValueVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static 
org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SharedType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SharedType.java index 95cfda501a6..6b59f2e17bc 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SharedType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SharedType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ShortArrayRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ShortArrayRef.java index 882831086bc..c39368e619d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ShortArrayRef.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ShortArrayRef.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ShortSet.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ShortSet.java new file mode 100644 index 00000000000..8f2b4705c8b --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ShortSet.java @@ -0,0 +1,47 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + +@Name("std::unordered_set") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class ShortSet extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ + public ShortSet(Pointer p) { super(p); } + public ShortSet() { allocate(); } + private native void allocate(); + public native @Name("operator =") @ByRef ShortSet put(@ByRef ShortSet x); + + public boolean empty() { return size() == 0; } + public native long size(); + + public short front() { try (Iterator it = begin()) { return it.get(); } } + public native void insert(short value); + public native void erase(short value); + public native @ByVal Iterator begin(); + public native @ByVal Iterator end(); + @NoOffset @Name("iterator") public static class Iterator extends Pointer { + public Iterator(Pointer p) { super(p); } + public Iterator() { } + + public native @Name("operator ++") @ByRef Iterator increment(); + public native @Name("operator ==") boolean equals(@ByRef Iterator it); + public native @Name("operator *") short get(); + } +} + diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SiLUImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SiLUImpl.java index f4043e63c10..d838a70b19b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SiLUImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SiLUImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ SiLU ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies silu over a given input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.SiLU to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.SiLU to learn * about the exact behavior of this module. */ @Namespace("torch::nn") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class SiLUImpl extends SiLUImplCloneable { diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SiLUImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SiLUImplCloneable.java index 6f1b7102450..913af82d69d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SiLUImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SiLUImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class SiLUImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SigmoidImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SigmoidImpl.java index 4499765c75e..91ce6731c9a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SigmoidImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SigmoidImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Sigmoid ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies sigmoid over a given input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.Sigmoid to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.Sigmoid to learn * about the exact behavior of this module. */ @Namespace("torch::nn") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class SigmoidImpl extends SigmoidImplCloneable { diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SigmoidImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SigmoidImplCloneable.java index 5bdcf66cf60..737a22793c0 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SigmoidImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SigmoidImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class SigmoidImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SimpleSelf.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SimpleSelf.java index fcd2a016189..1db226662d3 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SimpleSelf.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SimpleSelf.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SimpleValue.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SimpleValue.java index aaf5530d33d..3f7a9f1f4c4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SimpleValue.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SimpleValue.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -33,7 +34,7 @@ public class SimpleValue extends SugaredValue { public native @ByVal SharedSugaredValueVector asTuple( @Const @ByRef SourceRange loc, @ByRef GraphFunction m, - @Const @ByRef(nullValue = "c10::optional{}") SizeTOptional size_hint); + @Const @ByRef(nullValue = "std::optional{}") SizeTOptional size_hint); public native @ByVal SharedSugaredValueVector asTuple( @Const @ByRef SourceRange loc, @ByRef GraphFunction m); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SingletonTypePtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SingletonTypePtr.java index 0a01e31b87e..2358a117666 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SingletonTypePtr.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SingletonTypePtr.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SizeInput.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SizeInput.java new file mode 100644 index 00000000000..1e9c4ebed32 --- 
/dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SizeInput.java @@ -0,0 +1,43 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +@Namespace("torch::dynamo::autograd") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class SizeInput extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public SizeInput(Pointer p) { super(p); } + + // Note: int value is still needed when dynamic to pass as an arg + public enum DynType { STATIC((byte)(0)), DYNAMIC((byte)(1)); + + public final byte value; + private DynType(byte v) { this.value = v; } + private DynType(DynType e) { this.value = e.value; } + public DynType intern() { for (DynType e : values()) if (e.value == value) return e; return this; } + @Override public String toString() { return intern().name(); } + } + public SizeInput(DynType dt, @Cast("int64_t") long v) { super((Pointer)null); allocate(dt, v); } + private native void allocate(DynType dt, @Cast("int64_t") long v); + public SizeInput(@Cast("torch::dynamo::autograd::SizeInput::DynType") byte dt, @Cast("int64_t") long v) { super((Pointer)null); allocate(dt, v); } + private native void allocate(@Cast("torch::dynamo::autograd::SizeInput::DynType") byte dt, @Cast("int64_t") long v); + public native DynType dyn_type(); public native SizeInput dyn_type(DynType setter); + public native @Cast("int64_t") long value(); public native SizeInput value(long setter); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SizeTArrayRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SizeTArrayRef.java index b10768b2f78..cdfa9ac6949 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SizeTArrayRef.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SizeTArrayRef.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SizeTMatchedSchemaPair.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SizeTMatchedSchemaPair.java index c20957698ca..0028c4745d5 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SizeTMatchedSchemaPair.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SizeTMatchedSchemaPair.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import 
org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SizeTOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SizeTOptional.java index 9cff7db01a0..2ad4ffe6c97 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SizeTOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SizeTOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class SizeTOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SizeTStringMap.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SizeTStringMap.java new file mode 100644 index 00000000000..6e81b9754d3 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SizeTStringMap.java @@ -0,0 +1,52 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + +@Name("std::unordered_map") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class SizeTStringMap extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ + public SizeTStringMap(Pointer p) { super(p); } + public SizeTStringMap() { allocate(); } + private native void allocate(); + public native @Name("operator =") @ByRef SizeTStringMap put(@ByRef SizeTStringMap x); + + public boolean empty() { return size() == 0; } + public native long size(); + + public BytePointer front() { return get(0); } + public BytePointer back() { return get(size() - 1); } + @Index public native @StdString BytePointer get(@Cast("size_t") long i); + public native SizeTStringMap put(@Cast("size_t") long i, BytePointer value); + @ValueSetter @Index public native SizeTStringMap put(@Cast("size_t") long i, @StdString String value); + + public native void erase(@ByVal Iterator pos); + public native @ByVal Iterator begin(); + public native @ByVal Iterator end(); + @NoOffset @Name("iterator") public static class Iterator extends Pointer { + public Iterator(Pointer p) { super(p); } + public Iterator() { } + + public native @Name("operator ++") @ByRef Iterator increment(); + public native @Name("operator ==") boolean equals(@ByRef Iterator it); + public native @Name("operator *().first") @MemberGetter @Cast("size_t") long first(); + public native @Name("operator *().second") @MemberGetter @StdString BytePointer second(); + } +} + diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SizeTVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SizeTVector.java index 6ff996f248f..86a57844bff 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SizeTVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SizeTVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SizeTVectorOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SizeTVectorOptional.java index f139bd40109..f91766aa5ad 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SizeTVectorOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SizeTVectorOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class SizeTVectorOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SizeTVectorVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SizeTVectorVector.java new file mode 100644 index 00000000000..8e85215b499 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SizeTVectorVector.java @@ -0,0 +1,91 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + +@Name("std::vector >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class SizeTVectorVector extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public SizeTVectorVector(Pointer p) { super(p); } + public SizeTVectorVector(SizeTVector value) { this(1); put(0, value); } + public SizeTVectorVector(SizeTVector ... array) { this(array.length); put(array); } + public SizeTVectorVector() { allocate(); } + public SizeTVectorVector(long n) { allocate(n); } + private native void allocate(); + private native void allocate(@Cast("size_t") long n); + public native @Name("operator =") @ByRef SizeTVectorVector put(@ByRef SizeTVectorVector x); + + public boolean empty() { return size() == 0; } + public native long size(); + public void clear() { resize(0); } + public native void resize(@Cast("size_t") long n); + + public SizeTVector front() { return get(0); } + public SizeTVector back() { return get(size() - 1); } + @Index(function = "at") public native @Cast("std::vector*") @ByRef SizeTVector get(@Cast("size_t") long i); + public native SizeTVectorVector put(@Cast("size_t") long i, SizeTVector value); + + public native @ByVal Iterator insert(@ByVal Iterator pos, @Cast("std::vector*") @ByRef SizeTVector value); + public native @ByVal Iterator erase(@ByVal Iterator pos); + public native @ByVal Iterator begin(); + public native @ByVal Iterator end(); + @NoOffset @Name("iterator") public static class Iterator extends Pointer { + public Iterator(Pointer p) { super(p); } + public Iterator() { } + + public native @Name("operator ++") @ByRef Iterator increment(); + public native @Name("operator ==") boolean equals(@ByRef Iterator it); + public native @Name("operator *") @Cast("std::vector*") @ByRef @Const SizeTVector get(); + } + + public SizeTVector[] get() { + SizeTVector[] array = new SizeTVector[size() < Integer.MAX_VALUE ? 
(int)size() : Integer.MAX_VALUE]; + for (int i = 0; i < array.length; i++) { + array[i] = get(i); + } + return array; + } + @Override public String toString() { + return java.util.Arrays.toString(get()); + } + + public SizeTVector pop_back() { + long size = size(); + SizeTVector value = get(size - 1); + resize(size - 1); + return value; + } + public SizeTVectorVector push_back(SizeTVector value) { + long size = size(); + resize(size + 1); + return put(size, value); + } + public SizeTVectorVector put(SizeTVector value) { + if (size() != 1) { resize(1); } + return put(0, value); + } + public SizeTVectorVector put(SizeTVector ... array) { + if (size() != array.length) { resize(array.length); } + for (int i = 0; i < array.length; i++) { + put(i, array[i]); + } + return this; + } +} + diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SizesAndStrides.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SizesAndStrides.java index 41e93f17f5b..60f5606821b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SizesAndStrides.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SizesAndStrides.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Slice.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Slice.java index 3d17fbff75b..2f3d17dd305 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Slice.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Slice.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -34,13 +35,13 @@ public class Slice extends Pointer { } public Slice( - @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional start_index, - @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional stop_index, - @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional step_index) { super((Pointer)null); allocate(start_index, stop_index, step_index); } + @ByVal(nullValue = "std::optional(c10::nullopt)") SymIntOptional start_index, + @ByVal(nullValue = "std::optional(c10::nullopt)") SymIntOptional stop_index, + @ByVal(nullValue = "std::optional(c10::nullopt)") SymIntOptional step_index) { super((Pointer)null); allocate(start_index, stop_index, step_index); } private native void allocate( - @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional start_index, - @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional stop_index, - @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional step_index); + @ByVal(nullValue = "std::optional(c10::nullopt)") SymIntOptional 
start_index, + @ByVal(nullValue = "std::optional(c10::nullopt)") SymIntOptional stop_index, + @ByVal(nullValue = "std::optional(c10::nullopt)") SymIntOptional step_index); public Slice() { super((Pointer)null); allocate(); } private native void allocate(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SliceExpr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SliceExpr.java index 6ac128d68ff..bcb839515ac 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SliceExpr.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SliceExpr.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,8 +25,8 @@ public class SliceExpr extends Expr { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ public SliceExpr(Pointer p) { super(p); } - public SliceExpr(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public SliceExpr(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @ByVal ExprMaybe start(); public native @ByVal ExprMaybe end(); public native @ByVal ExprMaybe step(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SliceValue.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SliceValue.java index 7a0ac49bdec..8c785e3e13c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SliceValue.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SliceValue.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SlotCursor.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SlotCursor.java index 4434397bb6a..28f91c47949 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SlotCursor.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SlotCursor.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git 
a/pytorch/src/gen/java/org/bytedeco/pytorch/SmoothL1LossImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SmoothL1LossImpl.java index f1248ab6a61..10d4d6dcb8b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SmoothL1LossImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SmoothL1LossImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -25,7 +26,7 @@ * element-wise error falls below beta and an L1 term otherwise. * It is less sensitive to outliers than the {@code MSELoss} and in some cases * prevents exploding gradients (e.g. see the paper {@code Fast R-CNN} by Ross - * Girshick). See https://pytorch.org/docs/master/nn.html#torch.nn.SmoothL1Loss + * Girshick). See https://pytorch.org/docs/main/nn.html#torch.nn.SmoothL1Loss * to learn about the exact behavior of this module. * * See the documentation for {@code torch::nn::SmoothL1LossOptions} class to learn diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SmoothL1LossImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SmoothL1LossImplCloneable.java index 1a110aeb13f..c5cdb423dcd 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SmoothL1LossImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SmoothL1LossImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class SmoothL1LossImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SmoothL1LossOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SmoothL1LossOptions.java index d346352ca1e..e8662fe4443 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SmoothL1LossOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SmoothL1LossOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftMarginLossImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftMarginLossImpl.java index 946407d682e..49f11259f03 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftMarginLossImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftMarginLossImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,7 +25,7 @@ /** Creates a criterion that optimizes a two-class classification * logistic loss between input tensor :math:{@code x} and target tensor :math:{@code y} * (containing 1 or -1). - * See https://pytorch.org/docs/master/nn.html#torch.nn.SoftMarginLoss to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.SoftMarginLoss to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::SoftMarginLossOptions} class to learn diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftMarginLossImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftMarginLossImplCloneable.java index b0ca1eb8581..4448b8bef90 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftMarginLossImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftMarginLossImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class SoftMarginLossImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftMarginLossOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftMarginLossOptions.java index ca65ac42fdc..1a0ecff27e4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftMarginLossOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftMarginLossOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Softmax2dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Softmax2dImpl.java index 69a87bcb6e5..af7d9d85277 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Softmax2dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Softmax2dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Softmax2d ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies the Softmax2d function element-wise. 
- * See https://pytorch.org/docs/master/nn.html#torch.nn.Softmax2d to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.Softmax2d to learn * about the exact behavior of this module. */ @Namespace("torch::nn") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class Softmax2dImpl extends Softmax2dImplCloneable { diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Softmax2dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Softmax2dImplCloneable.java index 348109edafb..66c8fda041d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Softmax2dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Softmax2dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class Softmax2dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftmaxFuncOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftmaxFuncOptions.java index 1e022aab676..20f8a99d86d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftmaxFuncOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftmaxFuncOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftmaxImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftmaxImpl.java index 81e6488b75e..86e39e4b55d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftmaxImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftmaxImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Softmax 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies the Softmax function. - * See https://pytorch.org/docs/master/nn.html#torch.nn.Softmax to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.Softmax to learn * about the exact behavior of this module. * * See the documentation for {@code torch::nn::SoftmaxOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftmaxImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftmaxImplCloneable.java index 09ec3d76ded..1cd32a3be62 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftmaxImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftmaxImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class SoftmaxImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftmaxOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftmaxOptions.java index 7043899e71b..718b4907fe4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftmaxOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftmaxOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftminFuncOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftminFuncOptions.java index 30a71102077..568cb22b591 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftminFuncOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftminFuncOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git 
a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftminImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftminImpl.java index 588200ee4a7..bf544b9edcc 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftminImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftminImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Softmin ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies the Softmin function element-wise. - * See https://pytorch.org/docs/master/nn.html#torch.nn.Softmin to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.Softmin to learn * about the exact behavior of this module. * * See the documentation for {@code torch::nn::SoftminOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftminImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftminImplCloneable.java index 9e001c47610..edc274ff3b7 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftminImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftminImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class SoftminImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftminOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftminOptions.java index f3ff8025ba4..dd868aa875b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftminOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftminOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftplusImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftplusImpl.java index d862f321ded..ef053c0f73e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftplusImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftplusImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Softplus ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies softplus over a given input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.Softplus to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.Softplus to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::SoftplusOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftplusImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftplusImplCloneable.java index 14e360fd8d1..75bed7ee51f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftplusImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftplusImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class SoftplusImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftplusOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftplusOptions.java index c7d54409363..6b02f531511 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftplusOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftplusOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftshrinkImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftshrinkImpl.java index 916f0f2470d..4c72f586c17 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftshrinkImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftshrinkImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Softshrink ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies the soft shrinkage function element-wise. 
- * See https://pytorch.org/docs/master/nn.html#torch.nn.Softshrink to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.Softshrink to learn * about the exact behavior of this module. * * See the documentation for {@code torch::nn::SoftshrinkOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftshrinkImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftshrinkImplCloneable.java index d56e6f52bfb..bdd2f5e5af2 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftshrinkImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftshrinkImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class SoftshrinkImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftshrinkOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftshrinkOptions.java index 8906bac6358..02833c2e22b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftshrinkOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftshrinkOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftsignImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftsignImpl.java index 21821550f3a..b4c6d6ae955 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftsignImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftsignImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Softsign 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies Softsign over a given input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.Softsign to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.Softsign to learn * about the exact behavior of this module. */ @Namespace("torch::nn") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class SoftsignImpl extends SoftsignImplCloneable { diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftsignImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftsignImplCloneable.java index 50f209625d9..71e223419ef 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SoftsignImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SoftsignImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class SoftsignImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Source.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Source.java index 116eebdd414..fe1c73ab39a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Source.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Source.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -42,13 +43,13 @@ public enum CopiesString { COPIES_STRING(0), DONT_COPY(1); public Source( @StringView BytePointer text_view, - @ByVal(nullValue = "c10::optional(c10::nullopt)") StringOptional filename, + @ByVal(nullValue = "std::optional(c10::nullopt)") StringOptional filename, @Cast("size_t") long starting_line_no/*=0*/, @SharedPtr SourceRangeUnpickler gen_ranges/*=nullptr*/, CopiesString copies_str/*=torch::jit::Source::COPIES_STRING*/) { super((Pointer)null); allocate(text_view, filename, starting_line_no, gen_ranges, copies_str); } private native void allocate( @StringView BytePointer text_view, - @ByVal(nullValue = "c10::optional(c10::nullopt)") StringOptional filename, + @ByVal(nullValue = "std::optional(c10::nullopt)") StringOptional filename, @Cast("size_t") long starting_line_no/*=0*/, @SharedPtr SourceRangeUnpickler gen_ranges/*=nullptr*/, CopiesString 
copies_str/*=torch::jit::Source::COPIES_STRING*/); @@ -58,13 +59,13 @@ private native void allocate( @StringView BytePointer text_view); public Source( @StringView String text_view, - @ByVal(nullValue = "c10::optional(c10::nullopt)") StringOptional filename, + @ByVal(nullValue = "std::optional(c10::nullopt)") StringOptional filename, @Cast("size_t") long starting_line_no/*=0*/, @SharedPtr SourceRangeUnpickler gen_ranges/*=nullptr*/, @Cast("torch::jit::Source::CopiesString") int copies_str/*=torch::jit::Source::COPIES_STRING*/) { super((Pointer)null); allocate(text_view, filename, starting_line_no, gen_ranges, copies_str); } private native void allocate( @StringView String text_view, - @ByVal(nullValue = "c10::optional(c10::nullopt)") StringOptional filename, + @ByVal(nullValue = "std::optional(c10::nullopt)") StringOptional filename, @Cast("size_t") long starting_line_no/*=0*/, @SharedPtr SourceRangeUnpickler gen_ranges/*=nullptr*/, @Cast("torch::jit::Source::CopiesString") int copies_str/*=torch::jit::Source::COPIES_STRING*/); @@ -75,12 +76,12 @@ private native void allocate( public Source( @ByVal StringCordView str, - @ByVal(nullValue = "c10::optional(c10::nullopt)") StringOptional filename, + @ByVal(nullValue = "std::optional(c10::nullopt)") StringOptional filename, @Cast("size_t") long starting_line_no/*=0*/, @SharedPtr SourceRangeUnpickler gen_ranges/*=nullptr*/) { super((Pointer)null); allocate(str, filename, starting_line_no, gen_ranges); } private native void allocate( @ByVal StringCordView str, - @ByVal(nullValue = "c10::optional(c10::nullopt)") StringOptional filename, + @ByVal(nullValue = "std::optional(c10::nullopt)") StringOptional filename, @Cast("size_t") long starting_line_no/*=0*/, @SharedPtr SourceRangeUnpickler gen_ranges/*=nullptr*/); public Source( diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SourceLocation.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SourceLocation.java index e870bdc2ae4..4f26a1e7d69 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SourceLocation.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SourceLocation.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SourceRange.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SourceRange.java index 95bcd6b0fd2..f311f333718 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SourceRange.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SourceRange.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; 
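Note: throughout this patch the native nullValue strings move from c10::optional to std::optional, but the Java-side helper classes (DeviceOptional, StringOptional, SizeTOptional, ...) keep their names, so existing caller code continues to compile unchanged. A minimal usage sketch for the regenerated clone() overloads follows; the Device(String), DeviceOptional(Device), and SoftmaxImpl(long) constructors are assumed from the existing presets and are not part of this hunk.

// Hypothetical example, not part of the generated sources in this patch.
import org.bytedeco.pytorch.*;

public class CloneOnDeviceExample {
    public static void main(String[] args) {
        SoftmaxImpl softmax = new SoftmaxImpl(1);                       // assumed SoftmaxImpl(long dim) constructor
        DeviceOptional device = new DeviceOptional(new Device("cpu"));  // assumed helper constructors
        Module copyOnDevice = softmax.clone(device);                    // overload regenerated here with std::optional
        Module copy = softmax.clone();                                  // no-device overload, unchanged for callers
        System.out.println(copyOnDevice + " " + copy);
    }
}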
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SourceRangeHasher.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SourceRangeHasher.java index 9f47c225ba4..09401f403e4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SourceRangeHasher.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SourceRangeHasher.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SourceRangeOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SourceRangeOptional.java index 5d868443ed6..f33802ea16c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SourceRangeOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SourceRangeOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class SourceRangeOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SourceRangeUnpickler.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SourceRangeUnpickler.java index f05f7e61c7d..9555bc21626 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SourceRangeUnpickler.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SourceRangeUnpickler.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SpecialFormValue.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SpecialFormValue.java index ab126928b51..cd054c97058 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SpecialFormValue.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SpecialFormValue.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SplitUntil32Bit.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SplitUntil32Bit.java index 0ef26c3ee96..59b25304296 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SplitUntil32Bit.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SplitUntil32Bit.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StackEntry.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StackEntry.java index 3f293ead7a6..49ecc29bd75 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StackEntry.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StackEntry.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; 
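Similarly, the new SizeTVectorVector container generated earlier in this patch can be exercised directly from Java. A small sketch follows, assuming the SizeTVector varargs constructor from the existing presets; the push_back(), get(), and size() methods used here are the ones declared in the new generated file above.

// Hypothetical example, not part of the generated sources in this patch.
import org.bytedeco.pytorch.*;

public class SizeTVectorVectorExample {
    public static void main(String[] args) {
        SizeTVectorVector shapes = new SizeTVectorVector();
        shapes.push_back(new SizeTVector(2, 3));      // assumed SizeTVector(long...) constructor
        shapes.push_back(new SizeTVector(4, 5, 6));
        // Prints "2 3": two inner vectors, then element 1 of the first inner vector.
        System.out.println(shapes.size() + " " + shapes.get(0).get(1));
    }
}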
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Starred.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Starred.java index c6bcb28286b..3280065ddf8 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Starred.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Starred.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,8 +25,8 @@ public class Starred extends Expr { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ public Starred(Pointer p) { super(p); } - public Starred(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public Starred(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @ByVal Expr expr(); public static native @ByVal Starred create(@Const @ByRef SourceRange range, @Const @ByRef Expr expr); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StashTorchDispatchModeGuard.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StashTorchDispatchModeGuard.java new file mode 100644 index 00000000000..5b5fde5a728 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StashTorchDispatchModeGuard.java @@ -0,0 +1,41 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +@Namespace("torch::torch_dispatch_mode") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class StashTorchDispatchModeGuard extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public StashTorchDispatchModeGuard(Pointer p) { super(p); } + /** Native array allocator. Access with {@link Pointer#position(long)}. 
*/ + public StashTorchDispatchModeGuard(long size) { super((Pointer)null); allocateArray(size); } + private native void allocateArray(long size); + @Override public StashTorchDispatchModeGuard position(long position) { + return (StashTorchDispatchModeGuard)super.position(position); + } + @Override public StashTorchDispatchModeGuard getPointer(long i) { + return new StashTorchDispatchModeGuard((Pointer)this).offsetAddress(i); + } + + public StashTorchDispatchModeGuard() { super((Pointer)null); allocate(); } + private native void allocate(); + + public native @Const @SharedPtr("c10::impl::PyObject_TorchDispatchMode") @ByRef PyObject_TorchDispatchMode get_cur_mode(); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StashTorchDispatchStackGuard.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StashTorchDispatchStackGuard.java new file mode 100644 index 00000000000..dfa7cb3b373 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StashTorchDispatchStackGuard.java @@ -0,0 +1,39 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +@Namespace("torch::torch_dispatch_mode") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class StashTorchDispatchStackGuard extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public StashTorchDispatchStackGuard(Pointer p) { super(p); } + /** Native array allocator. Access with {@link Pointer#position(long)}. 
*/ + public StashTorchDispatchStackGuard(long size) { super((Pointer)null); allocateArray(size); } + private native void allocateArray(long size); + @Override public StashTorchDispatchStackGuard position(long position) { + return (StashTorchDispatchStackGuard)super.position(position); + } + @Override public StashTorchDispatchStackGuard getPointer(long i) { + return new StashTorchDispatchStackGuard((Pointer)this).offsetAddress(i); + } + + public StashTorchDispatchStackGuard() { super((Pointer)null); allocate(); } + private native void allocate(); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StepLR.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StepLR.java index b840224646c..8f16cdbc596 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StepLR.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StepLR.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Stmt.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Stmt.java index e172113c7c6..1ba3f1b0270 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Stmt.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Stmt.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -28,6 +29,6 @@ public class Stmt extends TreeView { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public Stmt(Pointer p) { super(p); } - public Stmt(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public Stmt(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StmtList.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StmtList.java index ebceea7551d..cc6c1488116 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StmtList.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StmtList.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -25,8 +26,8 @@ public class StmtList extends TreeView { public StmtList(Pointer p) { super(p); } - public StmtList(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public StmtList(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @ByVal @Cast("torch::jit::List::iterator*") StmtListIterator begin(); public native @ByVal @Cast("torch::jit::List::iterator*") StmtListIterator end(); public native @Cast("bool") boolean empty(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StmtListIterator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StmtListIterator.java index fee641ff6fa..c508484805b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StmtListIterator.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StmtListIterator.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,8 +25,8 @@ public class StmtListIterator extends Pointer { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public StmtListIterator(Pointer p) { super(p); } - public StmtListIterator(@ByVal @Cast("torch::jit::TreeList::const_iterator*") TreeRef it) { super((Pointer)null); allocate(it); } - private native void allocate(@ByVal @Cast("torch::jit::TreeList::const_iterator*") TreeRef it); + public StmtListIterator(@ByVal @Cast("torch::jit::TreeList::const_iterator*") Tree it) { super((Pointer)null); allocate(it); } + private native void allocate(@ByVal @Cast("torch::jit::TreeList::const_iterator*") Tree it); public native @Cast("bool") @Name("operator !=") boolean notEquals(@Const @ByRef StmtListIterator rhs); public native @Cast("bool") @Name("operator ==") boolean equals(@Const @ByRef StmtListIterator rhs); public native @ByVal @Name("operator *") Stmt multiply(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Storage.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Storage.java index 8bdd697a348..3dd78c7e529 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Storage.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Storage.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -59,8 +60,8 @@ public static class unsafe_borrow_t extends Pointer { public Storage() { super((Pointer)null); allocate(); } private native void allocate(); - public Storage(@ByVal StorageImplPtr ptr) { super((Pointer)null); allocate(ptr); } - private native void allocate(@ByVal StorageImplPtr ptr); + public Storage(@IntrusivePtr("c10::StorageImpl") @Cast({"", "c10::intrusive_ptr&"}) StorageImpl ptr) { super((Pointer)null); allocate(ptr); } + private native void allocate(@IntrusivePtr("c10::StorageImpl") @Cast({"", "c10::intrusive_ptr&"}) StorageImpl ptr); // Allocates memory buffer using given allocator and creates a storage with it public Storage( @@ -146,7 +147,7 @@ private native void allocate( public native @NoException(true) StorageImpl unsafeGetStorageImpl(); - public native @ByVal WeakStorage getWeakStorageImpl(); + public native @IntrusivePtr("c10::StorageImpl") @Cast({"", "c10::intrusive_ptr&"}) StorageImpl getWeakStorageImpl(); public native @Cast("bool") @Name("operator bool") boolean asBoolean(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StorageImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StorageImpl.java index 08dcc314221..0a2d5ee41ee 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StorageImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StorageImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -60,7 +61,7 @@ public 
StorageImpl( @StdMove DataPtr data_ptr, Allocator allocator, @Cast("bool") boolean resizable) { super((Pointer)null); allocate(arg0, size_bytes, data_ptr, allocator, resizable); } - private native void allocate( + @IntrusivePtr @Name("c10::make_intrusive") private native void allocate( @ByVal use_byte_size_t arg0, @ByVal SymInt size_bytes, @StdMove DataPtr data_ptr, @@ -72,7 +73,7 @@ public StorageImpl( @Const @ByRef SymInt size_bytes, Allocator allocator, @Cast("bool") boolean resizable) { super((Pointer)null); allocate(arg0, size_bytes, allocator, resizable); } - private native void allocate( + @IntrusivePtr @Name("c10::make_intrusive") private native void allocate( @ByVal use_byte_size_t arg0, @Const @ByRef SymInt size_bytes, Allocator allocator, @@ -101,9 +102,12 @@ private native void allocate( public native @Cast("bool") boolean resizable(); + public native @StdMove DataPtr data_ptr(); + public native @ByRef DataPtr mutable_data_ptr(); - public native @StdMove DataPtr data_ptr(); + // Returns the data_ptr. Bypasses all checks. + public native @ByRef DataPtr _mutable_data_ptr_no_checks(); // Returns the previous data_ptr public native @StdMove DataPtr set_data_ptr(@StdMove DataPtr data_ptr); @@ -151,4 +155,8 @@ public native void UniqueStorageShareExternalPointer( public native @Cast("bool") boolean received_cuda(); public native @Cast("c10::impl::PyObjectSlot*") Pointer pyobj_slot(); + + public native void set_throw_on_mutable_data_ptr(); + + public native void set_warn_deprecated_on_mutable_data_ptr(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StorageImplPtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StorageImplPtr.java deleted file mode 100644 index 777055f6ad6..00000000000 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StorageImplPtr.java +++ /dev/null @@ -1,154 +0,0 @@ -// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE - -package org.bytedeco.pytorch; - -import org.bytedeco.pytorch.Allocator; -import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; -import org.bytedeco.pytorch.Module; -import org.bytedeco.javacpp.annotation.Cast; -import java.nio.*; -import org.bytedeco.javacpp.*; -import org.bytedeco.javacpp.annotation.*; - -import static org.bytedeco.javacpp.presets.javacpp.*; -import static org.bytedeco.openblas.global.openblas_nolapack.*; -import static org.bytedeco.openblas.global.openblas.*; - -import static org.bytedeco.pytorch.global.torch.*; - - -@Name("c10::intrusive_ptr") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class StorageImplPtr extends Pointer { - static { Loader.load(); } - /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ - public StorageImplPtr(Pointer p) { super(p); } - /** Native array allocator. Access with {@link Pointer#position(long)}. 
*/ - public StorageImplPtr(long size) { super((Pointer)null); allocateArray(size); } - private native void allocateArray(long size); - @Override public StorageImplPtr position(long position) { - return (StorageImplPtr)super.position(position); - } - @Override public StorageImplPtr getPointer(long i) { - return new StorageImplPtr((Pointer)this).offsetAddress(i); - } - - - public StorageImplPtr() { super((Pointer)null); allocate(); } - @NoException(true) private native void allocate(); - - public StorageImplPtr(@ByVal @Cast("std::nullptr_t*") PointerPointer arg0) { super((Pointer)null); allocate(arg0); } - @NoException(true) private native void allocate(@ByVal @Cast("std::nullptr_t*") PointerPointer arg0); - - // This constructor will not increase the ref counter for you. - // We use the tagged dispatch mechanism to explicitly mark this constructor - // to not increase the refcount - public StorageImplPtr(StorageImpl target, @ByVal DontIncreaseRefcount arg1) { super((Pointer)null); allocate(target, arg1); } - @NoException(true) private native void allocate(StorageImpl target, @ByVal DontIncreaseRefcount arg1); - - - - public StorageImplPtr(@ByRef(true) StorageImplPtr rhs) { super((Pointer)null); allocate(rhs); } - @NoException(true) private native void allocate(@ByRef(true) StorageImplPtr rhs); - - public native @ByRef @Name("operator =") @NoException(true) StorageImplPtr put(@ByRef(true) StorageImplPtr rhs); - - // Assignment is implemented using copy and swap. That's safe for self - // assignment. - // NOLINTNEXTLINE(bugprone-unhandled-self-assignment) - - public native @NoException(true) StorageImpl get(); - - public native @ByRef @Name("operator *") @NoException(true) StorageImpl multiply(); - - public native @Name("operator ->") @NoException(true) StorageImpl access(); - - public native @Cast("bool") @Name("operator bool") @NoException(true) boolean asBoolean(); - - public native @NoException(true) void reset(); - - public native @NoException(true) void swap(@ByRef StorageImplPtr rhs); - - // We do a lot of null-pointer checks in our code, good to have this be cheap. - public native @Cast("bool") @NoException(true) boolean defined(); - - public native @Cast("uint32_t") @NoException(true) int use_count(); - - public native @Cast("uint32_t") @NoException(true) int weak_use_count(); - - public native @Cast("bool") @NoException(true) boolean unique(); - - /** - * Returns an owning (!) pointer to the underlying object and makes the - * intrusive_ptr instance invalid. That means the refcount is not decreased. - * You *must* put the returned pointer back into a intrusive_ptr using - * intrusive_ptr::reclaim(ptr) to properly destruct it. - * This is helpful for C APIs. - */ - public native @NoException(true) StorageImpl release(); - - /** - * Takes an owning pointer to TTarget* and creates an intrusive_ptr that takes - * over ownership. That means the refcount is not increased. - * This is the counter-part to intrusive_ptr::release() and the pointer - * passed in *must* have been created using intrusive_ptr::release(). - */ - public static native @ByVal StorageImplPtr reclaim(StorageImpl owning_ptr); - - /** - * Takes an owning pointer to TTarget* and creates an intrusive_ptr - * representing a new reference, i.e. the raw pointer retains - * ownership. - */ - public static native @ByVal StorageImplPtr reclaim_copy(StorageImpl owning_ptr); - - /** - * Allocate a heap object with args and wrap it inside a intrusive_ptr and - * incref. 
This is a helper function to let make_intrusive() access private - * intrusive_ptr constructors. - */ - - /** - * Turn a new instance of TTarget (e.g., literally allocated - * using new TTarget(...) into an intrusive_ptr. If possible, - * use intrusive_ptr::make instead which statically guarantees - * that the allocation was done properly. - * - * At the moment, the only reason this method exists is because - * pybind11 holder types expect to be able to allocate in - * this way (because pybind11 handles the new allocation itself). - */ - public static native @ByVal StorageImplPtr unsafe_steal_from_new(StorageImpl raw_ptr); - - /** - * Turn an instance of TTarget that should not be reference counted - * (e.g., allocated into an arena with placement new) into an - * intrusive_ptr. This is gratuitously unsafe and should only be - * used if you can guarantee that the pointer will not escape and be - * refcounted as normal. - * - * {@code expected_decrefs} is a debugging parameter: it indicates the - * number of strong owners the intrusive_ptr_target in question is - * expected to get. In most use cases, this will likely be 1. - * - * The reason this method exists is for manually sharing - * StorageImpls across Tensors in the static runtime. It needs - * access to private intrusive_ptr members so that the refcounts can - * be initialized to custom values. - */ - public static native @ByVal StorageImplPtr unsafe_adapt_non_heap_allocated( - StorageImpl raw_ptr, - @Cast("uint32_t") int expected_decrefs); - - /** - * Turn a **non-owning raw pointer** to an intrusive_ptr. It is - * the moral equivalent of enable_shared_from_this on a shared pointer. - * - * This method is only valid for objects that are already live. If - * you are looking for the moral equivalent of unique_ptr(T*) - * constructor, see steal_from_new. 
- * - * TODO: https://github.com/pytorch/pytorch/issues/56482 - */ - public static native @ByVal StorageImplPtr unsafe_reclaim_from_nonowning(StorageImpl raw_ptr); -} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StorageType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StorageType.java index 7ccd9fcc4b6..2a701daf3a5 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StorageType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StorageType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -25,7 +26,7 @@ public class StorageType extends Type { public native @Cast("bool") boolean equals(@Const @ByRef Type rhs); public native @StdString BytePointer str(); - public native @StdString BytePointer annotation_str_impl(@ByVal(nullValue = "c10::TypePrinter(nullptr)") TypePrinter printer); + public native @StdString BytePointer annotation_str_impl(@Const @ByRef(nullValue = "c10::TypePrinter(nullptr)") TypePrinter printer); public native @StdString BytePointer annotation_str_impl(); @MemberGetter public static native TypeKind Kind(); // global singleton diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StorageTypePtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StorageTypePtr.java index 5c50f023a2a..a5975211d5a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StorageTypePtr.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StorageTypePtr.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Store.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Store.java new file mode 100644 index 00000000000..2a1595783b0 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Store.java @@ -0,0 +1,104 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +@Namespace("c10d") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class Store 
extends CustomClassHolder { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public Store(Pointer p) { super(p); } + + @MemberGetter public static native @Const @ByRef Milliseconds kDefaultTimeout(); + @MemberGetter public static native @Const @ByRef Milliseconds kNoTimeout(); + + public native void set(@StdString BytePointer key, @StdString BytePointer value); + public native void set(@StdString String key, @StdString String value); + + public native void set( + @StdString BytePointer key, + @Cast("const std::vector*") @ByRef ByteVector value); + public native void set( + @StdString String key, + @Cast("const std::vector*") @ByRef ByteVector value); + + public native @StdString BytePointer compareSet( + @StdString BytePointer key, + @StdString BytePointer currentValue, + @StdString BytePointer newValue); + public native @StdString String compareSet( + @StdString String key, + @StdString String currentValue, + @StdString String newValue); + + public native @ByVal @Cast("std::vector*") ByteVector compareSet( + @StdString BytePointer key, + @Cast("const std::vector*") @ByRef ByteVector currentValue, + @Cast("const std::vector*") @ByRef ByteVector newValue); + public native @ByVal @Cast("std::vector*") ByteVector compareSet( + @StdString String key, + @Cast("const std::vector*") @ByRef ByteVector currentValue, + @Cast("const std::vector*") @ByRef ByteVector newValue); + + public native @StdString BytePointer get_to_str(@StdString BytePointer key); + public native @StdString String get_to_str(@StdString String key); + + public native @ByVal @Cast("std::vector*") ByteVector get(@StdString BytePointer key); + public native @ByVal @Cast("std::vector*") ByteVector get(@StdString String key); + + public native @Cast("int64_t") long add(@StdString BytePointer key, @Cast("int64_t") long value); + public native @Cast("int64_t") long add(@StdString String key, @Cast("int64_t") long value); + + public native @Cast("bool") boolean deleteKey(@StdString BytePointer key); + public native @Cast("bool") boolean deleteKey(@StdString String key); + + public native @Cast("bool") boolean check(@Const @ByRef StringVector keys); + + public native @Cast("int64_t") long getNumKeys(); + + public native @Name("wait") void _wait(@Const @ByRef StringVector keys); + + public native @Name("wait") void _wait( + @Const @ByRef StringVector keys, + @Const @ByRef Milliseconds timeout); + + public native @Const @ByRef @NoException(true) Milliseconds getTimeout(); + + public native void setTimeout(@Const @ByRef Milliseconds timeout); + + // watchKey() is deprecated and no longer supported. 
+ + + public native void append( + @StdString BytePointer key, + @Cast("const std::vector*") @ByRef ByteVector value); + public native void append( + @StdString String key, + @Cast("const std::vector*") @ByRef ByteVector value); + + public native @Cast("std::vector*") @StdVector ByteVector multiGet( + @Const @ByRef StringVector keys); + + public native void multiSet( + @Const @ByRef StringVector keys, + @Cast("std::vector*") @StdVector ByteVector values); + + // Returns true if this store support append, multiGet and multiSet + public native @Cast("bool") boolean hasExtendedApi(); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StoreTimeoutGuard.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StoreTimeoutGuard.java new file mode 100644 index 00000000000..d99100be674 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StoreTimeoutGuard.java @@ -0,0 +1,44 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +/* +StoreTimeoutGuard is a RAII guard that will set the store timeout and restore it +when it returns. +*/ +@Namespace("c10d") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class StoreTimeoutGuard extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ + public StoreTimeoutGuard(Pointer p) { super(p); } + + public StoreTimeoutGuard( + @ByRef Store store, + @Const @ByRef Milliseconds timeout) { super((Pointer)null); allocate(store, timeout); } + private native void allocate( + @ByRef Store store, + @Const @ByRef Milliseconds timeout); + + /* Disabling copy and move semantics */ + + + + +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Stream.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Stream.java index 9f95e7ed7d6..6314ef88d90 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Stream.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Stream.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StreamData3.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StreamData3.java index a3683598e80..272b19a7126 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StreamData3.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StreamData3.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StreamObjType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StreamObjType.java index 7eba755468e..a3c8cbe7a05 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StreamObjType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StreamObjType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StreamObjTypePtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StreamObjTypePtr.java index b4f3a8cdae6..25ae9cff3da 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StreamObjTypePtr.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StreamObjTypePtr.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static 
org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StreamOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StreamOptional.java index 90573c569f4..2387b049291 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StreamOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StreamOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class StreamOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StreamSampler.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StreamSampler.java index f064a1dd339..d3a7dd283cd 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StreamSampler.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StreamSampler.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -35,7 +36,7 @@ public class StreamSampler extends BatchSizeSampler { private native void allocate(@Cast("size_t") long epoch_size); /** Resets the internal state of the sampler. 
*/ - public native void reset(@ByVal(nullValue = "c10::optional(c10::nullopt)") SizeTOptional new_size); + public native void reset(@ByVal(nullValue = "std::optional(c10::nullopt)") SizeTOptional new_size); public native void reset(); /** Returns a {@code BatchSize} object with the number of elements to fetch in the diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Stride.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Stride.java index d8c1bc41e7c..4c79ff69448 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Stride.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Stride.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StrideArrayRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StrideArrayRef.java index 1322077a217..36f26d78ca5 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StrideArrayRef.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StrideArrayRef.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StrideOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StrideOptional.java index ef4aea11ab6..bc6000ebec1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StrideOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StrideOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class StrideOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StrideVaryingShape.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StrideVaryingShape.java index eaa86bd34df..49b83e8a1e1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StrideVaryingShape.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StrideVaryingShape.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -30,8 +31,8 @@ public class StrideVaryingShape extends Pointer { public StrideVaryingShape(@ByVal StrideArrayRef vec) { super((Pointer)null); allocate(vec); } private native void allocate(@ByVal StrideArrayRef vec); - public StrideVaryingShape(@ByVal(nullValue = "c10::optional(c10::nullopt)") SizeTOptional size) { super((Pointer)null); allocate(size); } - private native void allocate(@ByVal(nullValue = "c10::optional(c10::nullopt)") SizeTOptional size); + public StrideVaryingShape(@ByVal(nullValue = "std::optional(c10::nullopt)") SizeTOptional size) { super((Pointer)null); allocate(size); } + private native void allocate(@ByVal(nullValue = "std::optional(c10::nullopt)") SizeTOptional size); public StrideVaryingShape() { super((Pointer)null); allocate(); } private native void allocate(); @@ -47,9 +48,9 @@ public class StrideVaryingShape extends Pointer { public native @ByVal SizeTOptional size(); - public native @Cast("const c10::optional::ListOfOptionalElements>*") @ByRef Pointer sizes(); + public native @Cast("const std::optional::ListOfOptionalElements>*") @ByRef Pointer sizes(); - + public native @ByVal StrideVaryingShape merge(@Const @ByRef StrideVaryingShape other); public native @ByVal StrideVectorOptional concrete_sizes(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StrideVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StrideVector.java index cf2049fc741..ff6780d7632 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StrideVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StrideVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StrideVectorOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StrideVectorOptional.java index 151d588f4b1..b252d16c0ff 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StrideVectorOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StrideVectorOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import 
org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class StrideVectorOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StringAnyModuleDict.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringAnyModuleDict.java index 8706324bdde..9a8201b2ba4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StringAnyModuleDict.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StringAnyModuleDict.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StringAnyModuleDictItem.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringAnyModuleDictItem.java index f09249aa8d4..b366dcfb7d1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StringAnyModuleDictItem.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StringAnyModuleDictItem.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StringAnyModuleDictItemVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringAnyModuleDictItemVector.java index e23412834ab..1866a4d23a1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StringAnyModuleDictItemVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StringAnyModuleDictItemVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git 
a/pytorch/src/gen/java/org/bytedeco/pytorch/StringAnyModulePair.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringAnyModulePair.java index b111dc441b7..70d22d684df 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StringAnyModulePair.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StringAnyModulePair.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StringAnyModuleVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringAnyModuleVector.java index c30d8229297..27a890778d5 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StringAnyModuleVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StringAnyModuleVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StringArrayRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringArrayRef.java index 1506e6f6dbb..cda27149dc9 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StringArrayRef.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StringArrayRef.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StringBoolMap.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringBoolMap.java index 2c0fc4d0951..492d9dc1c20 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StringBoolMap.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StringBoolMap.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static 
org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StringCordView.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringCordView.java index 467e294c0f1..4da061e4577 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StringCordView.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StringCordView.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StringGenericListDict.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringGenericListDict.java index a0f7542777a..60ad540294e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StringGenericListDict.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StringGenericListDict.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,40 +13,27 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@Name("c10::Dict") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@Name("c10::Dict") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class StringGenericListDict extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ public StringGenericListDict(Pointer p) { super(p); } - /** Native array allocator. Access with {@link Pointer#position(long)}. */ - public StringGenericListDict(long size) { super((Pointer)null); allocateArray(size); } - private native void allocateArray(long size); - @Override public StringGenericListDict position(long position) { - return (StringGenericListDict)super.position(position); - } - @Override public StringGenericListDict getPointer(long i) { - return new StringGenericListDict((Pointer)this).offsetAddress(i); - } /** * Creates an empty dict. */ - public StringGenericListDict() { super((Pointer)null); allocate(); } - private native void allocate(); /** * Create a generic dict with runtime type information. * This only works for c10::impl::GenericDict and is not part of the public API * but only supposed to be used internally by PyTorch. */ - - - public StringGenericListDict(@Const @ByRef StringGenericListDict arg0) { super((Pointer)null); allocate(arg0); } - private native void allocate(@Const @ByRef StringGenericListDict arg0); public native @ByRef @Name("operator =") StringGenericListDict put(@Const @ByRef StringGenericListDict arg0); /** @@ -61,13 +47,13 @@ public class StringGenericListDict extends Pointer { * Returns an iterator to the first element of the container. * If the container is empty, the returned iterator will be equal to end(). 
*/ - public native @ByVal @Cast("c10::Dict::iterator*") GenericDictIterator begin(); + public native @ByVal StringGenericListDictIterator begin(); /** * Returns an iterator to the element following the last element of the container. * This element acts as a placeholder; attempting to access it results in undefined behavior. */ - public native @ByVal @Cast("c10::Dict::iterator*") GenericDictIterator end(); + public native @ByVal StringGenericListDictIterator end(); /** * Checks if the container has no elements. @@ -105,7 +91,7 @@ public class StringGenericListDict extends Pointer { * May invalidate any references, pointers, or iterators referring to contained elements. * The iterator iter must be valid and dereferenceable. Thus the end() iterator (which is valid, but is not dereferenceable) cannot be used as a value for iter. */ - public native void erase(@ByVal @Cast("c10::Dict::iterator*") GenericDictIterator iter); + public native void erase(@ByVal StringGenericListDictIterator iter); /** * Removes the element with the given key, if it exists. @@ -129,8 +115,8 @@ public class StringGenericListDict extends Pointer { * @return Iterator to an element with key equivalent to key. * If no such element is found, past-the-end (see end()) iterator is returned. */ - public native @ByVal @Cast("c10::Dict::iterator*") GenericDictIterator find(@StdString BytePointer key); - public native @ByVal @Cast("c10::Dict::iterator*") GenericDictIterator find(@StdString String key); + public native @ByVal StringGenericListDictIterator find(@StdString BytePointer key); + public native @ByVal StringGenericListDictIterator find(@StdString String key); /** * Checks if there is an element with key equivalent to key in the container. diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StringGenericListDictIterator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringGenericListDictIterator.java new file mode 100644 index 00000000000..11987006a9c --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StringGenericListDictIterator.java @@ -0,0 +1,45 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + +@Name("c10::impl::DictIterator") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class StringGenericListDictIterator extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ + public StringGenericListDictIterator(Pointer p) { super(p); } + + // C++17 friendly std::iterator implementation + public native @ByRef @Name("operator =") StringGenericListDictIterator put(@Const @ByRef StringGenericListDictIterator rhs); + + public native @ByRef @Name("operator ++") StringGenericListDictIterator increment(); + + public native @ByVal @Name("operator ++") StringGenericListDictIterator increment(int arg0); + + public native @Const @ByRef @Name("operator *") GenericDictEntryRef multiply(); + + public native @Const @Name("operator ->") GenericDictEntryRef access(); + + + + private static native @Namespace @Cast("bool") @Name("operator ==") boolean equals(@Const @ByRef StringGenericListDictIterator lhs, @Const @ByRef StringGenericListDictIterator rhs); + public boolean equals(StringGenericListDictIterator rhs) { return equals(this, rhs); } + + private static native @Namespace @Cast("bool") @Name("operator !=") boolean notEquals(@Const @ByRef StringGenericListDictIterator lhs, @Const @ByRef StringGenericListDictIterator rhs); + public boolean notEquals(StringGenericListDictIterator rhs) { return notEquals(this, rhs); } +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StringIValueMap.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringIValueMap.java index 32057907f6d..c4bbc201f4b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StringIValueMap.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StringIValueMap.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StringIntMap.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringIntMap.java index 06c99b20c37..fcbd44376fc 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StringIntMap.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StringIntMap.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StringLiteral.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringLiteral.java index 2967eae4c49..4c7c847c4c8 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StringLiteral.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StringLiteral.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static 
org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,8 +25,8 @@ public class StringLiteral extends Expr { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ public StringLiteral(Pointer p) { super(p); } - public StringLiteral(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public StringLiteral(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @StdString BytePointer text(); public static native @ByVal StringLiteral create( @Const @ByRef SourceRange range, diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StringLongMap.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringLongMap.java index 9e4c938de80..ae9aa783994 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StringLongMap.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StringLongMap.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StringLongVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringLongVector.java index 126c94f8108..4f18ba37dec 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StringLongVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StringLongVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StringOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringOptional.java index 7f5ed349665..d0f761dfaae 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StringOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StringOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static 
org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class StringOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StringSet.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringSet.java index 61c649c66db..50e7df8a151 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StringSet.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StringSet.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StringSharedModuleDict.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringSharedModuleDict.java index db63d62880d..179b0d1676b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StringSharedModuleDict.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StringSharedModuleDict.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StringSharedModuleDictItem.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringSharedModuleDictItem.java index 1500915d00d..4f2565bba83 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StringSharedModuleDictItem.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StringSharedModuleDictItem.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StringSharedModuleDictItemVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringSharedModuleDictItemVector.java index 585587b12c7..e7d4f8079c9 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StringSharedModuleDictItemVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StringSharedModuleDictItemVector.java @@ -4,7 +4,6 @@ import 
org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StringSharedModulePair.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringSharedModulePair.java index eeb91d99859..81d3678a959 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StringSharedModulePair.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StringSharedModulePair.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StringSharedModuleVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringSharedModuleVector.java index 47462e9d65a..249614c553f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StringSharedModuleVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StringSharedModuleVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StringSizeTMap.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringSizeTMap.java index a77da2ca5c7..62775dee36d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StringSizeTMap.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StringSizeTMap.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StringStringMap.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringStringMap.java index 61fbcf4dd87..77212a07287 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StringStringMap.java +++ 
b/pytorch/src/gen/java/org/bytedeco/pytorch/StringStringMap.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StringTensorDict.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringTensorDict.java index e3e9e0f9b56..a5d1f5cb814 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StringTensorDict.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StringTensorDict.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StringTensorDictItem.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringTensorDictItem.java index d27b3aa2d4e..b021cf9136c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StringTensorDictItem.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StringTensorDictItem.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StringTensorDictItemVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringTensorDictItemVector.java index 8464daad7bc..8fde2762d09 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StringTensorDictItemVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StringTensorDictItemVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TreeRefStringMap.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringTensorMap.java similarity index 65% rename from 
pytorch/src/gen/java/org/bytedeco/pytorch/TreeRefStringMap.java rename to pytorch/src/gen/java/org/bytedeco/pytorch/StringTensorMap.java index 73ad005df39..f540b64db65 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TreeRefStringMap.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StringTensorMap.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,24 +13,25 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@Name("std::unordered_map") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class TreeRefStringMap extends Pointer { +@Name("std::map") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class StringTensorMap extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ - public TreeRefStringMap(Pointer p) { super(p); } - public TreeRefStringMap() { allocate(); } + public StringTensorMap(Pointer p) { super(p); } + public StringTensorMap() { allocate(); } private native void allocate(); - public native @Name("operator =") @ByRef TreeRefStringMap put(@ByRef TreeRefStringMap x); + public native @Name("operator =") @ByRef StringTensorMap put(@ByRef StringTensorMap x); public boolean empty() { return size() == 0; } public native long size(); - @Index public native @StdString BytePointer get(@ByRef TreeRef i); - public native TreeRefStringMap put(@ByRef TreeRef i, BytePointer value); - @ValueSetter @Index public native TreeRefStringMap put(@ByRef TreeRef i, @StdString String value); + @Index public native @ByRef Tensor get(@StdString BytePointer i); + public native StringTensorMap put(@StdString BytePointer i, Tensor value); public native void erase(@ByVal Iterator pos); public native @ByVal Iterator begin(); @@ -42,8 +42,8 @@ public Iterator() { } public native @Name("operator ++") @ByRef Iterator increment(); public native @Name("operator ==") boolean equals(@ByRef Iterator it); - public native @Name("operator *().first") @MemberGetter @ByRef @Const TreeRef first(); - public native @Name("operator *().second") @MemberGetter @StdString BytePointer second(); + public native @Name("operator *().first") @MemberGetter @StdString BytePointer first(); + public native @Name("operator *().second") @MemberGetter @ByRef @Const Tensor second(); } } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StringTensorPair.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringTensorPair.java index 81a2171566c..978dae070bc 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StringTensorPair.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StringTensorPair.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static 
org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StringTensorVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringTensorVector.java index 11587056be4..54e716cb842 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StringTensorVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StringTensorVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StringType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringType.java index 288c956e58f..04c514081e8 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StringType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StringType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -26,7 +27,7 @@ public class StringType extends Type { public native @Cast("bool") boolean equals(@Const @ByRef Type rhs); public native @StdString BytePointer str(); - public native @StdString BytePointer annotation_str_impl(@ByVal(nullValue = "c10::TypePrinter(nullptr)") TypePrinter printer); + public native @StdString BytePointer annotation_str_impl(@Const @ByRef(nullValue = "c10::TypePrinter(nullptr)") TypePrinter printer); public native @StdString BytePointer annotation_str_impl(); @MemberGetter public static native TypeKind Kind(); // global singleton diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StringTypePtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringTypePtr.java index 531a4bee46a..d66006a1249 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StringTypePtr.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StringTypePtr.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StringValueMap.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringValueMap.java index 64fbe7eeb0c..44ea048eba1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StringValueMap.java +++ 
b/pytorch/src/gen/java/org/bytedeco/pytorch/StringValueMap.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StringVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringVector.java index cc880cbe8d0..2d3f5cee454 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StringVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StringVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StringVectorOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringVectorOptional.java index 2b4fb659de5..6f817f97503 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StringVectorOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StringVectorOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class StringVectorOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StringViewOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringViewOptional.java index 5abb11ed648..a1b48a7697e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StringViewOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StringViewOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class StringViewOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StringViewVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringViewVector.java index a93a3c87621..8bec5a60d09 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StringViewVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StringViewVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StringViewVectorOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StringViewVectorOptional.java index 8c232cc09bb..0d9d1bbcfe1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StringViewVectorOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StringViewVectorOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class StringViewVectorOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/StrongTypePtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/StrongTypePtr.java index bd40e5e88c7..bd6b359f87f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/StrongTypePtr.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/StrongTypePtr.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -28,6 +29,6 @@ public class StrongTypePtr extends Pointer { public StrongTypePtr(Pointer p) { super(p); } - public native @SharedPtr CompilationUnit cu_(); public native StrongTypePtr cu_(CompilationUnit setter); + public native @SharedPtr("torch::jit::CompilationUnit") @ByRef CompilationUnit cu_(); public native StrongTypePtr cu_(CompilationUnit setter); public native @ByRef Type.TypePtr type_(); public native StrongTypePtr type_(Type.TypePtr setter); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Subscript.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Subscript.java index e1420624118..8fee6c3a803 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Subscript.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Subscript.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,8 +25,8 @@ public class Subscript extends Expr { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public Subscript(Pointer p) { super(p); } - public Subscript(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public Subscript(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @ByVal Expr value(); public native @ByVal ExprList subscript_exprs(); public static native @ByVal Subscript create( diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SugaredEnumClass.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SugaredEnumClass.java index 5ae3fb8c4c7..3081c021716 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SugaredEnumClass.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SugaredEnumClass.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SugaredTupleValue.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SugaredTupleValue.java index 66ef261fe64..d07087140ed 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SugaredTupleValue.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SugaredTupleValue.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -30,7 +31,7 @@ public class SugaredTupleValue extends SugaredValue { public native @ByVal SharedSugaredValueVector asTuple( @Const @ByRef SourceRange loc, @ByRef GraphFunction m, - @Const @ByRef(nullValue = "c10::optional{}") SizeTOptional size_hint); + @Const @ByRef(nullValue = "std::optional{}") SizeTOptional size_hint); public native @ByVal SharedSugaredValueVector asTuple( @Const @ByRef SourceRange loc, @ByRef GraphFunction m); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SugaredValue.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SugaredValue.java index fa788c64409..4d98383b332 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SugaredValue.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SugaredValue.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import 
org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -75,7 +76,7 @@ public native void setAttr( public native @ByVal SharedSugaredValueVector asTuple( @Const @ByRef SourceRange loc, @ByRef GraphFunction m, - @Const @ByRef(nullValue = "c10::optional{}") SizeTOptional size_hint); + @Const @ByRef(nullValue = "std::optional{}") SizeTOptional size_hint); public native @ByVal SharedSugaredValueVector asTuple( @Const @ByRef SourceRange loc, @ByRef GraphFunction m); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SwapSavedVariables.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SwapSavedVariables.java index 915246bdc66..edb8a9bffbc 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SwapSavedVariables.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SwapSavedVariables.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,13 +13,82 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@Namespace("torch::dynamo::autograd") @Opaque @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) + +@Namespace("torch::dynamo::autograd") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class SwapSavedVariables extends Pointer { - /** Empty constructor. Calls {@code super((Pointer)null)}. */ - public SwapSavedVariables() { super((Pointer)null); } + static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public SwapSavedVariables(Pointer p) { super(p); } + + public native void before(@ByRef Tensor t); + public native void after(@ByRef Tensor t); + + public native void before(@ByRef SymInt t); + public native void after(@ByRef SymInt t); + + public native void before(@ByRef IValue t); + + public native void after(@ByRef IValue t); + + public native void before(@ByRef Edge t); + public native void after(@ByRef Edge t); + public native void before(@ByRef TensorGeometry t); + public native void after(@ByRef TensorGeometry t); + public native void before(@ByRef VariableInfo t); + public native void after(@ByRef VariableInfo t); + +// #define NO_OP_VISIT(T) +// void before(const T&) {} +// void after(const T&) {} + public native void before(@Const @ByRef TypeMeta arg0); + public native void after(@Const @ByRef TypeMeta arg0); + public native void before(@Const @ByRef Device arg0); + public native void after(@Const @ByRef Device arg0); + public native void before(DeviceType arg0); + public native void before(@Cast("c10::DeviceType") byte arg0); + public native void after(DeviceType arg0); + public native void after(@Cast("c10::DeviceType") byte arg0); + public native void before(Layout arg0); + public native void after(Layout arg0); + public native void before(MemoryFormat arg0); + public native void after(MemoryFormat arg0); + public native void before(ScalarType arg0); + public native void after(ScalarType arg0); + public native void before(@Const @ByRef Scalar arg0); + public native void after(@Const @ByRef Scalar arg0); + public native void before(@Const @ByRef TensorOptions arg0); + public native void after(@Const @ByRef TensorOptions arg0); + public native void before(@StdString BytePointer arg0); + public native void before(@StdString String arg0); + public native void after(@StdString BytePointer arg0); + public native void after(@StdString String arg0); + public native void before(@Cast("const int64_t") long arg0); + public native void after(@Cast("const int64_t") long arg0); + public native void before(@Cast("const bool") boolean arg0); + public native void after(@Cast("const bool") boolean arg0); + public native void before(double arg0); + public native void after(double arg0); +// #undef NO_OP_VISIT + + public SwapSavedVariables( + @ByRef AutogradCompilerCall c, + @ByRef TraceState s, + @Cast("PyObject*") Pointer p, + @Const @ByRef NodeCall n) { super((Pointer)null); allocate(c, s, p, n); } + private native void allocate( + @ByRef AutogradCompilerCall c, + @ByRef TraceState s, + @Cast("PyObject*") Pointer p, + @Const @ByRef NodeCall n); + + public native @Cast("PyObject*") Pointer get_py_compiler(); + + public native @Const @ByRef NodeCall get_curr_node_call(); + + public native void debug_asserts(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SymBool.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SymBool.java index 3a2aeea918d..dc89c77bf9d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SymBool.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SymBool.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static 
org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -35,20 +36,20 @@ public class SymBool extends Pointer { /*implicit*/ public SymBool(@Cast("bool") boolean b) { super((Pointer)null); allocate(b); } private native void allocate(@Cast("bool") boolean b); - public SymBool(@ByVal SymNode ptr) { super((Pointer)null); allocate(ptr); } - private native void allocate(@ByVal SymNode ptr); + public SymBool(@IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode ptr) { super((Pointer)null); allocate(ptr); } + private native void allocate(@IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode ptr); public SymBool() { super((Pointer)null); allocate(); } private native void allocate(); - public native SymNodeImpl toSymNodeImplUnowned(); + public native SymNode toSymNodeImplUnowned(); // Only valid if is_heap_allocated() - public native @ByVal SymNode toSymNodeImpl(); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode toSymNodeImpl(); // Guaranteed to return a SymNode, wrapping using base if necessary - public native @ByVal SymNode wrap_node(@Const @ByRef SymNode base); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode wrap_node(@IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode base); public native @Cast("bool") boolean expect_bool(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SymBoolType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SymBoolType.java index ba68b7db72b..a071630b7cb 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SymBoolType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SymBoolType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -25,7 +26,7 @@ public class SymBoolType extends Type { public native @Cast("bool") boolean equals(@Const @ByRef Type rhs); public native @StdString BytePointer str(); - public native @StdString BytePointer annotation_str_impl(@ByVal(nullValue = "c10::TypePrinter(nullptr)") TypePrinter printer); + public native @StdString BytePointer annotation_str_impl(@Const @ByRef(nullValue = "c10::TypePrinter(nullptr)") TypePrinter printer); public native @StdString BytePointer annotation_str_impl(); @MemberGetter public static native TypeKind Kind(); // global singleton diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SymDimVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SymDimVector.java index d062eb655a0..0a4d3fe1360 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SymDimVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SymDimVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; 
import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SymDimVectorOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SymDimVectorOptional.java index ce39198edce..2761140aac5 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SymDimVectorOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SymDimVectorOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class SymDimVectorOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SymFloat.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SymFloat.java index 8ec06df330a..4dc3c4fc716 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SymFloat.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SymFloat.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -27,20 +28,20 @@ public class SymFloat extends Pointer { /*implicit*/ public SymFloat(double d) { super((Pointer)null); allocate(d); } private native void allocate(double d); - public SymFloat(@ByVal SymNode ptr) { super((Pointer)null); allocate(ptr); } - private native void allocate(@ByVal SymNode ptr); + public SymFloat(@IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode ptr) { super((Pointer)null); allocate(ptr); } + private native void allocate(@IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode ptr); public SymFloat() { super((Pointer)null); allocate(); } private native void allocate(); - public native SymNodeImpl toSymNodeImplUnowned(); + public native SymNode toSymNodeImplUnowned(); // Only valid if is_symbolic() - public native @ByVal SymNode toSymNodeImpl(); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode toSymNodeImpl(); // Guaranteed to return a SymNode, wrapping using base if necessary - public native @ByVal SymNode wrap_node(@Const @ByRef SymNode base); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode wrap_node(@IntrusivePtr("c10::SymNodeImpl") @Cast({"", 
"c10::intrusive_ptr&"}) SymNode base); public native double expect_float(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SymFloatType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SymFloatType.java index b2efb0e97e5..776edd98717 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SymFloatType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SymFloatType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -25,7 +26,7 @@ public class SymFloatType extends Type { public native @Cast("bool") boolean equals(@Const @ByRef Type rhs); public native @StdString BytePointer str(); - public native @StdString BytePointer annotation_str_impl(@ByVal(nullValue = "c10::TypePrinter(nullptr)") TypePrinter printer); + public native @StdString BytePointer annotation_str_impl(@Const @ByRef(nullValue = "c10::TypePrinter(nullptr)") TypePrinter printer); public native @StdString BytePointer annotation_str_impl(); @MemberGetter public static native TypeKind Kind(); // global singleton diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SymInt.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SymInt.java index 2954f81248b..34087f6e6f1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SymInt.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SymInt.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -52,8 +53,8 @@ public enum Unchecked { private native void allocate(@Cast("int64_t") long d); public SymInt() { super((Pointer)null); allocate(); } private native void allocate(); - public SymInt(@ByVal SymNode n) { super((Pointer)null); allocate(n); } - private native void allocate(@ByVal SymNode n); + public SymInt(@IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode n) { super((Pointer)null); allocate(n); } + private native void allocate(@IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode n); // unchecked c-tor accepting raw `data_` // One appropriate use for this is when you are constructing a symint @@ -71,17 +72,17 @@ public enum Unchecked { public native @ByRef @Name("operator =") SymInt put(@Const @ByRef SymInt s); - public native SymNodeImpl toSymNodeImplUnowned(); + public native SymNode toSymNodeImplUnowned(); public native void release_(); // Only valid if is_heap_allocated() - public native @ByVal SymNode toSymNode(); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode toSymNode(); // Guaranteed to return a SymNode, wrapping using base if necessary - public native 
@ByVal SymNode wrap_node(@Const @ByRef SymNode base); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode wrap_node(@IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode base); // Require the int to be non-symbolic, and if it is symbolic raise an // error. This is safe to use for C++ code that doesn't work for symbolic diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SymIntArrayRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SymIntArrayRef.java index 8ebb1c45be7..433437390e1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SymIntArrayRef.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SymIntArrayRef.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SymIntArrayRefOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SymIntArrayRefOptional.java index 9fd43403601..d5ea8749793 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SymIntArrayRefOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SymIntArrayRefOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class SymIntArrayRefOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SymIntOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SymIntOptional.java index ee80e41b036..0cea8ca2309 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SymIntOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SymIntOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class SymIntOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SymIntSmallVectorBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SymIntSmallVectorBase.java index cf30007150b..f5ad7818c86 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SymIntSmallVectorBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SymIntSmallVectorBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -37,5 +38,7 @@ public class SymIntSmallVectorBase extends SymIntSmallVectorCommon { public native void push_back(@Const @ByRef SymInt Elt); + // NOLINTNEXTLINE(cppcoreguidelines-rvalue-reference-param-not-moved) + public native void pop_back(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SymIntSmallVectorCommon.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SymIntSmallVectorCommon.java index 1e1916ef768..7af671e4c0a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SymIntSmallVectorCommon.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SymIntSmallVectorCommon.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SymIntSmallVectorImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SymIntSmallVectorImpl.java index 373f2a582b2..49d5f684076 100644 --- 
a/pytorch/src/gen/java/org/bytedeco/pytorch/SymIntSmallVectorImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SymIntSmallVectorImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -58,9 +59,9 @@ public class SymIntSmallVectorImpl extends SymIntSmallVectorBase { public native void assign(@Const @ByRef SymIntSmallVectorImpl RHS); - public native @ByVal @Cast("c10::SmallVectorImpl::iterator*") SymInt erase(@ByVal @Cast("c10::SmallVectorImpl::const_iterator*") SymInt CI); + public native @ByVal @Cast("c10::SmallVectorImpl::iterator*") SymInt erase(@ByVal @Cast("c10::SmallVectorImpl::iterator*") SymInt I); - public native @ByVal @Cast("c10::SmallVectorImpl::iterator*") SymInt erase(@ByVal @Cast("c10::SmallVectorImpl::const_iterator*") SymInt CS, @ByVal @Cast("c10::SmallVectorImpl::const_iterator*") SymInt CE); + public native @ByVal @Cast("c10::SmallVectorImpl::iterator*") SymInt erase(@ByVal @Cast("c10::SmallVectorImpl::iterator*") SymInt S, @ByVal @Cast("c10::SmallVectorImpl::iterator*") SymInt E); public native @ByVal @Cast("c10::SmallVectorImpl::iterator*") SymInt insert(@ByVal @Cast("c10::SmallVectorImpl::iterator*") SymInt I, @ByRef(true) SymInt Elt); public native @ByVal @Cast("c10::SmallVectorImpl::iterator*") SymInt insert(@ByVal @Cast("c10::SmallVectorImpl::iterator*") SymInt I, long NumToInsert, @ByVal SymInt Elt); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SymIntType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SymIntType.java index 11faea38fd0..8a9ab5c729d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SymIntType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SymIntType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -25,7 +26,7 @@ public class SymIntType extends Type { public native @Cast("bool") boolean equals(@Const @ByRef Type rhs); public native @StdString BytePointer str(); - public native @StdString BytePointer annotation_str_impl(@ByVal(nullValue = "c10::TypePrinter(nullptr)") TypePrinter printer); + public native @StdString BytePointer annotation_str_impl(@Const @ByRef(nullValue = "c10::TypePrinter(nullptr)") TypePrinter printer); public native @StdString BytePointer annotation_str_impl(); @MemberGetter public static native TypeKind Kind(); // global singleton diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SymIntVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SymIntVector.java index 7304cb2262f..a5285974a0a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SymIntVector.java 
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SymIntVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SymNode.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SymNode.java index 8f9b11012c7..507963bbcec 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SymNode.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SymNode.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,141 +13,107 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@Name("c10::intrusive_ptr") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +// When you add a method, you also need to edit +// torch/csrc/jit/python/init.cpp +// torch/csrc/utils/python_symnode.h +// c10/core/ConstantSymNodeImpl.h +@Name("c10::SymNodeImpl") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class SymNode extends Pointer { static { Loader.load(); } + /** Default native constructor. */ + public SymNode() { super((Pointer)null); allocate(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ public SymNode(Pointer p) { super(p); } - /** Native array allocator. Access with {@link Pointer#position(long)}. */ - public SymNode(long size) { super((Pointer)null); allocateArray(size); } - private native void allocateArray(long size); - @Override public SymNode position(long position) { - return (SymNode)super.position(position); - } - @Override public SymNode getPointer(long i) { - return new SymNode((Pointer)this).offsetAddress(i); - } - - - public SymNode() { super((Pointer)null); allocate(); } - @NoException(true) private native void allocate(); - - public SymNode(@ByVal @Cast("std::nullptr_t*") PointerPointer arg0) { super((Pointer)null); allocate(arg0); } - @NoException(true) private native void allocate(@ByVal @Cast("std::nullptr_t*") PointerPointer arg0); - - // This constructor will not increase the ref counter for you. 
- // We use the tagged dispatch mechanism to explicitly mark this constructor - // to not increase the refcount - public SymNode(SymNodeImpl target, @ByVal DontIncreaseRefcount arg1) { super((Pointer)null); allocate(target, arg1); } - @NoException(true) private native void allocate(SymNodeImpl target, @ByVal DontIncreaseRefcount arg1); - - - - public SymNode(@ByRef(true) SymNode rhs) { super((Pointer)null); allocate(rhs); } - @NoException(true) private native void allocate(@ByRef(true) SymNode rhs); - - public native @ByRef @Name("operator =") @NoException(true) SymNode put(@ByRef(true) SymNode rhs); - - // Assignment is implemented using copy and swap. That's safe for self - // assignment. - // NOLINTNEXTLINE(bugprone-unhandled-self-assignment) - - public native @NoException(true) SymNodeImpl get(); - - public native @ByRef @Name("operator *") @NoException(true) SymNodeImpl multiply(); - - public native @Name("operator ->") @NoException(true) SymNodeImpl access(); - - public native @Cast("bool") @Name("operator bool") @NoException(true) boolean asBoolean(); - - public native @NoException(true) void reset(); - - public native @NoException(true) void swap(@ByRef SymNode rhs); - - // We do a lot of null-pointer checks in our code, good to have this be cheap. - public native @Cast("bool") @NoException(true) boolean defined(); - - public native @Cast("uint32_t") @NoException(true) int use_count(); - - public native @Cast("uint32_t") @NoException(true) int weak_use_count(); - - public native @Cast("bool") @NoException(true) boolean unique(); - - /** - * Returns an owning (!) pointer to the underlying object and makes the - * intrusive_ptr instance invalid. That means the refcount is not decreased. - * You *must* put the returned pointer back into a intrusive_ptr using - * intrusive_ptr::reclaim(ptr) to properly destruct it. - * This is helpful for C APIs. - */ - public native @NoException(true) SymNodeImpl release(); - - /** - * Takes an owning pointer to TTarget* and creates an intrusive_ptr that takes - * over ownership. That means the refcount is not increased. - * This is the counter-part to intrusive_ptr::release() and the pointer - * passed in *must* have been created using intrusive_ptr::release(). - */ - public static native @ByVal SymNode reclaim(SymNodeImpl owning_ptr); - - /** - * Takes an owning pointer to TTarget* and creates an intrusive_ptr - * representing a new reference, i.e. the raw pointer retains - * ownership. - */ - public static native @ByVal SymNode reclaim_copy(SymNodeImpl owning_ptr); - - /** - * Allocate a heap object with args and wrap it inside a intrusive_ptr and - * incref. This is a helper function to let make_intrusive() access private - * intrusive_ptr constructors. - */ - - /** - * Turn a new instance of TTarget (e.g., literally allocated - * using new TTarget(...) into an intrusive_ptr. If possible, - * use intrusive_ptr::make instead which statically guarantees - * that the allocation was done properly. - * - * At the moment, the only reason this method exists is because - * pybind11 holder types expect to be able to allocate in - * this way (because pybind11 handles the new allocation itself). - */ - public static native @ByVal SymNode unsafe_steal_from_new(SymNodeImpl raw_ptr); - - /** - * Turn an instance of TTarget that should not be reference counted - * (e.g., allocated into an arena with placement new) into an - * intrusive_ptr. 
This is gratuitously unsafe and should only be - * used if you can guarantee that the pointer will not escape and be - * refcounted as normal. - * - * {@code expected_decrefs} is a debugging parameter: it indicates the - * number of strong owners the intrusive_ptr_target in question is - * expected to get. In most use cases, this will likely be 1. - * - * The reason this method exists is for manually sharing - * StorageImpls across Tensors in the static runtime. It needs - * access to private intrusive_ptr members so that the refcounts can - * be initialized to custom values. - */ - public static native @ByVal SymNode unsafe_adapt_non_heap_allocated( - SymNodeImpl raw_ptr, - @Cast("uint32_t") int expected_decrefs); - - /** - * Turn a **non-owning raw pointer** to an intrusive_ptr. It is - * the moral equivalent of enable_shared_from_this on a shared pointer. - * - * This method is only valid for objects that are already live. If - * you are looking for the moral equivalent of unique_ptr(T*) - * constructor, see steal_from_new. - * - * TODO: https://github.com/pytorch/pytorch/issues/56482 - */ - public static native @ByVal SymNode unsafe_reclaim_from_nonowning(SymNodeImpl raw_ptr); + @IntrusivePtr @Name("c10::make_intrusive") private native void allocate(); + + + // these could be pure virtual when we implement LTC versions + public native @Cast("bool") boolean is_int(); + public native @Cast("bool") boolean is_bool(); + public native @Cast("bool") boolean is_float(); + public native @Cast("bool") boolean is_nested_int(); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode add(@IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode other); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode sub(@IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode other); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode mul(@IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode other); + // NB: legacy, prefer float_truediv or int_truediv + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode truediv(@IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode other); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode float_truediv(@IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode other); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode int_truediv(@IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode other); + // NB: legacy, prefer float_pow or pow_by_natural + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode pow(@IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode other); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode float_pow(@IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode other); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode pow_by_natural(@IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode other); + // NB: legacy, prefer int_floordiv + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode floordiv(@IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode other); + 
public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode int_floordiv(@IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode other); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode mod(@IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode other); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode eq(@IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode other); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode ne(@IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode other); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode gt(@IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode other); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode lt(@IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode other); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode le(@IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode other); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode ge(@IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode other); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode ceil(); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode floor(); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode neg(); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode sym_min(@IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode other); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode sym_max(@IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode other); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode sym_or(@IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode other); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode sym_and(@IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode other); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode sym_not(); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode sym_ite(@IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode then_val, @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode else_val); + // NB: self is ignored here, only the arguments are used + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode is_contiguous( + @ByVal SymNodeArrayRef sizes, + @ByVal SymNodeArrayRef strides); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode is_channels_last_contiguous_2d( + @ByVal SymNodeArrayRef sizes, + @ByVal SymNodeArrayRef strides); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode is_channels_last_contiguous_3d( + @ByVal SymNodeArrayRef sizes, + @ByVal SymNodeArrayRef strides); + public native @IntrusivePtr("c10::SymNodeImpl") 
@Cast({"", "c10::intrusive_ptr&"}) SymNode is_channels_last_strides_2d( + @ByVal SymNodeArrayRef sizes, + @ByVal SymNodeArrayRef strides); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode is_channels_last_strides_3d( + @ByVal SymNodeArrayRef sizes, + @ByVal SymNodeArrayRef strides); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode is_non_overlapping_and_dense( + @ByVal SymNodeArrayRef sizes, + @ByVal SymNodeArrayRef strides); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode clone(); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode sym_float(); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode wrap_int(@Cast("int64_t") long num); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode wrap_float(double num); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode wrap_bool(@Cast("bool") boolean num); + public native @Cast("int64_t") long guard_int(@Cast("const char*") BytePointer file, @Cast("int64_t") long line); + public native @Cast("int64_t") long guard_int(String file, @Cast("int64_t") long line); + public native @Cast("bool") boolean guard_bool(@Cast("const char*") BytePointer file, @Cast("int64_t") long line); + public native @Cast("bool") boolean guard_bool(String file, @Cast("int64_t") long line); + public native double guard_float(@Cast("const char*") BytePointer file, @Cast("int64_t") long line); + public native double guard_float(String file, @Cast("int64_t") long line); + public native @Cast("bool") boolean guard_size_oblivious(@Cast("const char*") BytePointer file, @Cast("int64_t") long line); + public native @Cast("bool") boolean guard_size_oblivious(String file, @Cast("int64_t") long line); + public native @Cast("bool") boolean expect_true(@Cast("const char*") BytePointer file, @Cast("int64_t") long line); + public native @Cast("bool") boolean expect_true(String file, @Cast("int64_t") long line); + public native @Cast("bool") boolean expect_size(@Cast("const char*") BytePointer file, @Cast("int64_t") long line); + public native @Cast("bool") boolean expect_size(String file, @Cast("int64_t") long line); + public native @Cast("int64_t") long int_(); + public native @Cast("bool") boolean bool_(); + public native @Cast("bool") boolean has_hint(); + public native @StdString BytePointer str(); + public native @ByVal LongOptional nested_int(); + public native @ByVal LongOptional nested_int_coeff(); + public native @ByVal LongOptional constant_int(); + public native @ByVal BoolOptional constant_bool(); + public native @ByVal LongOptional maybe_as_int(); + public native @Cast("bool") boolean is_constant(); + public native @Cast("bool") boolean is_symbolic(); + public native @Cast("std::ostream*") @ByRef @Name("operator <<") Pointer shiftLeft(@Cast("std::ostream*") @ByRef Pointer os); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SymNodeArrayRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SymNodeArrayRef.java index ca637f4c967..1468680ed02 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SymNodeArrayRef.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SymNodeArrayRef.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import 
org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@Name("c10::ArrayRef<c10::SymNode>") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@Name("c10::ArrayRef<c10::intrusive_ptr<c10::SymNodeImpl> >") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class SymNodeArrayRef extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ @@ -44,12 +45,12 @@ public class SymNodeArrayRef extends Pointer { /** Construct an ArrayRef from a pointer and length. */ - public SymNodeArrayRef(@Const SymNode data, @Cast("size_t") long length) { super((Pointer)null); allocate(data, length); } - private native void allocate(@Const SymNode data, @Cast("size_t") long length); + public SymNodeArrayRef(@Const @IntrusivePtr("c10::SymNodeImpl") SymNode data, @Cast("size_t") long length) { super((Pointer)null); allocate(data, length); } + private native void allocate(@Const @IntrusivePtr("c10::SymNodeImpl") SymNode data, @Cast("size_t") long length); /** Construct an ArrayRef from a range. */ - public SymNodeArrayRef(@Const SymNode begin, @Const SymNode end) { super((Pointer)null); allocate(begin, end); } - private native void allocate(@Const SymNode begin, @Const SymNode end); + public SymNodeArrayRef(@Const @IntrusivePtr("c10::SymNodeImpl") SymNode begin, @Const @IntrusivePtr("c10::SymNodeImpl") SymNode end) { super((Pointer)null); allocate(begin, end); } + private native void allocate(@Const @IntrusivePtr("c10::SymNodeImpl") SymNode begin, @Const @IntrusivePtr("c10::SymNodeImpl") SymNode end); /** Construct an ArrayRef from a SmallVector. This is templated in order to * avoid instantiating SmallVectorTemplateCommon<T> whenever we @@ -59,6 +60,8 @@ public class SymNodeArrayRef extends Pointer { // The enable_if stuff here makes sure that this isn't used for // std::vector<bool>, because ArrayRef can't work on a std::vector<bool> // bitfield. + public SymNodeArrayRef(@ByRef SymNodeVector vec) { super((Pointer)null); allocate(vec); } + private native void allocate(@ByRef SymNodeVector vec); /** Construct an ArrayRef from a std::array */ @@ -82,16 +85,16 @@ public class SymNodeArrayRef extends Pointer { /** empty - Check if the array is empty. */ public native @Cast("const bool") boolean empty(); - public native @Const SymNode data(); + public native @Const @IntrusivePtr("c10::SymNodeImpl") SymNode data(); /** size - Get the array size. */ public native @Cast("const size_t") long size(); /** front - Get the first element. */ - public native @Const @ByRef SymNode front(); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr<c10::SymNodeImpl>&"}) SymNode front(); /** back - Get the last element. */ - public native @Const @ByRef SymNode back(); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr<c10::SymNodeImpl>&"}) SymNode back(); /** equals - Check for element-wise equality. 
*/ public native @Cast("const bool") boolean equals(@ByVal SymNodeArrayRef RHS); @@ -105,12 +108,12 @@ public class SymNodeArrayRef extends Pointer { /** \} * \name Operator Overloads * \{ */ - public native @Const @ByRef @Name("operator []") SymNode get(@Cast("size_t") long Index); + public native @IntrusivePtr("c10::SymNodeImpl") @Name("operator []") @Cast({"", "c10::intrusive_ptr&"}) SymNode get(@Cast("size_t") long Index); /** Vector compatibility */ /// - public native @Const @ByRef SymNode at(@Cast("size_t") long Index); + public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode at(@Cast("size_t") long Index); /** Disallow accidental assignment from a temporary. * @@ -127,7 +130,7 @@ public class SymNodeArrayRef extends Pointer { /** \} * \name Expensive Operations * \{ */ - public native @StdVector SymNode vec(); + public native @ByVal SymNodeVector vec(); /** \} */ } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SymNodeImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SymNodeImpl.java deleted file mode 100644 index c10e46fd3f6..00000000000 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SymNodeImpl.java +++ /dev/null @@ -1,119 +0,0 @@ -// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE - -package org.bytedeco.pytorch; - -import org.bytedeco.pytorch.Allocator; -import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; -import org.bytedeco.pytorch.Module; -import org.bytedeco.javacpp.annotation.Cast; -import java.nio.*; -import org.bytedeco.javacpp.*; -import org.bytedeco.javacpp.annotation.*; - -import static org.bytedeco.javacpp.presets.javacpp.*; -import static org.bytedeco.openblas.global.openblas_nolapack.*; -import static org.bytedeco.openblas.global.openblas.*; - -import static org.bytedeco.pytorch.global.torch.*; - - -// When you add a method, you also need to edit -// torch/csrc/jit/python/init.cpp -// torch/csrc/utils/python_symnode.h -// c10/core/ConstantSymNodeImpl.h -@Namespace("c10") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class SymNodeImpl extends Pointer { - static { Loader.load(); } - /** Default native constructor. */ - public SymNodeImpl() { super((Pointer)null); allocate(); } - /** Native array allocator. Access with {@link Pointer#position(long)}. */ - public SymNodeImpl(long size) { super((Pointer)null); allocateArray(size); } - /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ - public SymNodeImpl(Pointer p) { super(p); } - private native void allocate(); - private native void allocateArray(long size); - @Override public SymNodeImpl position(long position) { - return (SymNodeImpl)super.position(position); - } - @Override public SymNodeImpl getPointer(long i) { - return new SymNodeImpl((Pointer)this).offsetAddress(i); - } - - - // these could be pure virtual when we implement LTC versions - public native @Cast("bool") boolean is_int(); - public native @Cast("bool") boolean is_bool(); - public native @Cast("bool") boolean is_float(); - public native @Cast("bool") boolean is_nested_int(); - public native @ByVal SymNode add(@Const @ByRef SymNode other); - public native @ByVal SymNode sub(@Const @ByRef SymNode other); - public native @ByVal SymNode mul(@Const @ByRef SymNode other); - public native @ByVal SymNode truediv(@Const @ByRef SymNode other); - public native @ByVal SymNode pow(@Const @ByRef SymNode other); - public native @ByVal SymNode floordiv(@Const @ByRef SymNode other); - public native @ByVal SymNode mod(@Const @ByRef SymNode other); - public native @ByVal SymNode eq(@Const @ByRef SymNode other); - public native @ByVal SymNode ne(@Const @ByRef SymNode other); - public native @ByVal SymNode gt(@Const @ByRef SymNode other); - public native @ByVal SymNode lt(@Const @ByRef SymNode other); - public native @ByVal SymNode le(@Const @ByRef SymNode other); - public native @ByVal SymNode ge(@Const @ByRef SymNode other); - public native @ByVal SymNode ceil(); - public native @ByVal SymNode floor(); - public native @ByVal SymNode neg(); - public native @ByVal SymNode sym_min(@Const @ByRef SymNode other); - public native @ByVal SymNode sym_max(@Const @ByRef SymNode other); - public native @ByVal SymNode sym_or(@Const @ByRef SymNode other); - public native @ByVal SymNode sym_and(@Const @ByRef SymNode other); - public native @ByVal SymNode sym_not(); - public native @ByVal SymNode sym_ite(@Const @ByRef SymNode then_val, @Const @ByRef SymNode else_val); - // NB: self is ignored here, only the arguments are used - public native @ByVal SymNode is_contiguous( - @ByVal SymNodeArrayRef sizes, - @ByVal SymNodeArrayRef strides); - public native @ByVal SymNode is_channels_last_contiguous_2d( - @ByVal SymNodeArrayRef sizes, - @ByVal SymNodeArrayRef strides); - public native @ByVal SymNode is_channels_last_contiguous_3d( - @ByVal SymNodeArrayRef sizes, - @ByVal SymNodeArrayRef strides); - public native @ByVal SymNode is_channels_last_strides_2d( - @ByVal SymNodeArrayRef sizes, - @ByVal SymNodeArrayRef strides); - public native @ByVal SymNode is_channels_last_strides_3d( - @ByVal SymNodeArrayRef sizes, - @ByVal SymNodeArrayRef strides); - public native @ByVal SymNode is_non_overlapping_and_dense( - @ByVal SymNodeArrayRef sizes, - @ByVal SymNodeArrayRef strides); - public native @ByVal SymNode clone(); - public native @ByVal SymNode sym_float(); - public native @ByVal SymNode wrap_int(@Cast("int64_t") long num); - public native @ByVal SymNode wrap_float(double num); - public native @ByVal SymNode wrap_bool(@Cast("bool") boolean num); - public native @Cast("int64_t") long guard_int(@Cast("const char*") BytePointer file, @Cast("int64_t") long line); - public native @Cast("int64_t") long guard_int(String file, @Cast("int64_t") long line); - public native @Cast("bool") boolean guard_bool(@Cast("const char*") BytePointer file, @Cast("int64_t") long line); - public native @Cast("bool") boolean guard_bool(String file, @Cast("int64_t") long line); - public native double 
guard_float(@Cast("const char*") BytePointer file, @Cast("int64_t") long line); - public native double guard_float(String file, @Cast("int64_t") long line); - public native @Cast("bool") boolean guard_size_oblivious(@Cast("const char*") BytePointer file, @Cast("int64_t") long line); - public native @Cast("bool") boolean guard_size_oblivious(String file, @Cast("int64_t") long line); - public native @Cast("bool") boolean expect_true(@Cast("const char*") BytePointer file, @Cast("int64_t") long line); - public native @Cast("bool") boolean expect_true(String file, @Cast("int64_t") long line); - public native @Cast("bool") boolean expect_size(@Cast("const char*") BytePointer file, @Cast("int64_t") long line); - public native @Cast("bool") boolean expect_size(String file, @Cast("int64_t") long line); - public native @Cast("int64_t") long int_(); - public native @Cast("bool") boolean bool_(); - public native @Cast("bool") boolean has_hint(); - public native @StdString BytePointer str(); - public native @ByVal LongOptional nested_int(); - public native @ByVal LongOptional nested_int_coeff(); - public native @ByVal LongOptional constant_int(); - public native @ByVal BoolOptional constant_bool(); - public native @ByVal LongOptional maybe_as_int(); - public native @Cast("bool") boolean is_constant(); - public native @Cast("bool") boolean is_symbolic(); - public native @Cast("std::ostream*") @ByRef @Name("operator <<") Pointer shiftLeft(@Cast("std::ostream*") @ByRef Pointer os); -} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SymNodeVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SymNodeVector.java new file mode 100644 index 00000000000..ccd9c0b2fe5 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SymNodeVector.java @@ -0,0 +1,91 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + +@Name("std::vector<c10::intrusive_ptr<c10::SymNodeImpl> >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class SymNodeVector extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public SymNodeVector(Pointer p) { super(p); } + public SymNodeVector(@Cast({"", "c10::intrusive_ptr<c10::SymNodeImpl>&"}) SymNode value) { this(1); put(0, value); } + public SymNodeVector(@Cast({"", "c10::intrusive_ptr<c10::SymNodeImpl>&"}) SymNode ... 
array) { this(array.length); put(array); } + public SymNodeVector() { allocate(); } + public SymNodeVector(long n) { allocate(n); } + private native void allocate(); + private native void allocate(@Cast("size_t") long n); + public native @Name("operator =") @ByRef SymNodeVector put(@ByRef SymNodeVector x); + + public boolean empty() { return size() == 0; } + public native long size(); + public void clear() { resize(0); } + public native void resize(@Cast("size_t") long n); + + public SymNode front() { return get(0); } + public SymNode back() { return get(size() - 1); } + @Index(function = "at") public native @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode get(@Cast("size_t") long i); + public native SymNodeVector put(@Cast("size_t") long i, SymNode value); + + public native @ByVal Iterator insert(@ByVal Iterator pos, @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode value); + public native @ByVal Iterator erase(@ByVal Iterator pos); + public native @ByVal Iterator begin(); + public native @ByVal Iterator end(); + @NoOffset @Name("iterator") public static class Iterator extends Pointer { + public Iterator(Pointer p) { super(p); } + public Iterator() { } + + public native @Name("operator ++") @ByRef Iterator increment(); + public native @Name("operator ==") boolean equals(@ByRef Iterator it); + public native @Name("operator *") @IntrusivePtr("c10::SymNodeImpl") @Cast({"", "c10::intrusive_ptr&"}) SymNode get(); + } + + public SymNode[] get() { + SymNode[] array = new SymNode[size() < Integer.MAX_VALUE ? (int)size() : Integer.MAX_VALUE]; + for (int i = 0; i < array.length; i++) { + array[i] = get(i); + } + return array; + } + @Override public String toString() { + return java.util.Arrays.toString(get()); + } + + public SymNode pop_back() { + long size = size(); + SymNode value = get(size - 1); + resize(size - 1); + return value; + } + public SymNodeVector push_back(SymNode value) { + long size = size(); + resize(size + 1); + return put(size, value); + } + public SymNodeVector put(SymNode value) { + if (size() != 1) { resize(1); } + return put(0, value); + } + public SymNodeVector put(SymNode ... 
array) { + if (size() != array.length) { resize(array.length); } + for (int i = 0; i < array.length; i++) { + put(i, array[i]); + } + return this; + } +} + diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Symbol.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Symbol.java index e7c10f0506f..52fa7cc9c33 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Symbol.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Symbol.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SymbolArrayRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SymbolArrayRef.java index de45cb43205..fb48ba53461 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SymbolArrayRef.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SymbolArrayRef.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SymbolSet.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SymbolSet.java index 898c8c31c5b..906d70bb7d1 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SymbolSet.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SymbolSet.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SymbolVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SymbolVector.java index 10e974170ff..5e5e84d41c8 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SymbolVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SymbolVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static 
org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SymbolicShape.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SymbolicShape.java index 50e57ed246c..1b6b2a2dd4b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SymbolicShape.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SymbolicShape.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/SymbolicShapeMeta.java b/pytorch/src/gen/java/org/bytedeco/pytorch/SymbolicShapeMeta.java index 20c7c391925..8bb49867491 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/SymbolicShapeMeta.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/SymbolicShapeMeta.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/T_DataPtrSizeT_T.java b/pytorch/src/gen/java/org/bytedeco/pytorch/T_DataPtrSizeT_T.java index 095c067bae1..1a5b404cf77 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/T_DataPtrSizeT_T.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/T_DataPtrSizeT_T.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/T_IntInt_T.java b/pytorch/src/gen/java/org/bytedeco/pytorch/T_IntInt_T.java index 7ba9a17a9c5..4d2a31fc904 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/T_IntInt_T.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/T_IntInt_T.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static 
org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/T_LongLong_T.java b/pytorch/src/gen/java/org/bytedeco/pytorch/T_LongLong_T.java index 0d2c5a934af..b06575f0956 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/T_LongLong_T.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/T_LongLong_T.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/T_PackedSequenceT_TensorTensor_T_T.java b/pytorch/src/gen/java/org/bytedeco/pytorch/T_PackedSequenceT_TensorTensor_T_T.java index 3bbfe817361..becf0cf2d72 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/T_PackedSequenceT_TensorTensor_T_T.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/T_PackedSequenceT_TensorTensor_T_T.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/T_PackedSequenceTensor_T.java b/pytorch/src/gen/java/org/bytedeco/pytorch/T_PackedSequenceTensor_T.java index 72aa159b1ce..98409a7b1d9 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/T_PackedSequenceTensor_T.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/T_PackedSequenceTensor_T.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/T_PyObject_TorchDispatchModeTorchDispatchModeKey_T.java b/pytorch/src/gen/java/org/bytedeco/pytorch/T_PyObject_TorchDispatchModeTorchDispatchModeKey_T.java new file mode 100644 index 00000000000..1208ed7275c --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/T_PyObject_TorchDispatchModeTorchDispatchModeKey_T.java @@ -0,0 +1,37 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import 
org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + +@NoOffset @Name("std::tuple,c10::impl::TorchDispatchModeKey>") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class T_PyObject_TorchDispatchModeTorchDispatchModeKey_T extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public T_PyObject_TorchDispatchModeTorchDispatchModeKey_T(Pointer p) { super(p); } + public T_PyObject_TorchDispatchModeTorchDispatchModeKey_T(@SharedPtr("c10::impl::PyObject_TorchDispatchMode") PyObject_TorchDispatchMode value0, @ByRef TorchDispatchModeKey value1) { allocate(value0, value1); } + private native void allocate(@SharedPtr("c10::impl::PyObject_TorchDispatchMode") PyObject_TorchDispatchMode value0, @ByRef TorchDispatchModeKey value1); + public T_PyObject_TorchDispatchModeTorchDispatchModeKey_T() { allocate(); } + private native void allocate(); + public native @Name("operator =") @ByRef T_PyObject_TorchDispatchModeTorchDispatchModeKey_T put(@ByRef T_PyObject_TorchDispatchModeTorchDispatchModeKey_T x); + + public @SharedPtr("c10::impl::PyObject_TorchDispatchMode") PyObject_TorchDispatchMode get0() { return get0(this); } + @Namespace @Name("std::get<0>") public static native @SharedPtr("c10::impl::PyObject_TorchDispatchMode") PyObject_TorchDispatchMode get0(@ByRef T_PyObject_TorchDispatchModeTorchDispatchModeKey_T container); + public @ByRef TorchDispatchModeKey get1() { return get1(this); } + @Namespace @Name("std::get<1>") public static native @ByRef TorchDispatchModeKey get1(@ByRef T_PyObject_TorchDispatchModeTorchDispatchModeKey_T container); +} + diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/T_SafePyObjectTorchDispatchModeKey_T.java b/pytorch/src/gen/java/org/bytedeco/pytorch/T_SafePyObjectTorchDispatchModeKey_T.java index 0ab1225d177..5931209cdca 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/T_SafePyObjectTorchDispatchModeKey_T.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/T_SafePyObjectTorchDispatchModeKey_T.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/T_SizeTVectorVectorSizeTVector_T.java b/pytorch/src/gen/java/org/bytedeco/pytorch/T_SizeTVectorVectorSizeTVector_T.java new file mode 100644 index 00000000000..b31adf21ad6 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/T_SizeTVectorVectorSizeTVector_T.java @@ -0,0 +1,37 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import 
org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + +@NoOffset @Name("std::tuple >,std::vector >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class T_SizeTVectorVectorSizeTVector_T extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public T_SizeTVectorVectorSizeTVector_T(Pointer p) { super(p); } + public T_SizeTVectorVectorSizeTVector_T(@ByRef SizeTVectorVector value0, @Cast("std::vector*") @ByRef SizeTVector value1) { allocate(value0, value1); } + private native void allocate(@ByRef SizeTVectorVector value0, @Cast("std::vector*") @ByRef SizeTVector value1); + public T_SizeTVectorVectorSizeTVector_T() { allocate(); } + private native void allocate(); + public native @Name("operator =") @ByRef T_SizeTVectorVectorSizeTVector_T put(@ByRef T_SizeTVectorVectorSizeTVector_T x); + + public @ByRef SizeTVectorVector get0() { return get0(this); } + @Namespace @Name("std::get<0>") public static native @ByRef SizeTVectorVector get0(@ByRef T_SizeTVectorVectorSizeTVector_T container); + public @Cast("std::vector*") @ByRef SizeTVector get1() { return get1(this); } + @Namespace @Name("std::get<1>") public static native @Cast("std::vector*") @ByRef SizeTVector get1(@ByRef T_SizeTVectorVectorSizeTVector_T container); +} + diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/T_StringSizeTSizeT_T.java b/pytorch/src/gen/java/org/bytedeco/pytorch/T_StringSizeTSizeT_T.java index 9ef7f8d44e3..541f8a0b634 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/T_StringSizeTSizeT_T.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/T_StringSizeTSizeT_T.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/T_StringSizeTSizeT_TOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/T_StringSizeTSizeT_TOptional.java index f0dad752cd6..32a9a186489 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/T_StringSizeTSizeT_TOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/T_StringSizeTSizeT_TOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static 
org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional<std::tuple<std::string,size_t,size_t> >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional<std::tuple<std::string,size_t,size_t> >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class T_StringSizeTSizeT_TOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorMaybeOwnedTensorMaybeOwnedTensorMaybeOwned_T.java b/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorMaybeOwnedTensorMaybeOwnedTensorMaybeOwned_T.java index d6ebd89d280..1f08b4475aa 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorMaybeOwnedTensorMaybeOwnedTensorMaybeOwned_T.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorMaybeOwnedTensorMaybeOwnedTensorMaybeOwned_T.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorMaybeOwnedTensorMaybeOwned_T.java b/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorMaybeOwnedTensorMaybeOwned_T.java index ab9e6cced67..40512564e0e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorMaybeOwnedTensorMaybeOwned_T.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorMaybeOwnedTensorMaybeOwned_T.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorT_TensorTensor_T_T.java b/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorT_TensorTensor_T_T.java index 027b18508e6..c72112ac8d4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorT_TensorTensor_T_T.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorT_TensorTensor_T_T.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensorDoubleLong_T.java b/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensorDoubleLong_T.java index ec634e8c076..3d6a142ec7c 100644 --- 
a/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensorDoubleLong_T.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensorDoubleLong_T.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensorTensorTensorTensorTensorTensor_T.java b/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensorTensorTensorTensorTensorTensor_T.java index 0f8c854303b..f0757802e82 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensorTensorTensorTensorTensorTensor_T.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensorTensorTensorTensorTensorTensor_T.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensorTensorTensorTensor_T.java b/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensorTensorTensorTensor_T.java index 352f5bd467f..920a8192161 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensorTensorTensorTensor_T.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensorTensorTensorTensor_T.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensorTensorTensorVector_T.java b/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensorTensorTensorVector_T.java index 123c62399be..98959de3342 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensorTensorTensorVector_T.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensorTensorTensorVector_T.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; 
+import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensorTensorTensor_T.java b/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensorTensorTensor_T.java index 04b9e533646..a04abc25dec 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensorTensorTensor_T.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensorTensorTensor_T.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensorTensor_T.java b/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensorTensor_T.java index b5cdae48936..efe5344f4a0 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensorTensor_T.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensorTensor_T.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensorVectorTensorVector_T.java b/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensorVectorTensorVector_T.java index 843f570a9b2..dfde147fb8c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensorVectorTensorVector_T.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensorVectorTensorVector_T.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensorVector_T.java b/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensorVector_T.java index ee1cd5741d9..c5df5747e8c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensorVector_T.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensorVector_T.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static 
org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensor_T.java b/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensor_T.java index 6d0206a271c..69fc8a171bf 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensor_T.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensor_T.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensor_TOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensor_TOptional.java index e978029187b..b25e31adf7e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensor_TOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/T_TensorTensor_TOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/T_TypePtrLong_T.java b/pytorch/src/gen/java/org/bytedeco/pytorch/T_TypePtrLong_T.java index 5bf03e49e80..de405c6af09 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/T_TypePtrLong_T.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/T_TypePtrLong_T.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/T_TypePtrLong_TOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/T_TypePtrLong_TOptional.java index 4825f37ca95..938fdf173b8 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/T_TypePtrLong_TOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/T_TypePtrLong_TOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import 
org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class T_TypePtrLong_TOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TagArrayRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TagArrayRef.java index 1d6aece3ed4..531e84fefdf 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TagArrayRef.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TagArrayRef.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TagVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TagVector.java index 9781f5b9dd7..70d25fbe129 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TagVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TagVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TanhImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TanhImpl.java index 1cc7b4be3b9..62c6b4b028a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TanhImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TanhImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Tanh ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies Tanh over a given input. 
- * See https://pytorch.org/docs/master/nn.html#torch.nn.Tanh to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.Tanh to learn * about the exact behavior of this module. */ @Namespace("torch::nn") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class TanhImpl extends TanhImplCloneable { diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TanhImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TanhImplCloneable.java index d47b9e29c2c..804c8aaf7c4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TanhImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TanhImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class TanhImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TanhshrinkImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TanhshrinkImpl.java index 6fe1ce3447a..7645b9eea3a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TanhshrinkImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TanhshrinkImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Tanhshrink ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies Tanhshrink over a given input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.Tanhshrink to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.Tanhshrink to learn * about the exact behavior of this module. 
*/ @Namespace("torch::nn") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class TanhshrinkImpl extends TanhshrinkImplCloneable { diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TanhshrinkImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TanhshrinkImplCloneable.java index 59023b4125e..2bfb1edabfa 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TanhshrinkImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TanhshrinkImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class TanhshrinkImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Tensor.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Tensor.java index 47faee6e12e..9f75d08f765 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Tensor.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Tensor.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -55,9 +56,9 @@ public class Tensor extends TensorBase { // This constructor should not be used by end users and is an implementation // detail invoked by autogenerated code. public Tensor( - @ByVal TensorImplPtr tensor_impl) { super((Pointer)null); allocate(tensor_impl); } + @IntrusivePtr("c10::TensorImpl,c10::UndefinedTensorImpl") @Cast({"", "c10::intrusive_ptr&"}) TensorImpl tensor_impl) { super((Pointer)null); allocate(tensor_impl); } private native void allocate( - @ByVal TensorImplPtr tensor_impl); + @IntrusivePtr("c10::TensorImpl,c10::UndefinedTensorImpl") @Cast({"", "c10::intrusive_ptr&"}) TensorImpl tensor_impl); public Tensor(@Const @ByRef Tensor tensor) { super((Pointer)null); allocate(tensor); } private native void allocate(@Const @ByRef Tensor tensor); @@ -69,7 +70,7 @@ private native void allocate( // Creates a new wrapper from TensorImpl. Intentionally a free method because // it should be used with care. 
Checks necessary invariants public static native @ByVal Tensor wrap_tensor_impl( - @ByVal TensorImplPtr tensor_impl); + @IntrusivePtr("c10::TensorImpl,c10::UndefinedTensorImpl") @Cast({"", "c10::intrusive_ptr&"}) TensorImpl tensor_impl); public native @ByVal Tensor contiguous(MemoryFormat memory_format/*=c10::MemoryFormat::Contiguous*/); public native @ByVal Tensor contiguous(); @@ -258,7 +259,7 @@ private native void allocate( * // f requires grad, has no operation creating it * }
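Just below, the \fn backward documentation describes computing the gradient of the current tensor with respect to graph leaves, and the no-argument backward() and sum() overloads are declared in this same file. A minimal autograd sketch, assuming requires_grad_(boolean), grad(), dim() and a randn(long...) factory are present in these bindings as they are in libtorch:

```java
// Illustrative sketch only: requires_grad_(), grad(), dim() and randn(long...) are
// assumed; sum() and backward() are declared in the generated Tensor bindings.
import org.bytedeco.pytorch.Tensor;
import static org.bytedeco.pytorch.global.torch.*;

public class BackwardSketch {
    public static void main(String[] args) {
        Tensor x = randn(3, 3);           // assumed factory overload for a 3x3 tensor
        x.requires_grad_(true);           // mark x as a leaf tracked by autograd
        Tensor loss = x.sum();            // scalar reduction of x
        loss.backward();                  // accumulates d(loss)/dx into x.grad()
        Tensor grad = x.grad();           // gradient w.r.t. the leaf; all ones here
        System.out.println("grad dims: " + grad.dim());
    }
}
```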

- * \fn void backward(const Tensor & gradient={}, c10::optional retain_graph=c10::nullopt, bool create_graph=false, c10::optional inputs=c10::nullopt) const; + * \fn void backward(const Tensor & gradient={}, std::optional retain_graph=c10::nullopt, bool create_graph=false, std::optional inputs=c10::nullopt) const; * * Computes the gradient of current tensor with respect to graph leaves. * @@ -298,7 +299,7 @@ private native void allocate( /// /// /// - public native void backward(@Const @ByRef(nullValue = "at::Tensor{}") Tensor gradient, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional retain_graph, @Cast("bool") boolean create_graph/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") TensorArrayRefOptional inputs); + public native void backward(@Const @ByRef(nullValue = "at::Tensor{}") Tensor gradient, @ByVal(nullValue = "std::optional(c10::nullopt)") BoolOptional retain_graph, @Cast("bool") boolean create_graph/*=false*/, @ByVal(nullValue = "std::optional(c10::nullopt)") TensorArrayRefOptional inputs); public native void backward(); /** \fn Tensor detach() const; @@ -353,9 +354,9 @@ private native void allocate( //example //Tensor * add(Tensor & b); - public native void __dispatch__backward(@ByVal TensorArrayRef inputs, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional gradient, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional retain_graph, @Cast("bool") boolean create_graph/*=false*/); + public native void __dispatch__backward(@ByVal TensorArrayRef inputs, @Const @ByRef(nullValue = "std::optional{}") TensorOptional gradient, @ByVal(nullValue = "std::optional(::std::nullopt)") BoolOptional retain_graph, @Cast("bool") boolean create_graph/*=false*/); public native void __dispatch__backward(@ByVal TensorArrayRef inputs); - public native void __dispatch__backward(@ByVal TensorVector inputs, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional gradient, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional retain_graph, @Cast("bool") boolean create_graph/*=false*/); + public native void __dispatch__backward(@ByVal TensorVector inputs, @Const @ByRef(nullValue = "std::optional{}") TensorOptional gradient, @ByVal(nullValue = "std::optional(::std::nullopt)") BoolOptional retain_graph, @Cast("bool") boolean create_graph/*=false*/); public native void __dispatch__backward(@ByVal TensorVector inputs); public native void __dispatch_set_data(@Const @ByRef Tensor new_data); public native @ByVal Tensor __dispatch_data(); @@ -383,7 +384,7 @@ private native void allocate( public native @ByVal Tensor angle(); public native @ByVal Tensor sgn(); public native @ByRef Tensor sgn_(); - public native @ByVal Tensor chalf(@ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); + public native @ByVal Tensor chalf(@ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); public native @ByVal Tensor chalf(); public native @ByVal Tensor _conj(); public native @ByVal Tensor __dispatch_conj(); @@ -433,9 +434,9 @@ private native void allocate( public native @ByVal Tensor any(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... 
dim); public native @ByVal Tensor any(@ByVal Dimname dim, @Cast("bool") boolean keepdim/*=false*/); public native @ByVal Tensor any(@ByVal Dimname dim); - public native @ByVal Tensor argmax(@ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/); + public native @ByVal Tensor argmax(@ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/); public native @ByVal Tensor argmax(); - public native @ByVal Tensor argmin(@ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/); + public native @ByVal Tensor argmin(@ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/); public native @ByVal Tensor argmin(); public native @ByVal Tensor acosh(); public native @ByRef Tensor acosh_(); @@ -449,17 +450,17 @@ private native void allocate( public native @ByRef Tensor atanh_(); public native @ByVal Tensor arctanh(); public native @ByRef Tensor arctanh_(); - public native @ByVal Tensor as_strided(@ByVal LongArrayRef size, @ByVal LongArrayRef stride, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional storage_offset); + public native @ByVal Tensor as_strided(@ByVal LongArrayRef size, @ByVal LongArrayRef stride, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional storage_offset); public native @ByVal Tensor as_strided(@ByVal LongArrayRef size, @ByVal LongArrayRef stride); - public native @ByVal Tensor as_strided(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional storage_offset); + public native @ByVal Tensor as_strided(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional storage_offset); public native @ByVal Tensor as_strided(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... 
stride); - public native @ByVal Tensor as_strided_symint(@ByVal SymIntArrayRef size, @ByVal SymIntArrayRef stride, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional storage_offset); + public native @ByVal Tensor as_strided_symint(@ByVal SymIntArrayRef size, @ByVal SymIntArrayRef stride, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional storage_offset); public native @ByVal Tensor as_strided_symint(@ByVal SymIntArrayRef size, @ByVal SymIntArrayRef stride); - public native @Const @ByRef Tensor as_strided_(@ByVal LongArrayRef size, @ByVal LongArrayRef stride, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional storage_offset); + public native @Const @ByRef Tensor as_strided_(@ByVal LongArrayRef size, @ByVal LongArrayRef stride, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional storage_offset); public native @Const @ByRef Tensor as_strided_(@ByVal LongArrayRef size, @ByVal LongArrayRef stride); - public native @Const @ByRef Tensor as_strided_(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional storage_offset); + public native @Const @ByRef Tensor as_strided_(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional storage_offset); public native @Const @ByRef Tensor as_strided_(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... 
stride); - public native @Const @ByRef Tensor as_strided__symint(@ByVal SymIntArrayRef size, @ByVal SymIntArrayRef stride, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional storage_offset); + public native @Const @ByRef Tensor as_strided__symint(@ByVal SymIntArrayRef size, @ByVal SymIntArrayRef stride, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional storage_offset); public native @Const @ByRef Tensor as_strided__symint(@ByVal SymIntArrayRef size, @ByVal SymIntArrayRef stride); public native @ByVal Tensor asin(); public native @ByRef Tensor asin_(); @@ -473,15 +474,15 @@ private native void allocate( public native @ByVal Tensor baddbmm(@Const @ByRef Tensor batch1, @Const @ByRef Tensor batch2); public native @ByRef Tensor baddbmm_(@Const @ByRef Tensor batch1, @Const @ByRef Tensor batch2, @Const @ByRef(nullValue = "at::Scalar(1)") Scalar beta, @Const @ByRef(nullValue = "at::Scalar(1)") Scalar alpha); public native @ByRef Tensor baddbmm_(@Const @ByRef Tensor batch1, @Const @ByRef Tensor batch2); - public native @ByVal Tensor bernoulli(@ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); + public native @ByVal Tensor bernoulli(@ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); public native @ByVal Tensor bernoulli(); - public native @ByRef Tensor bernoulli_(@Const @ByRef Tensor p, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); + public native @ByRef Tensor bernoulli_(@Const @ByRef Tensor p, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); public native @ByRef Tensor bernoulli_(@Const @ByRef Tensor p); - public native @ByRef Tensor bernoulli_(double p/*=0.5*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); + public native @ByRef Tensor bernoulli_(double p/*=0.5*/, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); public native @ByRef Tensor bernoulli_(); - public native @ByVal Tensor bernoulli(double p, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); + public native @ByVal Tensor bernoulli(double p, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); public native @ByVal Tensor bernoulli(double p); - public native @ByVal Tensor bincount(@Const @ByRef(nullValue = "c10::optional{}") TensorOptional weights, @Cast("int64_t") long minlength/*=0*/); + public native @ByVal Tensor bincount(@Const @ByRef(nullValue = "std::optional{}") TensorOptional weights, @Cast("int64_t") long minlength/*=0*/); public native @ByVal Tensor bincount(); public native @ByVal Tensor bitwise_not(); public native @ByRef Tensor bitwise_not_(); @@ -520,13 +521,13 @@ private native void allocate( public native @ByVal TensorVector tensor_split_symint(@ByVal SymIntArrayRef indices); public native @ByVal TensorVector tensor_split(@Const @ByRef Tensor tensor_indices_or_sections, @Cast("int64_t") long dim/*=0*/); public native @ByVal TensorVector tensor_split(@Const @ByRef Tensor tensor_indices_or_sections); - public native @ByVal Tensor clamp(@Const @ByRef ScalarOptional min, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional max); + public native @ByVal Tensor clamp(@Const @ByRef ScalarOptional min, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional max); public native @ByVal Tensor clamp(@Const @ByRef ScalarOptional min); - public native @ByVal Tensor clamp(@Const @ByRef(nullValue = "c10::optional{}") 
TensorOptional min, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional max); + public native @ByVal Tensor clamp(@Const @ByRef(nullValue = "std::optional{}") TensorOptional min, @Const @ByRef(nullValue = "std::optional{}") TensorOptional max); public native @ByVal Tensor clamp(); - public native @ByRef Tensor clamp_(@Const @ByRef ScalarOptional min, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional max); + public native @ByRef Tensor clamp_(@Const @ByRef ScalarOptional min, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional max); public native @ByRef Tensor clamp_(@Const @ByRef ScalarOptional min); - public native @ByRef Tensor clamp_(@Const @ByRef(nullValue = "c10::optional{}") TensorOptional min, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional max); + public native @ByRef Tensor clamp_(@Const @ByRef(nullValue = "std::optional{}") TensorOptional min, @Const @ByRef(nullValue = "std::optional{}") TensorOptional max); public native @ByRef Tensor clamp_(); public native @ByVal Tensor clamp_max(@Const @ByRef Scalar max); public native @ByVal Tensor clamp_max(@Const @ByRef Tensor max); @@ -536,13 +537,13 @@ private native void allocate( public native @ByVal Tensor clamp_min(@Const @ByRef Tensor min); public native @ByRef Tensor clamp_min_(@Const @ByRef Scalar min); public native @ByRef Tensor clamp_min_(@Const @ByRef Tensor min); - public native @ByVal Tensor clip(@Const @ByRef ScalarOptional min, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional max); + public native @ByVal Tensor clip(@Const @ByRef ScalarOptional min, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional max); public native @ByVal Tensor clip(@Const @ByRef ScalarOptional min); - public native @ByVal Tensor clip(@Const @ByRef(nullValue = "c10::optional{}") TensorOptional min, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional max); + public native @ByVal Tensor clip(@Const @ByRef(nullValue = "std::optional{}") TensorOptional min, @Const @ByRef(nullValue = "std::optional{}") TensorOptional max); public native @ByVal Tensor clip(); - public native @ByRef Tensor clip_(@Const @ByRef ScalarOptional min, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional max); + public native @ByRef Tensor clip_(@Const @ByRef ScalarOptional min, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional max); public native @ByRef Tensor clip_(@Const @ByRef ScalarOptional min); - public native @ByRef Tensor clip_(@Const @ByRef(nullValue = "c10::optional{}") TensorOptional min, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional max); + public native @ByRef Tensor clip_(@Const @ByRef(nullValue = "std::optional{}") TensorOptional min, @Const @ByRef(nullValue = "std::optional{}") TensorOptional max); public native @ByRef Tensor clip_(); public native @ByVal Tensor __dispatch_contiguous(@ByVal(nullValue = "at::MemoryFormat(c10::MemoryFormat::Contiguous)") MemoryFormat memory_format); public native @ByVal Tensor __dispatch_contiguous(); @@ -554,30 +555,30 @@ private native void allocate( public native @ByRef Tensor cosh_(); public native @ByVal Tensor count_nonzero(@ByVal LongArrayRef dim); public native @ByVal Tensor count_nonzero(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... 
dim); - public native @ByVal Tensor count_nonzero(@ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim); + public native @ByVal Tensor count_nonzero(@ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim); public native @ByVal Tensor count_nonzero(); - public native @ByVal Tensor cov(@Cast("int64_t") long correction/*=1*/, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional fweights, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional aweights); + public native @ByVal Tensor cov(@Cast("int64_t") long correction/*=1*/, @Const @ByRef(nullValue = "std::optional{}") TensorOptional fweights, @Const @ByRef(nullValue = "std::optional{}") TensorOptional aweights); public native @ByVal Tensor cov(); public native @ByVal Tensor corrcoef(); public native @ByVal T_TensorTensor_T cummax(@Cast("int64_t") long dim); public native @ByVal T_TensorTensor_T cummax(@ByVal Dimname dim); public native @ByVal T_TensorTensor_T cummin(@Cast("int64_t") long dim); public native @ByVal T_TensorTensor_T cummin(@ByVal Dimname dim); - public native @ByVal Tensor cumprod(@Cast("int64_t") long dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); + public native @ByVal Tensor cumprod(@Cast("int64_t") long dim, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); public native @ByVal Tensor cumprod(@Cast("int64_t") long dim); - public native @ByRef Tensor cumprod_(@Cast("int64_t") long dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); + public native @ByRef Tensor cumprod_(@Cast("int64_t") long dim, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); public native @ByRef Tensor cumprod_(@Cast("int64_t") long dim); - public native @ByVal Tensor cumprod(@ByVal Dimname dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); + public native @ByVal Tensor cumprod(@ByVal Dimname dim, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); public native @ByVal Tensor cumprod(@ByVal Dimname dim); - public native @ByRef Tensor cumprod_(@ByVal Dimname dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); + public native @ByRef Tensor cumprod_(@ByVal Dimname dim, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); public native @ByRef Tensor cumprod_(@ByVal Dimname dim); - public native @ByVal Tensor cumsum(@Cast("int64_t") long dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); + public native @ByVal Tensor cumsum(@Cast("int64_t") long dim, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); public native @ByVal Tensor cumsum(@Cast("int64_t") long dim); - public native @ByRef Tensor cumsum_(@Cast("int64_t") long dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); + public native @ByRef Tensor cumsum_(@Cast("int64_t") long dim, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); public native @ByRef Tensor cumsum_(@Cast("int64_t") long dim); - public native @ByVal Tensor cumsum(@ByVal Dimname dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); + public native @ByVal Tensor cumsum(@ByVal Dimname dim, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); public native @ByVal Tensor cumsum(@ByVal Dimname dim); - public native @ByRef Tensor cumsum_(@ByVal Dimname dim, @ByVal(nullValue = 
"c10::optional(c10::nullopt)") ScalarTypeOptional dtype); + public native @ByRef Tensor cumsum_(@ByVal Dimname dim, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); public native @ByRef Tensor cumsum_(@ByVal Dimname dim); public native @ByVal Tensor diag_embed(@Cast("int64_t") long offset/*=0*/, @Cast("int64_t") long dim1/*=-2*/, @Cast("int64_t") long dim2/*=-1*/); public native @ByVal Tensor diag_embed(); @@ -589,7 +590,7 @@ private native void allocate( public native @ByVal Tensor diagonal(@ByVal Dimname outdim, @ByVal Dimname dim1, @ByVal Dimname dim2); public native @ByRef Tensor fill_diagonal_(@Const @ByRef Scalar fill_value, @Cast("bool") boolean wrap/*=false*/); public native @ByRef Tensor fill_diagonal_(@Const @ByRef Scalar fill_value); - public native @ByVal Tensor diff(@Cast("int64_t") long n/*=1*/, @Cast("int64_t") long dim/*=-1*/, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional prepend, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional append); + public native @ByVal Tensor diff(@Cast("int64_t") long n/*=1*/, @Cast("int64_t") long dim/*=-1*/, @Const @ByRef(nullValue = "std::optional{}") TensorOptional prepend, @Const @ByRef(nullValue = "std::optional{}") TensorOptional append); public native @ByVal Tensor diff(); public native @ByVal Tensor div(@Const @ByRef Tensor other); public native @ByRef Tensor div_(@Const @ByRef Tensor other); @@ -658,11 +659,11 @@ private native void allocate( public native @ByVal Tensor new_ones_symint(@ByVal SymIntArrayRef size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options); public native @ByVal Tensor new_ones_symint(@ByVal SymIntArrayRef size); public native @ByVal Tensor new_ones_symint(@ByVal SymIntArrayRef size, @ByVal ScalarTypeOptional dtype, @ByVal LayoutOptional layout, @ByVal DeviceOptional device, @ByVal BoolOptional pin_memory); - public native @Const @ByRef Tensor resize_(@ByVal LongArrayRef size, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); + public native @Const @ByRef Tensor resize_(@ByVal LongArrayRef size, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); public native @Const @ByRef Tensor resize_(@ByVal LongArrayRef size); - public native @Const @ByRef Tensor resize_(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); + public native @Const @ByRef Tensor resize_(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); public native @Const @ByRef Tensor resize_(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... 
size); - public native @Const @ByRef Tensor resize__symint(@ByVal SymIntArrayRef size, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); + public native @Const @ByRef Tensor resize__symint(@ByVal SymIntArrayRef size, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); public native @Const @ByRef Tensor resize__symint(@ByVal SymIntArrayRef size); public native @ByVal Tensor erf(); public native @ByRef Tensor erf_(); @@ -736,9 +737,9 @@ private native void allocate( public native @ByVal T_TensorTensor_T kthvalue(@Cast("int64_t") long k); public native @ByVal T_TensorTensor_T kthvalue(@Cast("int64_t") long k, @ByVal Dimname dim, @Cast("bool") boolean keepdim/*=false*/); public native @ByVal T_TensorTensor_T kthvalue(@Cast("int64_t") long k, @ByVal Dimname dim); - public native @ByVal Tensor nan_to_num(@ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional nan, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional posinf, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional neginf); + public native @ByVal Tensor nan_to_num(@ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional nan, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional posinf, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional neginf); public native @ByVal Tensor nan_to_num(); - public native @ByRef Tensor nan_to_num_(@ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional nan, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional posinf, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional neginf); + public native @ByRef Tensor nan_to_num_(@ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional nan, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional posinf, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional neginf); public native @ByRef Tensor nan_to_num_(); public native @ByVal Tensor ldexp(@Const @ByRef Tensor other); public native @ByRef Tensor ldexp_(@Const @ByRef Tensor other); @@ -756,9 +757,9 @@ private native void allocate( public native @ByVal Tensor xlogy(@Const @ByRef Scalar other); public native @ByRef Tensor xlogy_(@Const @ByRef Tensor other); public native @ByRef Tensor xlogy_(@Const @ByRef Scalar other); - public native @ByVal Tensor log_softmax(@Cast("int64_t") long dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); + public native @ByVal Tensor log_softmax(@Cast("int64_t") long dim, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); public native @ByVal Tensor log_softmax(@Cast("int64_t") long dim); - public native @ByVal Tensor log_softmax(@ByVal Dimname dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); + public native @ByVal Tensor log_softmax(@ByVal Dimname dim, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); public native @ByVal Tensor log_softmax(@ByVal Dimname dim); public native @ByVal Tensor logcumsumexp(@Cast("int64_t") long dim); public native @ByVal Tensor logcumsumexp(@ByVal Dimname dim); @@ -773,7 +774,7 @@ private native void allocate( public native @ByVal Tensor matmul(@Const @ByRef Tensor other); public native @ByVal Tensor matrix_power(@Cast("int64_t") long n); public native @ByVal Tensor matrix_exp(); - public native @ByVal T_TensorTensor_T aminmax(@ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @Cast("bool") boolean 
keepdim/*=false*/); + public native @ByVal T_TensorTensor_T aminmax(@ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/); public native @ByVal T_TensorTensor_T aminmax(); public native @ByVal T_TensorTensor_T max(@Cast("int64_t") long dim, @Cast("bool") boolean keepdim/*=false*/); public native @ByVal T_TensorTensor_T max(@Cast("int64_t") long dim); @@ -782,19 +783,19 @@ private native void allocate( public native @ByVal Tensor amax(@ByVal(nullValue = "at::IntArrayRef{}") LongArrayRef dim, @Cast("bool") boolean keepdim/*=false*/); public native @ByVal Tensor amax(); public native @ByVal Tensor amax(@ByVal(nullValue = "at::IntArrayRef{}") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @Cast("bool") boolean keepdim/*=false*/); - public native @ByVal Tensor mean(@ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); + public native @ByVal Tensor mean(@ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); public native @ByVal Tensor mean(); - public native @ByVal Tensor mean(@ByVal LongArrayRefOptional dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); + public native @ByVal Tensor mean(@ByVal LongArrayRefOptional dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); public native @ByVal Tensor mean(@ByVal LongArrayRefOptional dim); - public native @ByVal Tensor mean(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); + public native @ByVal Tensor mean(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); public native @ByVal Tensor mean(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... 
dim); - public native @ByVal Tensor mean(@ByVal DimnameArrayRef dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); + public native @ByVal Tensor mean(@ByVal DimnameArrayRef dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); public native @ByVal Tensor mean(@ByVal DimnameArrayRef dim); - public native @ByVal Tensor mean(@ByVal DimnameVector dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); + public native @ByVal Tensor mean(@ByVal DimnameVector dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); public native @ByVal Tensor mean(@ByVal DimnameVector dim); - public native @ByVal Tensor nanmean(@ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); + public native @ByVal Tensor nanmean(@ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); public native @ByVal Tensor nanmean(); - public native @ByVal Tensor nanmean(@ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); + public native @ByVal Tensor nanmean(@ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); public native @ByVal Tensor median(); public native @ByVal T_TensorTensor_T median(@Cast("int64_t") long dim, @Cast("bool") boolean keepdim/*=false*/); public native @ByVal T_TensorTensor_T median(@Cast("int64_t") long dim); @@ -847,9 +848,9 @@ private native void allocate( public native @ByVal Tensor mT(); public native @ByVal Tensor mH(); public native @ByVal Tensor adjoint(); - public native @Cast("bool") boolean is_pinned(@ByVal(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + public native @Cast("bool") boolean is_pinned(@ByVal(nullValue = "std::optional(::std::nullopt)") DeviceOptional device); public native @Cast("bool") boolean is_pinned(); - public native @ByVal Tensor pin_memory(@ByVal(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + public native @ByVal Tensor pin_memory(@ByVal(nullValue = "std::optional(::std::nullopt)") DeviceOptional device); public native @ByVal Tensor pin_memory(); public native @ByVal Tensor pinverse(double rcond/*=1e-15*/); public native @ByVal Tensor pinverse(); @@ -867,13 +868,13 @@ private native void allocate( public native @ByVal Tensor repeat(@ByVal LongArrayRef repeats); public native @ByVal Tensor repeat(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... 
repeats); public native @ByVal Tensor repeat_symint(@ByVal SymIntArrayRef repeats); - public native @ByVal Tensor repeat_interleave(@Const @ByRef Tensor repeats, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional output_size); + public native @ByVal Tensor repeat_interleave(@Const @ByRef Tensor repeats, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional output_size); public native @ByVal Tensor repeat_interleave(@Const @ByRef Tensor repeats); - public native @ByVal Tensor repeat_interleave_symint(@Const @ByRef Tensor repeats, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional output_size); + public native @ByVal Tensor repeat_interleave_symint(@Const @ByRef Tensor repeats, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional output_size); public native @ByVal Tensor repeat_interleave_symint(@Const @ByRef Tensor repeats); - public native @ByVal Tensor repeat_interleave(@Cast("int64_t") long repeats, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional output_size); + public native @ByVal Tensor repeat_interleave(@Cast("int64_t") long repeats, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional output_size); public native @ByVal Tensor repeat_interleave(@Cast("int64_t") long repeats); - public native @ByVal Tensor repeat_interleave_symint(@ByVal SymInt repeats, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional output_size); + public native @ByVal Tensor repeat_interleave_symint(@ByVal SymInt repeats, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional output_size); public native @ByVal Tensor repeat_interleave_symint(@ByVal SymInt repeats); public native @ByVal Tensor reshape(@ByVal LongArrayRef shape); public native @ByVal Tensor reshape(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... 
shape); @@ -899,9 +900,9 @@ private native void allocate( public native @ByVal Tensor select_symint(@Cast("int64_t") long dim, @ByVal SymInt index); public native @ByVal Tensor sigmoid(); public native @ByRef Tensor sigmoid_(); - public native @ByVal Tensor logit(@ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional eps); + public native @ByVal Tensor logit(@ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional eps); public native @ByVal Tensor logit(); - public native @ByRef Tensor logit_(@ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional eps); + public native @ByRef Tensor logit_(@ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional eps); public native @ByRef Tensor logit_(); public native @ByVal Tensor sin(); public native @ByRef Tensor sin_(); @@ -912,32 +913,32 @@ private native void allocate( public native @ByVal Tensor detach(); public native @ByRef Tensor detach_(); public native @Cast("int64_t") long size(@ByVal Dimname dim); - public native @ByVal Tensor slice(@Cast("int64_t") long dim/*=0*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional start, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional end, @Cast("int64_t") long step/*=1*/); + public native @ByVal Tensor slice(@Cast("int64_t") long dim/*=0*/, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional start, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional end, @Cast("int64_t") long step/*=1*/); public native @ByVal Tensor slice(); - public native @ByVal Tensor slice_symint(@Cast("int64_t") long dim/*=0*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional start, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional end, @ByVal(nullValue = "c10::SymInt(1)") SymInt step); + public native @ByVal Tensor slice_symint(@Cast("int64_t") long dim/*=0*/, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional start, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional end, @ByVal(nullValue = "c10::SymInt(1)") SymInt step); public native @ByVal Tensor slice_symint(); - public native @ByVal Tensor slice_inverse(@Const @ByRef Tensor src, @Cast("int64_t") long dim/*=0*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional start, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional end, @Cast("int64_t") long step/*=1*/); + public native @ByVal Tensor slice_inverse(@Const @ByRef Tensor src, @Cast("int64_t") long dim/*=0*/, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional start, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional end, @Cast("int64_t") long step/*=1*/); public native @ByVal Tensor slice_inverse(@Const @ByRef Tensor src); - public native @ByVal Tensor slice_inverse_symint(@Const @ByRef Tensor src, @Cast("int64_t") long dim/*=0*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional start, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional end, @ByVal(nullValue = "c10::SymInt(1)") SymInt step); + public native @ByVal Tensor slice_inverse_symint(@Const @ByRef Tensor src, @Cast("int64_t") long dim/*=0*/, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional start, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional end, @ByVal(nullValue = "c10::SymInt(1)") SymInt step); public native @ByVal Tensor slice_inverse_symint(@Const @ByRef Tensor src); - public native @ByVal Tensor slice_scatter(@Const @ByRef Tensor src, @Cast("int64_t") long dim/*=0*/, @ByVal(nullValue = 
"c10::optional(c10::nullopt)") LongOptional start, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional end, @Cast("int64_t") long step/*=1*/); + public native @ByVal Tensor slice_scatter(@Const @ByRef Tensor src, @Cast("int64_t") long dim/*=0*/, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional start, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional end, @Cast("int64_t") long step/*=1*/); public native @ByVal Tensor slice_scatter(@Const @ByRef Tensor src); - public native @ByVal Tensor slice_scatter_symint(@Const @ByRef Tensor src, @Cast("int64_t") long dim/*=0*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional start, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional end, @ByVal(nullValue = "c10::SymInt(1)") SymInt step); + public native @ByVal Tensor slice_scatter_symint(@Const @ByRef Tensor src, @Cast("int64_t") long dim/*=0*/, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional start, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional end, @ByVal(nullValue = "c10::SymInt(1)") SymInt step); public native @ByVal Tensor slice_scatter_symint(@Const @ByRef Tensor src); public native @ByVal Tensor select_scatter(@Const @ByRef Tensor src, @Cast("int64_t") long dim, @Cast("int64_t") long index); public native @ByVal Tensor select_scatter_symint(@Const @ByRef Tensor src, @Cast("int64_t") long dim, @ByVal SymInt index); public native @ByVal Tensor diagonal_scatter(@Const @ByRef Tensor src, @Cast("int64_t") long offset/*=0*/, @Cast("int64_t") long dim1/*=0*/, @Cast("int64_t") long dim2/*=1*/); public native @ByVal Tensor diagonal_scatter(@Const @ByRef Tensor src); - public native @ByVal Tensor as_strided_scatter(@Const @ByRef Tensor src, @ByVal LongArrayRef size, @ByVal LongArrayRef stride, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional storage_offset); + public native @ByVal Tensor as_strided_scatter(@Const @ByRef Tensor src, @ByVal LongArrayRef size, @ByVal LongArrayRef stride, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional storage_offset); public native @ByVal Tensor as_strided_scatter(@Const @ByRef Tensor src, @ByVal LongArrayRef size, @ByVal LongArrayRef stride); - public native @ByVal Tensor as_strided_scatter(@Const @ByRef Tensor src, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional storage_offset); + public native @ByVal Tensor as_strided_scatter(@Const @ByRef Tensor src, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional storage_offset); public native @ByVal Tensor as_strided_scatter(@Const @ByRef Tensor src, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... 
stride); - public native @ByVal Tensor as_strided_scatter_symint(@Const @ByRef Tensor src, @ByVal SymIntArrayRef size, @ByVal SymIntArrayRef stride, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional storage_offset); + public native @ByVal Tensor as_strided_scatter_symint(@Const @ByRef Tensor src, @ByVal SymIntArrayRef size, @ByVal SymIntArrayRef stride, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional storage_offset); public native @ByVal Tensor as_strided_scatter_symint(@Const @ByRef Tensor src, @ByVal SymIntArrayRef size, @ByVal SymIntArrayRef stride); public native @ByVal Tensor smm(@Const @ByRef Tensor mat2); - public native @ByVal Tensor softmax(@Cast("int64_t") long dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); + public native @ByVal Tensor softmax(@Cast("int64_t") long dim, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); public native @ByVal Tensor softmax(@Cast("int64_t") long dim); - public native @ByVal Tensor softmax(@ByVal Dimname dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); + public native @ByVal Tensor softmax(@ByVal Dimname dim, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); public native @ByVal Tensor softmax(@ByVal Dimname dim); public native @ByVal TensorVector unsafe_split(@Cast("int64_t") long split_size, @Cast("int64_t") long dim/*=0*/); public native @ByVal TensorVector unsafe_split(@Cast("int64_t") long split_size); @@ -986,25 +987,25 @@ private native void allocate( public native @ByRef Tensor squeeze_(@ByVal Dimname dim); public native @ByVal Tensor sspaddmm(@Const @ByRef Tensor mat1, @Const @ByRef Tensor mat2, @Const @ByRef(nullValue = "at::Scalar(1)") Scalar beta, @Const @ByRef(nullValue = "at::Scalar(1)") Scalar alpha); public native @ByVal Tensor sspaddmm(@Const @ByRef Tensor mat1, @Const @ByRef Tensor mat2); - public native @ByVal Tensor stft(@Cast("int64_t") long n_fft, @ByVal LongOptional hop_length, @ByVal LongOptional win_length, @Const @ByRef TensorOptional window, @Cast("bool") boolean normalized, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional onesided, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional return_complex); - public native @ByVal Tensor stft(@Cast("int64_t") long n_fft, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional hop_length, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional win_length, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional window, @Cast("bool") boolean center/*=true*/, @StringView BytePointer pad_mode/*="reflect"*/, @Cast("bool") boolean normalized/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional onesided, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional return_complex); - public native @ByVal Tensor stft(@Cast("int64_t") long n_fft, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional hop_length, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional win_length, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional window, @Cast("bool") boolean center/*=true*/, @StringView String pad_mode/*="reflect"*/, @Cast("bool") boolean normalized/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional onesided, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional return_complex); - public native @ByVal Tensor istft(@Cast("int64_t") long n_fft, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional 
hop_length, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional win_length, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional window, @Cast("bool") boolean center/*=true*/, @Cast("bool") boolean normalized/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional onesided, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional length, @Cast("bool") boolean return_complex/*=false*/); + public native @ByVal Tensor stft(@Cast("int64_t") long n_fft, @ByVal LongOptional hop_length, @ByVal LongOptional win_length, @Const @ByRef TensorOptional window, @Cast("bool") boolean normalized, @ByVal(nullValue = "std::optional(::std::nullopt)") BoolOptional onesided, @ByVal(nullValue = "std::optional(::std::nullopt)") BoolOptional return_complex); + public native @ByVal Tensor stft(@Cast("int64_t") long n_fft, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional hop_length, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional win_length, @Const @ByRef(nullValue = "std::optional{}") TensorOptional window, @Cast("bool") boolean center/*=true*/, @StringView BytePointer pad_mode/*="reflect"*/, @Cast("bool") boolean normalized/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") BoolOptional onesided, @ByVal(nullValue = "std::optional(::std::nullopt)") BoolOptional return_complex); + public native @ByVal Tensor stft(@Cast("int64_t") long n_fft, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional hop_length, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional win_length, @Const @ByRef(nullValue = "std::optional{}") TensorOptional window, @Cast("bool") boolean center/*=true*/, @StringView String pad_mode/*="reflect"*/, @Cast("bool") boolean normalized/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") BoolOptional onesided, @ByVal(nullValue = "std::optional(::std::nullopt)") BoolOptional return_complex); + public native @ByVal Tensor istft(@Cast("int64_t") long n_fft, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional hop_length, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional win_length, @Const @ByRef(nullValue = "std::optional{}") TensorOptional window, @Cast("bool") boolean center/*=true*/, @Cast("bool") boolean normalized/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") BoolOptional onesided, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional length, @Cast("bool") boolean return_complex/*=false*/); public native @ByVal Tensor istft(@Cast("int64_t") long n_fft); public native @Cast("int64_t") long stride(@ByVal Dimname dim); - public native @ByVal Tensor sum(@ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); + public native @ByVal Tensor sum(@ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); public native @ByVal Tensor sum(); - public native @ByVal Tensor sum(@ByVal LongArrayRefOptional dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); + public native @ByVal Tensor sum(@ByVal LongArrayRefOptional dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); public native @ByVal Tensor sum(@ByVal LongArrayRefOptional dim); - public native @ByVal Tensor sum(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional 
dtype); + public native @ByVal Tensor sum(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); public native @ByVal Tensor sum(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... dim); - public native @ByVal Tensor sum(@ByVal DimnameArrayRef dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); + public native @ByVal Tensor sum(@ByVal DimnameArrayRef dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); public native @ByVal Tensor sum(@ByVal DimnameArrayRef dim); - public native @ByVal Tensor sum(@ByVal DimnameVector dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); + public native @ByVal Tensor sum(@ByVal DimnameVector dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); public native @ByVal Tensor sum(@ByVal DimnameVector dim); - public native @ByVal Tensor nansum(@ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); + public native @ByVal Tensor nansum(@ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); public native @ByVal Tensor nansum(); - public native @ByVal Tensor nansum(@ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); + public native @ByVal Tensor nansum(@ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); public native @ByVal Tensor sum_to_size(@ByVal LongArrayRef size); public native @ByVal Tensor sum_to_size(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... 
size); public native @ByVal Tensor sum_to_size_symint(@ByVal SymIntArrayRef size); @@ -1017,22 +1018,22 @@ private native void allocate( public native @ByVal Tensor std(@ByVal LongArrayRefOptional dim, @Cast("bool") boolean unbiased); public native @ByVal Tensor std(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean unbiased, @Cast("bool") boolean keepdim/*=false*/); public native @ByVal Tensor std(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean unbiased); - public native @ByVal Tensor std(@ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); + public native @ByVal Tensor std(@ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); public native @ByVal Tensor std(); - public native @ByVal Tensor std(@ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); + public native @ByVal Tensor std(@ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); public native @ByVal Tensor std(@ByVal DimnameArrayRef dim, @Cast("bool") boolean unbiased, @Cast("bool") boolean keepdim/*=false*/); public native @ByVal Tensor std(@ByVal DimnameArrayRef dim, @Cast("bool") boolean unbiased); public native @ByVal Tensor std(@ByVal DimnameVector dim, @Cast("bool") boolean unbiased, @Cast("bool") boolean keepdim/*=false*/); public native @ByVal Tensor std(@ByVal DimnameVector dim, @Cast("bool") boolean unbiased); - public native @ByVal Tensor std(@ByVal DimnameArrayRef dim, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); + public native @ByVal Tensor std(@ByVal DimnameArrayRef dim, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); public native @ByVal Tensor std(@ByVal DimnameArrayRef dim); - public native @ByVal Tensor std(@ByVal DimnameVector dim, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); + public native @ByVal Tensor std(@ByVal DimnameVector dim, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); public native @ByVal Tensor std(@ByVal DimnameVector dim); - public native @ByVal Tensor prod(@ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); + public native @ByVal Tensor prod(@ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); public native @ByVal Tensor prod(); - public native @ByVal Tensor prod(@Cast("int64_t") long dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); + public native @ByVal Tensor prod(@Cast("int64_t") long dim, @Cast("bool") boolean 
keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); public native @ByVal Tensor prod(@Cast("int64_t") long dim); - public native @ByVal Tensor prod(@ByVal Dimname dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); + public native @ByVal Tensor prod(@ByVal Dimname dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); public native @ByVal Tensor prod(@ByVal Dimname dim); public native @ByVal Tensor t(); public native @ByRef Tensor t_(); @@ -1075,16 +1076,16 @@ private native void allocate( public native @ByVal Tensor var(@ByVal LongArrayRefOptional dim, @Cast("bool") boolean unbiased); public native @ByVal Tensor var(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean unbiased, @Cast("bool") boolean keepdim/*=false*/); public native @ByVal Tensor var(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean unbiased); - public native @ByVal Tensor var(@ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); + public native @ByVal Tensor var(@ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); public native @ByVal Tensor var(); - public native @ByVal Tensor var(@ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); + public native @ByVal Tensor var(@ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); public native @ByVal Tensor var(@ByVal DimnameArrayRef dim, @Cast("bool") boolean unbiased, @Cast("bool") boolean keepdim/*=false*/); public native @ByVal Tensor var(@ByVal DimnameArrayRef dim, @Cast("bool") boolean unbiased); public native @ByVal Tensor var(@ByVal DimnameVector dim, @Cast("bool") boolean unbiased, @Cast("bool") boolean keepdim/*=false*/); public native @ByVal Tensor var(@ByVal DimnameVector dim, @Cast("bool") boolean unbiased); - public native @ByVal Tensor var(@ByVal DimnameArrayRef dim, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); + public native @ByVal Tensor var(@ByVal DimnameArrayRef dim, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); public native @ByVal Tensor var(@ByVal DimnameArrayRef dim); - public native @ByVal Tensor var(@ByVal DimnameVector dim, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); + public native @ByVal Tensor var(@ByVal DimnameVector dim, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); public native @ByVal Tensor var(@ByVal 
DimnameVector dim); public native @ByVal Tensor view_as(@Const @ByRef Tensor other); public native @ByVal Tensor where(@Const @ByRef Tensor condition, @Const @ByRef Tensor other); @@ -1105,10 +1106,10 @@ private native void allocate( public native @ByVal Tensor norm(@Const @ByRef ScalarOptional p, @ByVal DimnameVector dim, @Cast("bool") boolean keepdim/*=false*/); public native @ByVal Tensor norm(@Const @ByRef ScalarOptional p, @ByVal DimnameVector dim); public native @ByVal T_TensorTensor_T frexp(); - public native @ByVal Tensor clone(@ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); + public native @ByVal Tensor clone(@ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); public native @ByVal Tensor clone(); public native @ByVal Tensor positive(); - public native @Const @ByRef Tensor resize_as_(@Const @ByRef Tensor the_template, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); + public native @Const @ByRef Tensor resize_as_(@Const @ByRef Tensor the_template, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); public native @Const @ByRef Tensor resize_as_(@Const @ByRef Tensor the_template); public native @Const @ByRef Tensor resize_as_sparse_(@Const @ByRef Tensor the_template); public native @ByRef Tensor zero_(); @@ -1143,9 +1144,9 @@ private native void allocate( public native @ByVal Tensor sparse_mask(@Const @ByRef Tensor mask); public native @ByVal Tensor _sparse_mask_projection(@Const @ByRef Tensor mask, @Cast("bool") boolean accumulate_matches/*=false*/); public native @ByVal Tensor _sparse_mask_projection(@Const @ByRef Tensor mask); - public native @ByVal Tensor to_dense(@ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional masked_grad); + public native @ByVal Tensor to_dense(@ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype, @ByVal(nullValue = "std::optional(::std::nullopt)") BoolOptional masked_grad); public native @ByVal Tensor to_dense(); - public native @ByVal Tensor _to_dense(@ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional masked_grad); + public native @ByVal Tensor _to_dense(@ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype, @ByVal(nullValue = "std::optional(::std::nullopt)") BoolOptional masked_grad); public native @ByVal Tensor _to_dense(); public native @Cast("int64_t") long sparse_dim(); public native @Cast("int64_t") long _dimI(); @@ -1168,37 +1169,37 @@ private native void allocate( public native @ByVal TensorVector unbind(@ByVal Dimname dim); public native @ByVal Tensor to_sparse(@Cast("int64_t") long sparse_dim); public native @ByVal Tensor _to_sparse(@Cast("int64_t") long sparse_dim); - public native @ByVal Tensor to_sparse(@ByVal(nullValue = "c10::optional(c10::nullopt)") LayoutOptional layout, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional blocksize, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dense_dim); + public native @ByVal Tensor to_sparse(@ByVal(nullValue = "std::optional(::std::nullopt)") LayoutOptional layout, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional blocksize, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dense_dim); public native @ByVal Tensor to_sparse(); - public native 
@ByVal Tensor to_sparse(@ByVal(nullValue = "c10::optional(c10::nullopt)") LayoutOptional layout, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] blocksize, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dense_dim); - public native @ByVal Tensor _to_sparse(@ByVal(nullValue = "c10::optional(c10::nullopt)") LayoutOptional layout, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional blocksize, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dense_dim); + public native @ByVal Tensor to_sparse(@ByVal(nullValue = "std::optional(::std::nullopt)") LayoutOptional layout, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] blocksize, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dense_dim); + public native @ByVal Tensor _to_sparse(@ByVal(nullValue = "std::optional(::std::nullopt)") LayoutOptional layout, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional blocksize, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dense_dim); public native @ByVal Tensor _to_sparse(); - public native @ByVal Tensor _to_sparse(@ByVal(nullValue = "c10::optional(c10::nullopt)") LayoutOptional layout, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] blocksize, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dense_dim); - public native @ByVal Tensor to_sparse_csr(@ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dense_dim); + public native @ByVal Tensor _to_sparse(@ByVal(nullValue = "std::optional(::std::nullopt)") LayoutOptional layout, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] blocksize, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dense_dim); + public native @ByVal Tensor to_sparse_csr(@ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dense_dim); public native @ByVal Tensor to_sparse_csr(); - public native @ByVal Tensor _to_sparse_csr(@ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dense_dim); + public native @ByVal Tensor _to_sparse_csr(@ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dense_dim); public native @ByVal Tensor _to_sparse_csr(); - public native @ByVal Tensor to_sparse_csc(@ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dense_dim); + public native @ByVal Tensor to_sparse_csc(@ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dense_dim); public native @ByVal Tensor to_sparse_csc(); - public native @ByVal Tensor _to_sparse_csc(@ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dense_dim); + public native @ByVal Tensor _to_sparse_csc(@ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dense_dim); public native @ByVal Tensor _to_sparse_csc(); - public native @ByVal Tensor to_sparse_bsr(@ByVal LongArrayRef blocksize, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dense_dim); + public native @ByVal Tensor to_sparse_bsr(@ByVal LongArrayRef blocksize, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dense_dim); public native @ByVal Tensor to_sparse_bsr(@ByVal LongArrayRef blocksize); - public native @ByVal Tensor to_sparse_bsr(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) 
@StdVector("int64_t") long[] blocksize, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dense_dim); + public native @ByVal Tensor to_sparse_bsr(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] blocksize, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dense_dim); public native @ByVal Tensor to_sparse_bsr(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... blocksize); - public native @ByVal Tensor _to_sparse_bsr(@ByVal LongArrayRef blocksize, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dense_dim); + public native @ByVal Tensor _to_sparse_bsr(@ByVal LongArrayRef blocksize, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dense_dim); public native @ByVal Tensor _to_sparse_bsr(@ByVal LongArrayRef blocksize); - public native @ByVal Tensor _to_sparse_bsr(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] blocksize, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dense_dim); + public native @ByVal Tensor _to_sparse_bsr(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] blocksize, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dense_dim); public native @ByVal Tensor _to_sparse_bsr(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... blocksize); - public native @ByVal Tensor to_sparse_bsc(@ByVal LongArrayRef blocksize, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dense_dim); + public native @ByVal Tensor to_sparse_bsc(@ByVal LongArrayRef blocksize, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dense_dim); public native @ByVal Tensor to_sparse_bsc(@ByVal LongArrayRef blocksize); - public native @ByVal Tensor to_sparse_bsc(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] blocksize, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dense_dim); + public native @ByVal Tensor to_sparse_bsc(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] blocksize, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dense_dim); public native @ByVal Tensor to_sparse_bsc(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... blocksize); - public native @ByVal Tensor _to_sparse_bsc(@ByVal LongArrayRef blocksize, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dense_dim); + public native @ByVal Tensor _to_sparse_bsc(@ByVal LongArrayRef blocksize, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dense_dim); public native @ByVal Tensor _to_sparse_bsc(@ByVal LongArrayRef blocksize); - public native @ByVal Tensor _to_sparse_bsc(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] blocksize, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dense_dim); + public native @ByVal Tensor _to_sparse_bsc(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] blocksize, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dense_dim); public native @ByVal Tensor _to_sparse_bsc(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... 
blocksize); - public native @ByVal Tensor to_mkldnn(@ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); + public native @ByVal Tensor to_mkldnn(@ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); public native @ByVal Tensor to_mkldnn(); public native @ByVal Tensor dequantize(); public native double q_scale(); @@ -1210,14 +1211,14 @@ private native void allocate( public native @ByVal QScheme qscheme(); public native @ByVal Tensor _autocast_to_reduced_precision(@Cast("bool") boolean cuda_enabled, @Cast("bool") boolean cpu_enabled, ScalarType cuda_dtype, ScalarType cpu_dtype); public native @ByVal Tensor _autocast_to_full_precision(@Cast("bool") boolean cuda_enabled, @Cast("bool") boolean cpu_enabled); - public native @ByVal Tensor to(@ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @Cast("bool") boolean non_blocking/*=false*/, @Cast("bool") boolean copy/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); + public native @ByVal Tensor to(@ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @Cast("bool") boolean non_blocking/*=false*/, @Cast("bool") boolean copy/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); public native @ByVal Tensor to(); public native @ByVal Tensor to(@ByVal ScalarTypeOptional dtype, @ByVal LayoutOptional layout, @ByVal DeviceOptional device, @ByVal BoolOptional pin_memory, @Cast("bool") boolean non_blocking, @Cast("bool") boolean copy, @ByVal MemoryFormatOptional memory_format); - public native @ByVal Tensor to(@ByVal Device device, ScalarType dtype, @Cast("bool") boolean non_blocking/*=false*/, @Cast("bool") boolean copy/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); + public native @ByVal Tensor to(@ByVal Device device, ScalarType dtype, @Cast("bool") boolean non_blocking/*=false*/, @Cast("bool") boolean copy/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); public native @ByVal Tensor to(@ByVal Device device, ScalarType dtype); - public native @ByVal Tensor to(ScalarType dtype, @Cast("bool") boolean non_blocking/*=false*/, @Cast("bool") boolean copy/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); + public native @ByVal Tensor to(ScalarType dtype, @Cast("bool") boolean non_blocking/*=false*/, @Cast("bool") boolean copy/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); public native @ByVal Tensor to(ScalarType dtype); - public native @ByVal Tensor to(@Const @ByRef Tensor other, @Cast("bool") boolean non_blocking/*=false*/, @Cast("bool") boolean copy/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); + public native @ByVal Tensor to(@Const @ByRef Tensor other, @Cast("bool") boolean non_blocking/*=false*/, @Cast("bool") boolean copy/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); public native @ByVal Tensor to(@Const @ByRef Tensor other); public native @ByVal Scalar item(); public native @ByRef Tensor set_(@ByVal Storage source); @@ -1350,25 +1351,25 @@ private native void allocate( public native @ByRef Tensor addbmm_(@Const @ByRef Tensor batch1, @Const @ByRef Tensor batch2); public native @ByVal Tensor addbmm(@Const @ByRef Tensor batch1, @Const @ByRef Tensor batch2, @Const @ByRef(nullValue = 
"at::Scalar(1)") Scalar beta, @Const @ByRef(nullValue = "at::Scalar(1)") Scalar alpha); public native @ByVal Tensor addbmm(@Const @ByRef Tensor batch1, @Const @ByRef Tensor batch2); - public native @ByRef Tensor random_(@Cast("int64_t") long from, @ByVal LongOptional to, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); + public native @ByRef Tensor random_(@Cast("int64_t") long from, @ByVal LongOptional to, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); public native @ByRef Tensor random_(@Cast("int64_t") long from, @ByVal LongOptional to); - public native @ByRef Tensor random_(@Cast("int64_t") long to, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); + public native @ByRef Tensor random_(@Cast("int64_t") long to, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); public native @ByRef Tensor random_(@Cast("int64_t") long to); - public native @ByRef Tensor random_(@ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); + public native @ByRef Tensor random_(@ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); public native @ByRef Tensor random_(); - public native @ByRef Tensor uniform_(double from/*=0*/, double to/*=1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); + public native @ByRef Tensor uniform_(double from/*=0*/, double to/*=1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); public native @ByRef Tensor uniform_(); - public native @ByRef Tensor cauchy_(double median/*=0*/, double sigma/*=1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); + public native @ByRef Tensor cauchy_(double median/*=0*/, double sigma/*=1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); public native @ByRef Tensor cauchy_(); - public native @ByRef Tensor log_normal_(double mean/*=1*/, double std/*=2*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); + public native @ByRef Tensor log_normal_(double mean/*=1*/, double std/*=2*/, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); public native @ByRef Tensor log_normal_(); - public native @ByRef Tensor exponential_(double lambd/*=1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); + public native @ByRef Tensor exponential_(double lambd/*=1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); public native @ByRef Tensor exponential_(); - public native @ByRef Tensor geometric_(double p, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); + public native @ByRef Tensor geometric_(double p, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); public native @ByRef Tensor geometric_(double p); public native @ByVal Tensor diag(@Cast("int64_t") long diagonal/*=0*/); public native @ByVal Tensor diag(); - public native @ByVal Tensor cross(@Const @ByRef Tensor other, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim); + public native @ByVal Tensor cross(@Const @ByRef Tensor other, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim); public native @ByVal Tensor cross(@Const @ByRef Tensor other); public native @ByVal Tensor triu(@Cast("int64_t") long diagonal/*=0*/); public native @ByVal Tensor triu(); @@ -1418,7 +1419,7 @@ private native void 
allocate(
public native @ByRef Tensor less_(@Const @ByRef Scalar other);
public native @ByRef Tensor less_(@Const @ByRef Tensor other);
public native @ByVal Tensor take(@Const @ByRef Tensor index);
- public native @ByVal Tensor take_along_dim(@Const @ByRef Tensor indices, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim);
+ public native @ByVal Tensor take_along_dim(@Const @ByRef Tensor indices, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim);
public native @ByVal Tensor take_along_dim(@Const @ByRef Tensor indices);
public native @ByVal Tensor index_select(@Cast("int64_t") long dim, @Const @ByRef Tensor index);
public native @ByVal Tensor index_select(@ByVal Dimname dim, @Const @ByRef Tensor index);
@@ -1461,7 +1462,7 @@ private native void allocate(
public native @ByVal Tensor ormqr(@Const @ByRef Tensor input2, @Const @ByRef Tensor input3, @Cast("bool") boolean left/*=true*/, @Cast("bool") boolean transpose/*=false*/);
public native @ByVal Tensor ormqr(@Const @ByRef Tensor input2, @Const @ByRef Tensor input3);
public native @ByVal Tensor lu_solve(@Const @ByRef Tensor LU_data, @Const @ByRef Tensor LU_pivots);
- public native @ByVal Tensor multinomial(@Cast("int64_t") long num_samples, @Cast("bool") boolean replacement/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator);
+ public native @ByVal Tensor multinomial(@Cast("int64_t") long num_samples, @Cast("bool") boolean replacement/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator);
public native @ByVal Tensor multinomial(@Cast("int64_t") long num_samples);
public native @ByRef Tensor lgamma_();
public native @ByVal Tensor lgamma();
@@ -1485,11 +1486,11 @@ private native void allocate(
public native @ByVal Tensor lerp(@Const @ByRef Tensor end, @Const @ByRef Tensor weight);
public native @ByVal Tensor histc(@Cast("int64_t") long bins/*=100*/, @Const @ByRef(nullValue = "at::Scalar(0)") Scalar min, @Const @ByRef(nullValue = "at::Scalar(0)") Scalar max);
public native @ByVal Tensor histc();
- public native @ByVal T_TensorTensor_T histogram(@Const @ByRef Tensor bins, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/);
+ public native @ByVal T_TensorTensor_T histogram(@Const @ByRef Tensor bins, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/);
public native @ByVal T_TensorTensor_T histogram(@Const @ByRef Tensor bins);
- public native @ByVal T_TensorTensor_T histogram(@Cast("int64_t") long bins/*=100*/, @ByVal(nullValue = "c10::optional >(c10::nullopt)") DoubleArrayRefOptional range, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/);
+ public native @ByVal T_TensorTensor_T histogram(@Cast("int64_t") long bins/*=100*/, @ByVal(nullValue = "std::optional >(::std::nullopt)") DoubleArrayRefOptional range, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/);
public native @ByVal T_TensorTensor_T histogram();
- public native @ByVal T_TensorTensor_T histogram(@Cast("int64_t") long bins/*=100*/, @ByVal(nullValue = "c10::optional >(c10::nullopt)") @Cast({"double*", "c10::ArrayRef", "std::vector&"}) @StdVector double[] range, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/);
+ public native @ByVal T_TensorTensor_T histogram(@Cast("int64_t") long
bins/*=100*/, @ByVal(nullValue = "std::optional >(::std::nullopt)") @Cast({"double*", "c10::ArrayRef", "std::vector&"}) @StdVector double[] range, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/); public native @ByVal Tensor fmod(@Const @ByRef Scalar other); public native @ByRef Tensor fmod_(@Const @ByRef Scalar other); public native @ByVal Tensor fmod(@Const @ByRef Tensor other); @@ -1514,18 +1515,18 @@ private native void allocate( public native @ByVal Tensor max(@Const @ByRef Tensor other); public native @ByVal Tensor minimum(@Const @ByRef Tensor other); public native @ByVal Tensor min(@Const @ByRef Tensor other); - public native @ByVal Tensor quantile(@Const @ByRef Tensor q, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView BytePointer interpolation/*="linear"*/); + public native @ByVal Tensor quantile(@Const @ByRef Tensor q, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView BytePointer interpolation/*="linear"*/); public native @ByVal Tensor quantile(@Const @ByRef Tensor q); - public native @ByVal Tensor quantile(@Const @ByRef Tensor q, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView String interpolation/*="linear"*/); - public native @ByVal Tensor quantile(double q, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView BytePointer interpolation/*="linear"*/); + public native @ByVal Tensor quantile(@Const @ByRef Tensor q, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView String interpolation/*="linear"*/); + public native @ByVal Tensor quantile(double q, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView BytePointer interpolation/*="linear"*/); public native @ByVal Tensor quantile(double q); - public native @ByVal Tensor quantile(double q, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView String interpolation/*="linear"*/); - public native @ByVal Tensor nanquantile(@Const @ByRef Tensor q, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView BytePointer interpolation/*="linear"*/); + public native @ByVal Tensor quantile(double q, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView String interpolation/*="linear"*/); + public native @ByVal Tensor nanquantile(@Const @ByRef Tensor q, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView BytePointer interpolation/*="linear"*/); public native @ByVal Tensor nanquantile(@Const @ByRef Tensor q); - public native @ByVal Tensor nanquantile(@Const @ByRef Tensor q, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView String interpolation/*="linear"*/); - public native @ByVal Tensor nanquantile(double q, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView BytePointer interpolation/*="linear"*/); + public native @ByVal Tensor nanquantile(@Const @ByRef Tensor q, 
@ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView String interpolation/*="linear"*/); + public native @ByVal Tensor nanquantile(double q, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView BytePointer interpolation/*="linear"*/); public native @ByVal Tensor nanquantile(double q); - public native @ByVal Tensor nanquantile(double q, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView String interpolation/*="linear"*/); + public native @ByVal Tensor nanquantile(double q, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView String interpolation/*="linear"*/); public native @ByVal T_TensorTensor_T sort(@Cast("int64_t") long dim/*=-1*/, @Cast("bool") boolean descending/*=false*/); public native @ByVal T_TensorTensor_T sort(); public native @ByVal T_TensorTensor_T sort(@ByVal BoolOptional stable, @Cast("int64_t") long dim/*=-1*/, @Cast("bool") boolean descending/*=false*/); @@ -1559,7 +1560,7 @@ private native void allocate( public native @ByVal Tensor float_power(@Const @ByRef Scalar exponent); public native @ByRef Tensor float_power_(@Const @ByRef Scalar exponent); public native @ByRef Tensor float_power_(@Const @ByRef Tensor exponent); - public native @ByRef Tensor normal_(double mean/*=0*/, double std/*=1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); + public native @ByRef Tensor normal_(double mean/*=0*/, double std/*=1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); public native @ByRef Tensor normal_(); public native @ByVal Tensor alias(); public native @ByVal Tensor isfinite(); @@ -1574,10 +1575,10 @@ private native void allocate( public native @ByVal Tensor inner(@Const @ByRef Tensor other); public native @ByVal Tensor outer(@Const @ByRef Tensor vec2); public native @ByVal Tensor ger(@Const @ByRef Tensor vec2); - public native @ByVal Tensor to_padded_tensor(double padding, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional output_size); + public native @ByVal Tensor to_padded_tensor(double padding, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional output_size); public native @ByVal Tensor to_padded_tensor(double padding); - public native @ByVal Tensor to_padded_tensor(double padding, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... output_size); - public native @ByVal Tensor to_padded_tensor_symint(double padding, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional output_size); + public native @ByVal Tensor to_padded_tensor(double padding, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... 
output_size); + public native @ByVal Tensor to_padded_tensor_symint(double padding, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional output_size); public native @ByVal Tensor to_padded_tensor_symint(double padding); // Special C++ only overloads for std()-like functions (See gh-40287) diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorArg.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorArg.java index 4c973e312e4..e93cde22d48 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorArg.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorArg.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorArgArrayRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorArgArrayRef.java index 4f0a43019d4..57d5e7f4b8e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorArgArrayRef.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorArgArrayRef.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorArgs.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorArgs.java new file mode 100644 index 00000000000..546c81829a4 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorArgs.java @@ -0,0 +1,51 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +@Namespace("torch::dynamo::autograd") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class TensorArgs extends Pointer { + static { Loader.load(); } + /** Default native constructor. */ + public TensorArgs() { super((Pointer)null); allocate(); } + /** Native array allocator. Access with {@link Pointer#position(long)}. */ + public TensorArgs(long size) { super((Pointer)null); allocateArray(size); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/
+ public TensorArgs(Pointer p) { super(p); }
+ private native void allocate();
+ private native void allocateArray(long size);
+ @Override public TensorArgs position(long position) {
+ return (TensorArgs)super.position(position);
+ }
+ @Override public TensorArgs getPointer(long i) {
+ return new TensorArgs((Pointer)this).offsetAddress(i);
+ }
+
+ // Manages a collection of TensorArgs and mappings from Tensors/SavedVariables
+ // to them. This also allows us to unpack SavedVariable exactly once and
+ // store the unpacked Tensor.
+
+ public native @ByRef DynamoTensorArg lookup(@Const @ByRef Tensor tensor, @Cast("bool") boolean create/*=false*/);
+ public native @ByRef DynamoTensorArg lookup(@Const @ByRef Tensor tensor);
+
+ public native @ByRef DynamoTensorArg add(@Const @ByRef Tensor tensor);
+
+ // the concrete tensors that will get passed into the graph as inputs
+ public native @ByRef @NoOffset TensorVector inputs(); public native TensorArgs inputs(TensorVector setter);
+}
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorArrayRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorArrayRef.java
index 83b64f68655..e961f41d2f0 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorArrayRef.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorArrayRef.java
@@ -4,7 +4,6 @@
import org.bytedeco.pytorch.Allocator;
import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
import org.bytedeco.pytorch.Module;
import org.bytedeco.javacpp.annotation.Cast;
import java.nio.*;
@@ -14,6 +13,8 @@
import static org.bytedeco.javacpp.presets.javacpp.*;
import static org.bytedeco.openblas.global.openblas_nolapack.*;
import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
import static org.bytedeco.pytorch.global.torch.*;
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorArrayRefOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorArrayRefOptional.java
index 8aeb3fa6b28..45a7cfbe3e6 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorArrayRefOptional.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorArrayRefOptional.java
@@ -4,7 +4,6 @@
import org.bytedeco.pytorch.Allocator;
import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
import org.bytedeco.pytorch.Module;
import org.bytedeco.javacpp.annotation.Cast;
import java.nio.*;
@@ -14,10 +13,12 @@
import static org.bytedeco.javacpp.presets.javacpp.*;
import static org.bytedeco.openblas.global.openblas_nolapack.*;
import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
import static org.bytedeco.pytorch.global.torch.*;
-@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
+@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
public class TensorArrayRefOptional extends Pointer {
static { Loader.load(); }
/** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorBase.java
index 86de38accf1..3256982ec8a 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorBase.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorBase.java
@@ -4,7 +4,6 @@
import org.bytedeco.pytorch.Allocator;
import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
import org.bytedeco.pytorch.Module;
import org.bytedeco.javacpp.annotation.Cast;
import java.nio.*;
@@ -14,6 +13,8 @@
import static org.bytedeco.javacpp.presets.javacpp.*;
import static org.bytedeco.openblas.global.openblas_nolapack.*;
import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
import static org.bytedeco.pytorch.global.torch.*;
@@ -62,16 +63,13 @@ public class TensorBase extends AbstractTensor {
private native void allocate();
// This constructor should not be used by end users and is an implementation
// detail invoked by autogenerated code.
- public TensorBase(
- @ByVal TensorImplPtr tensor_impl) { super((Pointer)null); allocate(tensor_impl); }
- private native void allocate(
- @ByVal TensorImplPtr tensor_impl);
+ public TensorBase(@Const @ByRef TensorBase arg0) { super((Pointer)null); allocate(arg0); }
private native void allocate(@Const @ByRef TensorBase arg0);
// Creates a new wrapper from TensorImpl. Intentionally a free method because
// it should be used with care. Checks necessary invariants
public static native @ByVal TensorBase wrap_tensor_impl(
- @ByVal TensorImplPtr tensor_impl);
+ @IntrusivePtr("c10::TensorImpl,c10::UndefinedTensorImpl") @Cast({"", "c10::intrusive_ptr&"}) TensorImpl tensor_impl);
public native @Cast("int64_t") long dim();
public native @Cast("int64_t") long storage_offset();
@@ -95,7 +93,7 @@ private native void allocate(
public native @Const @ByRef TensorBase fill_(@Const @ByRef Scalar scalar);
public native @Const @ByRef TensorBase zero_();
- public native @ByVal TensorBase to(@ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @Cast("bool") boolean non_blocking/*=false*/, @Cast("bool") boolean copy/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format);
+ public native @ByVal TensorBase to(@ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @Cast("bool") boolean non_blocking/*=false*/, @Cast("bool") boolean copy/*=false*/, @ByVal(nullValue = "std::optional(c10::nullopt)") MemoryFormatOptional memory_format);
public native @ByVal TensorBase to();
public native @Cast("bool") boolean is_complex();
@@ -114,9 +112,9 @@ private native void allocate(
public native TensorImpl unsafeGetTensorImpl();
public native TensorImpl unsafeReleaseTensorImpl();
- public native @Const @ByRef TensorImplPtr getIntrusivePtr();
+ public native @IntrusivePtr("c10::TensorImpl,c10::UndefinedTensorImpl") @Cast({"", "c10::intrusive_ptr&"}) TensorImpl getIntrusivePtr();
- public native @ByVal TensorImplPtr unsafeReleaseIntrusivePtr();
+ public native @IntrusivePtr("c10::TensorImpl,c10::UndefinedTensorImpl") @Cast({"", "c10::intrusive_ptr&"}) TensorImpl unsafeReleaseIntrusivePtr();
public native @Cast("bool") boolean defined();
@@ -278,8 +276,8 @@ private native void allocate(
/** Returns if a {@code Tensor} is mps tensor. */
public native @Cast("bool") boolean is_mps();
- /** Returns if a {@code Tensor} is ort tensor. */
- public native @Cast("bool") boolean is_ort();
+ /** Returns if a {@code Tensor} is maia tensor. */
+ public native @Cast("bool") boolean is_maia();
/** Returns if a {@code Tensor} is vulkan tensor. */
public native @Cast("bool") boolean is_vulkan();
@@ -302,7 +300,7 @@ private native void allocate(
/** If a tensor is a quantized tensor, returns its quantizer
* TODO: it's not in native_functions.yaml yet as it's not exposed to python */
- public native @ByVal QuantizerPtr quantizer();
+ public native @IntrusivePtr("at::Quantizer") @Cast({"", "c10::intrusive_ptr&"}) Quantizer quantizer();
/** Returns if a {@code Tensor} has any dimension names */
public native @Cast("bool") boolean has_names();
@@ -398,7 +396,7 @@ private native void allocate(
* // f requires grad, has no operation creating it
* }

- * \fn void backward(const Tensor & gradient={}, c10::optional retain_graph=c10::nullopt, bool create_graph=false, c10::optional inputs=c10::nullopt) const; + * \fn void backward(const Tensor & gradient={}, std::optional retain_graph=c10::nullopt, bool create_graph=false, std::optional inputs=c10::nullopt) const; * * Computes the gradient of current tensor with respect to graph leaves. * diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorBaseMaybeOwned.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorBaseMaybeOwned.java index 9df03681b61..70a4d73def9 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorBaseMaybeOwned.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorBaseMaybeOwned.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorBatchDataset.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorBatchDataset.java index 08ee99a652e..ec88ce05d3a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorBatchDataset.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorBatchDataset.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorCastValue.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorCastValue.java index bbec905ccc0..634c569fccc 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorCastValue.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorCastValue.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorDataset.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorDataset.java index 74d52d0aa40..70b7adf21d5 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorDataset.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorDataset.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import 
org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorDatasetBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorDatasetBase.java index 000da9038f5..b9f8c661716 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorDatasetBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorDatasetBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorDeque.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorDeque.java index a450ff89c0f..95abe1021cc 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorDeque.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorDeque.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorElementReference.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorElementReference.java index a7f7dab43f9..f69f803dd0e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorElementReference.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorElementReference.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,7 +25,7 @@ public class TensorElementReference extends Pointer { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public TensorElementReference(Pointer p) { super(p); } - public native @Name("operator std::conditional_t::type>::value,const at::Tensor&,at::Tensor>") @ByVal Tensor getTensor(); + public native @Name("operator std::conditional_t::type>,const at::Tensor&,at::Tensor>") @ByVal Tensor getTensor(); @@ -35,7 +36,7 @@ public class TensorElementReference extends Pointer { public native @Const @ByRef IValue get(); - private static native @Namespace void swap(@ByRef(true) TensorElementReference lhs, @ByRef(true) TensorElementReference rhs); + private static native @Namespace @NoException(true) void swap(@ByRef(true) TensorElementReference lhs, @ByRef(true) TensorElementReference rhs); public void swap(TensorElementReference rhs) { swap(this, rhs); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorExample.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorExample.java index 1530622a30d..92858053dcf 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorExample.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorExample.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorExampleCollation.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorExampleCollation.java index cf37166d34d..c2a6a2928fd 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorExampleCollation.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorExampleCollation.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorExampleIterator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorExampleIterator.java index a8a53d0d9f2..89d6966496b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorExampleIterator.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorExampleIterator.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorExampleOptional.java 
b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorExampleOptional.java index 479a49e9a96..9c84b078dfd 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorExampleOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorExampleOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class TensorExampleOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorExampleVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorExampleVector.java index 00edc5b625a..453ba05d617 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorExampleVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorExampleVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorExampleVectorIterator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorExampleVectorIterator.java index 25ce7fb2a85..559741635cc 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorExampleVectorIterator.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorExampleVectorIterator.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorExampleVectorOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorExampleVectorOptional.java index a3c8ade0066..d0766ec1b2c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorExampleVectorOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorExampleVectorOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import 
org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional > >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional > >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class TensorExampleVectorOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorGeometry.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorGeometry.java index 2fae9df1e9b..dc1f98be0c0 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorGeometry.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorGeometry.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorGeometryArg.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorGeometryArg.java index 6bed3fe58ab..839cb79930d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorGeometryArg.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorGeometryArg.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorImpl.java index f6b903ec3cd..86b53a1f5e7 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -119,7 +120,7 @@ public TensorImpl( @ByRef(true) Storage storage, @ByVal DispatchKeySet arg1, @Const @ByVal TypeMeta 
data_type) { super((Pointer)null); allocate(storage, arg1, data_type); } - private native void allocate( + @IntrusivePtr @Name("c10::make_intrusive") private native void allocate( @ByRef(true) Storage storage, @ByVal DispatchKeySet arg1, @Const @ByVal TypeMeta data_type); @@ -130,7 +131,7 @@ public TensorImpl( @ByRef(true) Storage storage, @ByVal DispatchKeySet arg2, @Const @ByVal TypeMeta data_type) { super((Pointer)null); allocate(arg0, storage, arg2, data_type); } - private native void allocate( + @IntrusivePtr @Name("c10::make_intrusive") private native void allocate( ImplType arg0, @ByRef(true) Storage storage, @ByVal DispatchKeySet arg2, @@ -140,7 +141,7 @@ public TensorImpl( @ByRef(true) Storage storage, @ByVal DispatchKeySet arg2, @Const @ByVal TypeMeta data_type) { super((Pointer)null); allocate(arg0, storage, arg2, data_type); } - private native void allocate( + @IntrusivePtr @Name("c10::make_intrusive") private native void allocate( @Cast("c10::TensorImpl::ImplType") int arg0, @ByRef(true) Storage storage, @ByVal DispatchKeySet arg2, @@ -153,7 +154,7 @@ public TensorImpl( @ByVal DispatchKeySet arg0, @Const @ByVal TypeMeta data_type, @ByVal DeviceOptional device_opt) { super((Pointer)null); allocate(arg0, data_type, device_opt); } - private native void allocate( + @IntrusivePtr @Name("c10::make_intrusive") private native void allocate( @ByVal DispatchKeySet arg0, @Const @ByVal TypeMeta data_type, @ByVal DeviceOptional device_opt); @@ -164,7 +165,7 @@ public TensorImpl( @ByRef(true) Storage storage, DispatchKey dispatch_key, @Const @ByVal TypeMeta data_type) { super((Pointer)null); allocate(storage, dispatch_key, data_type); } - private native void allocate( + @IntrusivePtr @Name("c10::make_intrusive") private native void allocate( @ByRef(true) Storage storage, DispatchKey dispatch_key, @Const @ByVal TypeMeta data_type); @@ -172,7 +173,7 @@ public TensorImpl( @ByRef(true) Storage storage, @Cast("c10::DispatchKey") short dispatch_key, @Const @ByVal TypeMeta data_type) { super((Pointer)null); allocate(storage, dispatch_key, data_type); } - private native void allocate( + @IntrusivePtr @Name("c10::make_intrusive") private native void allocate( @ByRef(true) Storage storage, @Cast("c10::DispatchKey") short dispatch_key, @Const @ByVal TypeMeta data_type); @@ -180,7 +181,7 @@ public TensorImpl( DispatchKey dispatch_key, @Const @ByVal TypeMeta data_type, @ByVal DeviceOptional device_opt) { super((Pointer)null); allocate(dispatch_key, data_type, device_opt); } - private native void allocate( + @IntrusivePtr @Name("c10::make_intrusive") private native void allocate( DispatchKey dispatch_key, @Const @ByVal TypeMeta data_type, @ByVal DeviceOptional device_opt); @@ -188,7 +189,7 @@ public TensorImpl( @Cast("c10::DispatchKey") short dispatch_key, @Const @ByVal TypeMeta data_type, @ByVal DeviceOptional device_opt) { super((Pointer)null); allocate(dispatch_key, data_type, device_opt); } - private native void allocate( + @IntrusivePtr @Name("c10::make_intrusive") private native void allocate( @Cast("c10::DispatchKey") short dispatch_key, @Const @ByVal TypeMeta data_type, @ByVal DeviceOptional device_opt); @@ -452,7 +453,7 @@ public enum SizesStridesPolicy { public native @Cast("bool") boolean is_mps(); - public native @Cast("bool") boolean is_ort(); + public native @Cast("bool") boolean is_maia(); public native @Cast("bool") boolean is_nested(); @@ -683,11 +684,11 @@ public native void _set_fw_grad( */ public native @Cast("size_t") long itemsize(); - public native void set_backend_meta(@ByVal 
BackendMetaRef backend_meta); + public native void set_backend_meta(@IntrusivePtr("c10::BackendMeta") @Cast({"", "c10::intrusive_ptr&"}) BackendMeta backend_meta); public native BackendMeta get_backend_meta(); - public native @ByVal BackendMetaRef get_backend_meta_intrusive_ptr(); + public native @IntrusivePtr("c10::BackendMeta") @Cast({"", "c10::intrusive_ptr&"}) BackendMeta get_backend_meta_intrusive_ptr(); public native void release_storage_and_set_meta_custom_data_ptr_error_msg_( @ByVal StringOptional s); @@ -701,7 +702,7 @@ public native void release_storage_and_set_meta_custom_data_ptr_error_msg_( public native void set_sizes_and_strides( @ByVal SymIntArrayRef sizes, @ByVal SymIntArrayRef strides, - @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional storage_offset); + @ByVal(nullValue = "std::optional(c10::nullopt)") SymIntOptional storage_offset); public native void set_sizes_and_strides( @ByVal SymIntArrayRef sizes, @ByVal SymIntArrayRef strides); @@ -757,14 +758,14 @@ public native void set_sizes_and_strides( public native void set_sizes_and_strides( @ByVal LongArrayRef new_size, @ByVal LongArrayRef new_stride, - @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional storage_offset); + @ByVal(nullValue = "std::optional(c10::nullopt)") LongOptional storage_offset); public native void set_sizes_and_strides( @ByVal LongArrayRef new_size, @ByVal LongArrayRef new_stride); public native void set_sizes_and_strides( @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] new_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] new_stride, - @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional storage_offset); + @ByVal(nullValue = "std::optional(c10::nullopt)") LongOptional storage_offset); public native void set_sizes_and_strides( @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] new_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... new_stride); @@ -859,7 +860,7 @@ public native void set_named_tensor_meta( * compatible with SparseCUDA. */ public native @Cast("bool") boolean has_compatible_shallow_copy_type(@ByVal DispatchKeySet from); - public native @ByVal TensorImplPtr shallow_copy_and_detach( + public native @IntrusivePtr("c10::TensorImpl") @Cast({"", "c10::intrusive_ptr&"}) TensorImpl shallow_copy_and_detach( @Const @ByRef VariableVersion version_counter, @Cast("bool") boolean allow_tensor_metadata_change); @@ -876,7 +877,7 @@ public native void set_named_tensor_meta( * For why this function doesn't check this TensorImpl's * {@code allow_tensor_metadata_change_}, see NOTE [ TensorImpl Shallow-Copying ]. */ - public native void shallow_copy_from(@Const @ByRef TensorImplPtr impl); + public native void shallow_copy_from(@IntrusivePtr("c10::TensorImpl") @Cast({"", "c10::intrusive_ptr&"}) TensorImpl impl); // Inference tensor doesn't have version counter, // set_version_counter is no-op for them. 
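The TensorImpl hunks above replace the separate intrusive_ptr wrapper class with direct @IntrusivePtr-annotated returns: shallow_copy_and_detach() now hands back a TensorImpl instead of a TensorImplPtr, and set_backend_meta() accepts a BackendMeta directly. A minimal sketch of what that means for calling code, assuming only the signatures shown in this diff; the detach() helper and its argument names are hypothetical:

    import org.bytedeco.pytorch.TensorImpl;
    import org.bytedeco.pytorch.VariableVersion;

    public class ShallowCopySketch {
        // With the 2.4.0 bindings, intrusive_ptr results are exposed as the target
        // type itself (here TensorImpl), so there is no wrapper to unwrap via get().
        static TensorImpl detach(TensorImpl impl, VariableVersion version) {
            return impl.shallow_copy_and_detach(version, /*allow_tensor_metadata_change=*/true);
        }
    }

Under the 2.3.x bindings the same call returned a TensorImplPtr whose get() yielded the TensorImpl, as the deleted TensorImplPtr.java below shows.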
@@ -1022,6 +1023,8 @@ public native void set_storage_and_dtype( public native @Cast("bool") boolean is_non_overlapping_and_dense(); + // if this returns true, then it is guaranteed that this tensor has symbolic + // sizes/strides public native @Cast("bool") boolean has_symbolic_sizes_strides(); public native void set_storage_access_should_throw(); public native void set_custom_sizes_strides(SizesStridesPolicy policy); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorImplPtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorImplPtr.java deleted file mode 100644 index 1292b6f9bf3..00000000000 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorImplPtr.java +++ /dev/null @@ -1,154 +0,0 @@ -// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE - -package org.bytedeco.pytorch; - -import org.bytedeco.pytorch.Allocator; -import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; -import org.bytedeco.pytorch.Module; -import org.bytedeco.javacpp.annotation.Cast; -import java.nio.*; -import org.bytedeco.javacpp.*; -import org.bytedeco.javacpp.annotation.*; - -import static org.bytedeco.javacpp.presets.javacpp.*; -import static org.bytedeco.openblas.global.openblas_nolapack.*; -import static org.bytedeco.openblas.global.openblas.*; - -import static org.bytedeco.pytorch.global.torch.*; - - -@Name("c10::intrusive_ptr") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class TensorImplPtr extends Pointer { - static { Loader.load(); } - /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ - public TensorImplPtr(Pointer p) { super(p); } - /** Native array allocator. Access with {@link Pointer#position(long)}. */ - public TensorImplPtr(long size) { super((Pointer)null); allocateArray(size); } - private native void allocateArray(long size); - @Override public TensorImplPtr position(long position) { - return (TensorImplPtr)super.position(position); - } - @Override public TensorImplPtr getPointer(long i) { - return new TensorImplPtr((Pointer)this).offsetAddress(i); - } - - - public TensorImplPtr() { super((Pointer)null); allocate(); } - @NoException(true) private native void allocate(); - - public TensorImplPtr(@ByVal @Cast("std::nullptr_t*") PointerPointer arg0) { super((Pointer)null); allocate(arg0); } - @NoException(true) private native void allocate(@ByVal @Cast("std::nullptr_t*") PointerPointer arg0); - - // This constructor will not increase the ref counter for you. - // We use the tagged dispatch mechanism to explicitly mark this constructor - // to not increase the refcount - public TensorImplPtr(TensorImpl target, @ByVal DontIncreaseRefcount arg1) { super((Pointer)null); allocate(target, arg1); } - @NoException(true) private native void allocate(TensorImpl target, @ByVal DontIncreaseRefcount arg1); - - - - public TensorImplPtr(@ByRef(true) TensorImplPtr rhs) { super((Pointer)null); allocate(rhs); } - @NoException(true) private native void allocate(@ByRef(true) TensorImplPtr rhs); - - public native @ByRef @Name("operator =") @NoException(true) TensorImplPtr put(@ByRef(true) TensorImplPtr rhs); - - // Assignment is implemented using copy and swap. That's safe for self - // assignment. 
- // NOLINTNEXTLINE(bugprone-unhandled-self-assignment) - - public native @NoException(true) TensorImpl get(); - - public native @ByRef @Name("operator *") @NoException(true) TensorImpl multiply(); - - public native @Name("operator ->") @NoException(true) TensorImpl access(); - - public native @Cast("bool") @Name("operator bool") @NoException(true) boolean asBoolean(); - - public native @NoException(true) void reset(); - - public native @NoException(true) void swap(@ByRef TensorImplPtr rhs); - - // We do a lot of null-pointer checks in our code, good to have this be cheap. - public native @Cast("bool") @NoException(true) boolean defined(); - - public native @Cast("uint32_t") @NoException(true) int use_count(); - - public native @Cast("uint32_t") @NoException(true) int weak_use_count(); - - public native @Cast("bool") @NoException(true) boolean unique(); - - /** - * Returns an owning (!) pointer to the underlying object and makes the - * intrusive_ptr instance invalid. That means the refcount is not decreased. - * You *must* put the returned pointer back into a intrusive_ptr using - * intrusive_ptr::reclaim(ptr) to properly destruct it. - * This is helpful for C APIs. - */ - public native @NoException(true) TensorImpl release(); - - /** - * Takes an owning pointer to TTarget* and creates an intrusive_ptr that takes - * over ownership. That means the refcount is not increased. - * This is the counter-part to intrusive_ptr::release() and the pointer - * passed in *must* have been created using intrusive_ptr::release(). - */ - public static native @ByVal TensorImplPtr reclaim(TensorImpl owning_ptr); - - /** - * Takes an owning pointer to TTarget* and creates an intrusive_ptr - * representing a new reference, i.e. the raw pointer retains - * ownership. - */ - public static native @ByVal TensorImplPtr reclaim_copy(TensorImpl owning_ptr); - - /** - * Allocate a heap object with args and wrap it inside a intrusive_ptr and - * incref. This is a helper function to let make_intrusive() access private - * intrusive_ptr constructors. - */ - - /** - * Turn a new instance of TTarget (e.g., literally allocated - * using new TTarget(...) into an intrusive_ptr. If possible, - * use intrusive_ptr::make instead which statically guarantees - * that the allocation was done properly. - * - * At the moment, the only reason this method exists is because - * pybind11 holder types expect to be able to allocate in - * this way (because pybind11 handles the new allocation itself). - */ - public static native @ByVal TensorImplPtr unsafe_steal_from_new(TensorImpl raw_ptr); - - /** - * Turn an instance of TTarget that should not be reference counted - * (e.g., allocated into an arena with placement new) into an - * intrusive_ptr. This is gratuitously unsafe and should only be - * used if you can guarantee that the pointer will not escape and be - * refcounted as normal. - * - * {@code expected_decrefs} is a debugging parameter: it indicates the - * number of strong owners the intrusive_ptr_target in question is - * expected to get. In most use cases, this will likely be 1. - * - * The reason this method exists is for manually sharing - * StorageImpls across Tensors in the static runtime. It needs - * access to private intrusive_ptr members so that the refcounts can - * be initialized to custom values. - */ - public static native @ByVal TensorImplPtr unsafe_adapt_non_heap_allocated( - TensorImpl raw_ptr, - @Cast("uint32_t") int expected_decrefs); - - /** - * Turn a **non-owning raw pointer** to an intrusive_ptr. 
It is - * the moral equivalent of enable_shared_from_this on a shared pointer. - * - * This method is only valid for objects that are already live. If - * you are looking for the moral equivalent of unique_ptr(T*) - * constructor, see steal_from_new. - * - * TODO: https://github.com/pytorch/pytorch/issues/56482 - */ - public static native @ByVal TensorImplPtr unsafe_reclaim_from_nonowning(TensorImpl raw_ptr); -} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorImplSet.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorImplSet.java index b65a52d0fe2..e7f00edbf3f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorImplSet.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorImplSet.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorImplVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorImplVector.java index b40f599fced..470a607de08 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorImplVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorImplVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorIndex.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorIndex.java index 0dd21fb1971..1a240effeca 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorIndex.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorIndex.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorIndexArrayRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorIndexArrayRef.java index 86944182ff3..1832634f884 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorIndexArrayRef.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorIndexArrayRef.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import 
org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorIndexVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorIndexVector.java index 39ead887886..898b3c94b30 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorIndexVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorIndexVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorIterator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorIterator.java index 650ed6974c4..a696d55f373 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorIterator.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorIterator.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorIteratorBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorIteratorBase.java index 5a4dbcd14dc..e7905a308dc 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorIteratorBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorIteratorBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -152,6 +153,12 @@ public class TensorIteratorBase extends MetaBase { public native void _unsafe_set_arg_strides(@Cast("const int64_t") long arg, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... strides); public native void _unsafe_set_arg_data(@Cast("const int64_t") long arg, Pointer data); + // Helper functions for custom device, custom device can get OperandInfo and + // NameVector in their side. 
+ public native @ByRef OperandInfo operand(int arg/*=0*/); + public native @ByRef OperandInfo operand(); + public native @Cast("at::NameVector*") @ByRef SymDimVector get_dim_names(); + /** true if the stride computation can use 32-bit arithmetic. Used by GPU * kernels */ public native @Cast("bool") boolean can_use_32bit_indexing(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorIteratorConfig.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorIteratorConfig.java index b4311f0b508..c4d90bc3500 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorIteratorConfig.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorIteratorConfig.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorList.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorList.java index 0077b8ebd2a..a65235cd2d0 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorList.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorList.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@Name("c10::List") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@Name("c10::List") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class TensorList extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorListIterator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorListIterator.java index 13f12f081f7..2f038486108 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorListIterator.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorListIterator.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorMaker.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorMaker.java index 5a82150042d..10805d57d17 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorMaker.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorMaker.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace detail diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorMaybeOwned.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorMaybeOwned.java index 6ba8cb2ca3d..f2279dd4cb4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorMaybeOwned.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorMaybeOwned.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorName.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorName.java index fa4ff2f48a9..8628fdd29b9 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorName.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorName.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff 
--git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorNames.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorNames.java index b94b4f48a69..d42a5db0cc6 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorNames.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorNames.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorOptional.java index 96191717a14..948faed7ddf 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class TensorOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorOptionalArrayRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorOptionalArrayRef.java index ad643b53596..9d6a06fdd7e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorOptionalArrayRef.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorOptionalArrayRef.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@Name("c10::ArrayRef >") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@Name("c10::ArrayRef >") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class TensorOptionalArrayRef extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorOptionalElementReference.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorOptionalElementReference.java index b58622f3e23..92d620db654 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorOptionalElementReference.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorOptionalElementReference.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,17 +13,19 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@Name("c10::impl::ListElementReference,c10::detail::ListImpl::list_type::iterator>") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@Name("c10::impl::ListElementReference,c10::detail::ListImpl::list_type::iterator>") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class TensorOptionalElementReference extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ public TensorOptionalElementReference(Pointer p) { super(p); } - public native @Name("operator std::conditional_t >::type>::value,const c10::optional&,c10::optional >") @ByVal TensorOptional getTensorOptional(); + public native @Name("operator std::conditional_t >::type>,const std::optional&,std::optional >") @ByVal TensorOptional getTensorOptional(); @@ -35,7 +36,7 @@ public class TensorOptionalElementReference extends Pointer { public native @Const @ByRef IValue get(); - private static native @Namespace void swap(@ByRef(true) TensorOptionalElementReference lhs, @ByRef(true) TensorOptionalElementReference rhs); + private static native @Namespace @NoException(true) void swap(@ByRef(true) TensorOptionalElementReference lhs, @ByRef(true) TensorOptionalElementReference rhs); public void swap(TensorOptionalElementReference rhs) { swap(this, rhs); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorOptionalList.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorOptionalList.java index 8d0c37f04e1..52fbb9660f4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorOptionalList.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorOptionalList.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@Name("c10::List >") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@Name("c10::List >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class TensorOptionalList extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ @@ -108,13 +109,13 @@ public class TensorOptionalList extends Pointer { * Returns an iterator to the first element of the container. * If the container is empty, the returned iterator will be equal to end(). */ - public native @ByVal @Cast("c10::List >::iterator*") TensorOptionalListIterator begin(); + public native @ByVal @Cast("c10::List >::iterator*") TensorOptionalListIterator begin(); /** * Returns an iterator to the element following the last element of the container. * This element acts as a placeholder; attempting to access it results in undefined behavior. */ - public native @ByVal @Cast("c10::List >::iterator*") TensorOptionalListIterator end(); + public native @ByVal @Cast("c10::List >::iterator*") TensorOptionalListIterator end(); /** * Checks if the container has no elements. @@ -141,7 +142,7 @@ public class TensorOptionalList extends Pointer { * Inserts value before pos. * May invalidate any references, pointers, or iterators referring to contained elements. Any past-the-end iterators may also be invalidated. */ - public native @ByVal @Cast("c10::List >::iterator*") TensorOptionalListIterator insert(@ByVal @Cast("c10::List >::iterator*") TensorOptionalListIterator pos, @Const @ByRef TensorOptional value); + public native @ByVal @Cast("c10::List >::iterator*") TensorOptionalListIterator insert(@ByVal @Cast("c10::List >::iterator*") TensorOptionalListIterator pos, @Const @ByRef TensorOptional value); /** * Inserts value before pos. @@ -181,13 +182,13 @@ public class TensorOptionalList extends Pointer { * Removes the element at pos. * May invalidate any references, pointers, or iterators referring to contained elements. Any past-the-end iterators may also be invalidated. */ - public native @ByVal @Cast("c10::List >::iterator*") TensorOptionalListIterator erase(@ByVal @Cast("c10::List >::iterator*") TensorOptionalListIterator pos); + public native @ByVal @Cast("c10::List >::iterator*") TensorOptionalListIterator erase(@ByVal @Cast("c10::List >::iterator*") TensorOptionalListIterator pos); /** * Removes the elements in the range [first, last). * May invalidate any references, pointers, or iterators referring to contained elements. Any past-the-end iterators may also be invalidated. */ - public native @ByVal @Cast("c10::List >::iterator*") TensorOptionalListIterator erase(@ByVal @Cast("c10::List >::iterator*") TensorOptionalListIterator first, @ByVal @Cast("c10::List >::iterator*") TensorOptionalListIterator last); + public native @ByVal @Cast("c10::List >::iterator*") TensorOptionalListIterator erase(@ByVal @Cast("c10::List >::iterator*") TensorOptionalListIterator first, @ByVal @Cast("c10::List >::iterator*") TensorOptionalListIterator last); /** * Removes the last element of the container. 
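Across these files the only change to optional-typed values is the native name: c10::optional becomes std::optional in the @Name, @Cast, and nullValue strings, while the Java-side classes (TensorOptional, TensorExampleOptional, and so on) keep their names and methods. A small sketch of calling code that is unaffected by the rename, assuming the value-taking convenience constructor and the has_value()/get() accessors these presets generate for optional wrappers:

    import org.bytedeco.pytorch.Tensor;
    import org.bytedeco.pytorch.TensorOptional;

    public class OptionalRenameSketch {
        // Wrapping and unwrapping an optional tensor looks the same against the
        // 2.3.x and 2.4.0 bindings; only the underlying C++ type name changed.
        static Tensor unwrapOr(TensorOptional maybe, Tensor fallback) {
            return maybe.has_value() ? maybe.get() : fallback;
        }
        static TensorOptional wrap(Tensor t) {
            return new TensorOptional(t); // assumed convenience constructor
        }
    }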
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorOptionalListIterator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorOptionalListIterator.java index 768383c9d25..b5a36636b92 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorOptionalListIterator.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorOptionalListIterator.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@Name("c10::impl::ListIterator,c10::detail::ListImpl::list_type::iterator>") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@Name("c10::impl::ListIterator,c10::detail::ListImpl::list_type::iterator>") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class TensorOptionalListIterator extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ @@ -57,7 +58,7 @@ public class TensorOptionalListIterator extends Pointer { public native @ByVal @Name("operator -") TensorOptionalListIterator subtract(long offset); - private static native @Namespace @Cast("c10::impl::ListIterator,c10::detail::ListImpl::list_type::iterator>::difference_type") @Name("operator -") long subtract(@Const @ByRef TensorOptionalListIterator lhs, @Const @ByRef TensorOptionalListIterator rhs); + private static native @Namespace @Cast("c10::impl::ListIterator,c10::detail::ListImpl::list_type::iterator>::difference_type") @Name("operator -") long subtract(@Const @ByRef TensorOptionalListIterator lhs, @Const @ByRef TensorOptionalListIterator rhs); public long subtract(TensorOptionalListIterator rhs) { return subtract(this, rhs); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorOptionalVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorOptionalVector.java index 1c8c7ddfd29..eda6ce845e2 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorOptionalVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorOptionalVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@Name("std::vector >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@Name("std::vector >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class TensorOptionalVector extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorOptions.java index fb3796f2af8..11d8b9f21f7 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -158,7 +159,7 @@ public class TensorOptions extends Pointer { public native @ByVal @NoException(true) TensorOptions device(@ByVal DeviceOptional device); /** Return a copy of {@code TensorOptions} with {@code device} set to the given one. - * (This overload ensures that variadic template c10::optional constructor + * (This overload ensures that variadic template std::optional constructor * for Device work correctly.) */ /** Return a copy of {@code TensorOptions}, but with device set to CUDA, and the diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorTensorDict.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorTensorDict.java new file mode 100644 index 00000000000..6ed41eda394 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorTensorDict.java @@ -0,0 +1,161 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + +@Name("c10::Dict") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class TensorTensorDict extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public TensorTensorDict(Pointer p) { super(p); } + + + /** + * Creates an empty dict. + */ + + /** + * Create a generic dict with runtime type information. + * This only works for c10::impl::GenericDict and is not part of the public API + * but only supposed to be used internally by PyTorch. + */ + public native @ByRef @Name("operator =") TensorTensorDict put(@Const @ByRef TensorTensorDict arg0); + + /** + * Create a new Dict pointing to a deep copy of the same data. + * The Dict returned is a new dict with separate storage. + * Changes in it are not reflected in the original dict or vice versa. + */ + public native @ByVal TensorTensorDict copy(); + + /** + * Returns an iterator to the first element of the container. + * If the container is empty, the returned iterator will be equal to end(). 
+ */ + public native @ByVal TensorTensorDictIterator begin(); + + /** + * Returns an iterator to the element following the last element of the container. + * This element acts as a placeholder; attempting to access it results in undefined behavior. + */ + public native @ByVal TensorTensorDictIterator end(); + + /** + * Checks if the container has no elements. + */ + public native @Cast("bool") boolean empty(); + + /** + * Returns the number of elements in the container. + */ + public native @Cast("c10::Dict::size_type") long size(); + + /** + * Erases all elements from the container. After this call, size() returns zero. + * Invalidates any references, pointers, or iterators referring to contained elements. May also invalidate past-the-end iterators. + */ + public native void clear(); + + /** + * Inserts element(s) into the container, if the container doesn't already contain an element with an equivalent key. + * May invalidate any references, pointers, or iterators referring to contained elements. + * + * @return A pair consisting of an iterator to the inserted element (or to the element that prevented the insertion) and a bool denoting whether the insertion took place. + */ + + /** + * If an element with the given key already exists, it is overwritten with the given value. + * Otherwise, a new element with the given key and value are inserted. + * May invalidate any references, pointers, or iterators referring to contained elements. + * + * @return The bool component is true if the insertion took place and false if the assignment took place. The iterator component is pointing at the element that was inserted or updated. + */ + + /** + * Removes the element pointed to by iter. + * May invalidate any references, pointers, or iterators referring to contained elements. + * The iterator iter must be valid and dereferenceable. Thus the end() iterator (which is valid, but is not dereferenceable) cannot be used as a value for iter. + */ + public native void erase(@ByVal TensorTensorDictIterator iter); + + /** + * Removes the element with the given key, if it exists. + * May invalidate any references, pointers, or iterators referring to contained elements. + * + * @return The number of elements removed. This is either '1' if an element with the key existed, or '0' if it didn't. + */ + public native @Cast("size_t") long erase(@Const @ByRef Tensor key); + + /** + * Returns the mapped value of the element with key equivalent to key. + * If no such element exists, an exception of type std::out_of_range is thrown. + */ + public native @ByVal Tensor at(@Const @ByRef Tensor key); + + /** + * Finds an element with key equivalent to key. + * + * @return Iterator to an element with key equivalent to key. + * If no such element is found, past-the-end (see end()) iterator is returned. + */ + public native @ByVal TensorTensorDictIterator find(@Const @ByRef Tensor key); + + /** + * Checks if there is an element with key equivalent to key in the container. + * + * @return true if there is such an element, otherwise false. + */ + public native @Cast("bool") boolean contains(@Const @ByRef Tensor key); + + /** + * Increase the capacity so that at least count elements can be stored without + * having to reallocate or rehash. + */ + public native void reserve(@Cast("c10::Dict::size_type") long count); + + /** + * Value equality comparison. This function implements Python-like semantics for + * equality: two dicts with the same identity (e.g. 
same pointer) trivially + * compare equal, otherwise each element is compared for equality. + */ + + + + /** + * Identity comparison. Returns true if and only if {@code rhs} represents the same + * Dict object as {@code this}. + */ + public native @Cast("bool") boolean is(@Const @ByRef TensorTensorDict rhs); + + // private API for now because the return type will change to TypePtr + // instead of optional once types are mandatory. + public native @ByVal Type.TypePtr keyType(); + public native @ByVal Type.TypePtr valueType(); + + // [unsafe set type] + // These functions mutate the tagged type of this dictionary in place. + // There is no checking that the members of the dictionary are instances + // of the new types, nor is there a check that other IValues which + // hold references to this dictionary have the right static type. + // This functionality is used only in the unpickler, where at + // creation type the real type of the dictionary is unknown, but + // then later recovered from the static type information of the + // unpickled object. + public native void unsafeSetKeyType(@ByVal Type.TypePtr t); + public native void unsafeSetValueType(@ByVal Type.TypePtr t); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorTensorDictIterator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorTensorDictIterator.java new file mode 100644 index 00000000000..be54d6057df --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorTensorDictIterator.java @@ -0,0 +1,45 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + +@Name("c10::impl::DictIterator") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class TensorTensorDictIterator extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ + public TensorTensorDictIterator(Pointer p) { super(p); } + + // C++17 friendly std::iterator implementation + public native @ByRef @Name("operator =") TensorTensorDictIterator put(@Const @ByRef TensorTensorDictIterator rhs); + + public native @ByRef @Name("operator ++") TensorTensorDictIterator increment(); + + public native @ByVal @Name("operator ++") TensorTensorDictIterator increment(int arg0); + + public native @Const @ByRef @Name("operator *") GenericDictEntryRef multiply(); + + public native @Const @Name("operator ->") GenericDictEntryRef access(); + + + + private static native @Namespace @Cast("bool") @Name("operator ==") boolean equals(@Const @ByRef TensorTensorDictIterator lhs, @Const @ByRef TensorTensorDictIterator rhs); + public boolean equals(TensorTensorDictIterator rhs) { return equals(this, rhs); } + + private static native @Namespace @Cast("bool") @Name("operator !=") boolean notEquals(@Const @ByRef TensorTensorDictIterator lhs, @Const @ByRef TensorTensorDictIterator rhs); + public boolean notEquals(TensorTensorDictIterator rhs) { return notEquals(this, rhs); } +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorType.java index ae7ec39e4c0..e1a1feb039b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -34,7 +35,7 @@ public class TensorType extends SharedType { @Const @ByRef LongVaryingShape sizes, @Const @ByRef LongVaryingShape strides, @ByVal BoolOptional requires_grad, - @ByVal(nullValue = "c10::optional(false)") BoolOptional undefined, + @ByVal(nullValue = "std::optional(false)") BoolOptional undefined, @Cast("bool") boolean tensor_contiguity/*=false*/); public static native @SharedPtr("c10::TensorType") @ByVal TensorType create( @ByVal ScalarTypeOptional scalar_type, @@ -49,7 +50,7 @@ public class TensorType extends SharedType { @Const @ByRef SymbolicShape sizes, @Const @ByRef StrideVaryingShape stride_, @ByVal BoolOptional requires_grad, - @ByVal(nullValue = "c10::optional(false)") BoolOptional undefined); + @ByVal(nullValue = "std::optional(false)") BoolOptional undefined); public static native @SharedPtr("c10::TensorType") @ByVal TensorType create( @ByVal ScalarTypeOptional scalar_type, @ByVal DeviceOptional device, diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorVector.java index 1f466cf6542..4276284ba1c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static 
org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorVectorOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorVectorOptional.java index c3c45025ab4..dde412bee57 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TensorVectorOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TensorVectorOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class TensorVectorOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TernaryIf.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TernaryIf.java index 1e95268972d..9f3f90ba44a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TernaryIf.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TernaryIf.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,8 +25,8 @@ public class TernaryIf extends Expr { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public TernaryIf(Pointer p) { super(p); } - public TernaryIf(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public TernaryIf(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @ByVal Expr cond(); public native @ByVal Expr true_expr(); public native @ByVal Expr false_expr(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ThreadIdGuard.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ThreadIdGuard.java index 25ac95cfadf..532f6dee1ae 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ThreadIdGuard.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ThreadIdGuard.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ThreadLocalDebugInfo.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ThreadLocalDebugInfo.java index 1ad22c0465a..bb0b66118af 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ThreadLocalDebugInfo.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ThreadLocalDebugInfo.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ThreadLocalPythonObjects.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ThreadLocalPythonObjects.java index a166d5b0550..d1a1541026c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ThreadLocalPythonObjects.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ThreadLocalPythonObjects.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ThreadLocalState.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ThreadLocalState.java index e24fec1e99a..a4320b08790 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ThreadLocalState.java +++ 
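For orientation, a usage sketch of this module from Java; it assumes the (threshold, value) convenience constructor and forward(Tensor) are mapped by the presets as in previous releases, which this diff does not show:

import org.bytedeco.pytorch.*;

public class ThresholdSketch {
    // Replaces every element x <= 0.5 with 0.0 and leaves larger elements untouched.
    static Tensor clampSmallValues(Tensor input) {
        ThresholdImpl threshold = new ThresholdImpl(0.5, 0.0); // threshold=0.5, value=0.0 (assumed ctor)
        return threshold.forward(input);
    }
}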
b/pytorch/src/gen/java/org/bytedeco/pytorch/ThreadLocalState.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ThreadLocalStateGuard.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ThreadLocalStateGuard.java index f9ec90b76f8..d1f6b77ed48 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ThreadLocalStateGuard.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ThreadLocalStateGuard.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ThresholdImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ThresholdImpl.java index 190eccca7cf..f85c508bb37 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ThresholdImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ThresholdImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Threshold ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** Applies the Threshold function element-wise. - * See https://pytorch.org/docs/master/nn.html#torch.nn.Threshold to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.Threshold to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::ThresholdOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ThresholdImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ThresholdImplCloneable.java index 2850f512aa9..d7e5da13983 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ThresholdImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ThresholdImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class ThresholdImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ThresholdOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ThresholdOptions.java index c106ea5ddc7..4709b4bde49 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ThresholdOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ThresholdOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Timer.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Timer.java new file mode 100644 index 00000000000..6ef16980e22 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Timer.java @@ -0,0 +1,59 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +@Namespace("c10d") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class Timer extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. 
Invokes {@link Pointer#Pointer(Pointer)}. */ + public Timer(Pointer p) { super(p); } + + public enum Event { + kForwardStart((byte)(0)), + kBackwardComputeStart((byte)(1)), + kBackwardComputeEnd((byte)(2)), + kBackwardCommStart((byte)(3)), + kBackwardCommEnd((byte)(4)); + + public final byte value; + private Event(byte v) { this.value = v; } + private Event(Event e) { this.value = e.value; } + public Event intern() { for (Event e : values()) if (e.value == value) return e; return this; } + @Override public String toString() { return intern().name(); } + } + + // Record the current event, i.e., mark it as having occurred now. Default + // CPU implementation. + public native void record(Event event); + public native void record(@Cast("c10d::Timer::Event") byte event); + + // Return the difference between when two events occurred, in nanoseconds. + // Or nullopt if one of them hasn't been recorded. + public native @ByVal LongOptional measureDifference(Event start, Event end); + public native @ByVal LongOptional measureDifference(@Cast("c10d::Timer::Event") byte start, @Cast("c10d::Timer::Event") byte end); + + // Return host-side timestamp, or nullopt if it has not yet been recorded. + public native @ByVal LongOptional getTimestamp(Event event); + public native @ByVal LongOptional getTimestamp(@Cast("c10d::Timer::Event") byte event); + + // Return host-side time member variable corresponding to the given event. + public native @Cast("int64_t*") @ByRef LongPointer getTimeRef(Event event); + public native @Cast("int64_t*") @ByRef LongBuffer getTimeRef(@Cast("c10d::Timer::Event") byte event); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Token.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Token.java index 765e90313c3..810057c5595 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Token.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Token.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TorchDispatchModeTLS.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TorchDispatchModeTLS.java index e3c580d9e6b..304bd715320 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TorchDispatchModeTLS.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TorchDispatchModeTLS.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -41,31 +42,28 @@ public class TorchDispatchModeTLS extends Pointer { // If you're pushing an infra mode onto the stack, we expect // you to use set_mode public static native void 
push_non_infra_mode_onto_stack( - @SharedPtr("c10::SafePyObject") @ByVal SafePyObject mode); + @SharedPtr("c10::impl::PyObject_TorchDispatchMode") @ByVal PyObject_TorchDispatchMode mode); // Pops the top mode of the stack, // giving precedence to user modes before attempting to pop // any infra modes - public static native @Const @SharedPtr("c10::SafePyObject") @ByVal SafePyObject pop_stack(); + public static native @Const @SharedPtr("c10::impl::PyObject_TorchDispatchMode") @ByVal PyObject_TorchDispatchMode pop_stack(); // Returns the highest-priority infra mode on the stack, // along with its mode key. - public static native @Const @ByVal T_SafePyObjectTorchDispatchModeKey_T pop_highest_infra_mode(); + public static native @Const @ByVal T_PyObject_TorchDispatchModeTorchDispatchModeKey_T pop_highest_infra_mode(); - public static native @Const @SharedPtr("c10::SafePyObject") @ByRef SafePyObject get_stack_at(@Cast("int64_t") long idx); + public static native @Const @SharedPtr("c10::impl::PyObject_TorchDispatchMode") @ByRef PyObject_TorchDispatchMode get_stack_at( + @Cast("int64_t") long idx); public static native @Cast("int64_t") long stack_len(); - public static native @Const @ByVal SafePyObjectOptional get_mode( - TorchDispatchModeKey mode_key); - public static native @Const @ByVal SafePyObjectOptional get_mode( - @Cast("c10::impl::TorchDispatchModeKey") byte mode_key); - public static native @Const @ByVal SafePyObjectOptional unset_mode( - TorchDispatchModeKey mode_key); - public static native @Const @ByVal SafePyObjectOptional unset_mode( - @Cast("c10::impl::TorchDispatchModeKey") byte mode_key); + public static native @Const @ByVal PyObject_TorchDispatchModeOptional get_mode(TorchDispatchModeKey mode_key); + public static native @Const @ByVal PyObject_TorchDispatchModeOptional get_mode(@Cast("c10::impl::TorchDispatchModeKey") byte mode_key); + public static native @Const @ByVal PyObject_TorchDispatchModeOptional unset_mode(TorchDispatchModeKey mode_key); + public static native @Const @ByVal PyObject_TorchDispatchModeOptional unset_mode(@Cast("c10::impl::TorchDispatchModeKey") byte mode_key); public static native void set_mode( - @Const @SharedPtr("c10::SafePyObject") @ByRef SafePyObject mode, + @Const @SharedPtr("c10::impl::PyObject_TorchDispatchMode") @ByRef PyObject_TorchDispatchMode mode, TorchDispatchModeKey mode_key); public static native void set_mode( - @Const @SharedPtr("c10::SafePyObject") @ByRef SafePyObject mode, + @Const @SharedPtr("c10::impl::PyObject_TorchDispatchMode") @ByRef PyObject_TorchDispatchMode mode, @Cast("c10::impl::TorchDispatchModeKey") byte mode_key); public static native @Const @ByRef TorchDispatchModeTLS get_state(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TraceState.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TraceState.java new file mode 100644 index 00000000000..86a84676841 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TraceState.java @@ -0,0 +1,41 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import 
org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +@Namespace("torch::dynamo::autograd") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class TraceState extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public TraceState(Pointer p) { super(p); } + + public TraceState( + @StdVector SymIntOptional ss, + @Cast("size_t") long num_outputs) { super((Pointer)null); allocate(ss, num_outputs); } + private native void allocate( + @StdVector SymIntOptional ss, + @Cast("size_t") long num_outputs); + + public native void debug_asserts(); + public native @ByVal SymIntOptional next_sym_size(); + + public native @Cast("size_t") long sym_sizes_index(); public native TraceState sym_sizes_index(long setter); + public native @StdVector SymIntOptional sym_sizes(); public native TraceState sym_sizes(SymIntOptional setter); + public native @ByRef TensorVector outputs(); public native TraceState outputs(TensorVector setter); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TraceableFunction.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TraceableFunction.java index 4a7952a2485..db6ac509d8b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TraceableFunction.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TraceableFunction.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerDecoderImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerDecoderImpl.java index a3e2fa4533a..11cad937ab9 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerDecoderImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerDecoderImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -23,7 +24,7 @@ /** TransformerDecoder is a stack of N decoder layers. 
* See - * https://pytorch.org/docs/master/generated/torch.nn.TransformerDecoder.html + * https://pytorch.org/docs/main/generated/torch.nn.TransformerDecoder.html * to learn abouut the exact behavior of this decoder module * * See the documentation for {@code torch::nn::TransformerDecoderOptions} class to diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerDecoderImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerDecoderImplCloneable.java index e0a09023b31..d2c08206183 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerDecoderImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerDecoderImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class TransformerDecoderImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerDecoderLayerImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerDecoderLayerImpl.java index 37d65fa799d..bb974816c10 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerDecoderLayerImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerDecoderLayerImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -28,7 +29,7 @@ * Polosukhin. 2017. Attention is all you need. In Advances in Neural * Information Processing Systems, pages 6000-6010. Users may modify or * implement in a different way during application. See - * https://pytorch.org/docs/master/nn.html#transformer-layers to learn about + * https://pytorch.org/docs/main/nn.html#transformer-layers to learn about * the exact behavior of this module. 
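As a quick illustration, constructing such a decoder layer from the presets is a one-liner; a sketch only, under the assumption that libtorch's (d_model, nhead) convenience constructor is mapped unchanged in this release:

import org.bytedeco.pytorch.*;

public class DecoderLayerSketch {
    // d_model = 512 embedding width, nhead = 8 attention heads (illustrative values).
    static TransformerDecoderLayerImpl buildLayer() {
        return new TransformerDecoderLayerImpl(512, 8); // assumed two-argument constructor
    }
}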
* * See the documentation for {@code torch::nn::TransformerDecoderLayerOptions} class diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerDecoderLayerImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerDecoderLayerImplCloneable.java index 4be3281c13e..c0f4ca176b0 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerDecoderLayerImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerDecoderLayerImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class TransformerDecoderLayerImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerDecoderLayerOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerDecoderLayerOptions.java index 22e9d7ee2f9..c31c7347234 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerDecoderLayerOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerDecoderLayerOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerDecoderOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerDecoderOptions.java index 28cf4012f29..518f4b92a7d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerDecoderOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerDecoderOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerEncoderImpl.java 
b/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerEncoderImpl.java index f2ec2b0690a..9f44d8a2a37 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerEncoderImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerEncoderImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -23,7 +24,7 @@ /** TransformerEncoder module. * See - * https://pytorch.org/docs/master/generated/torch.nn.TransformerEncoder.html + * https://pytorch.org/docs/main/generated/torch.nn.TransformerEncoder.html * to learn abouut the exact behavior of this encoder layer module. * * See the documentation for {@code torch::nn::TransformerEncoder} class to learn diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerEncoderImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerEncoderImplCloneable.java index 84dd60da29c..460c9246c31 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerEncoderImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerEncoderImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class TransformerEncoderImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
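The only user-visible effect of the c10::optional to std::optional switch in the clone() overloads that follow is the default-value string inside the annotation; Java call sites are unchanged. A sketch, assuming the DeviceOptional(Device) and Device(String) constructors are generated by the presets as in earlier releases:

import org.bytedeco.pytorch.*;

public class CloneSketch {
    static Module duplicate(TransformerEncoderImpl encoder, boolean toCuda) {
        if (toCuda) {
            // Deep copy placed on the first CUDA device (device string is illustrative).
            return encoder.clone(new DeviceOptional(new Device("cuda:0"))); // assumed ctors
        }
        return encoder.clone(); // deep copy on the module's current device
    }
}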
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerEncoderLayerImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerEncoderLayerImpl.java index 51d24df51b4..387d26ca32c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerEncoderLayerImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerEncoderLayerImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -23,7 +24,7 @@ /** TransformerEncoderLayer module. * See - * https://pytorch.org/docs/master/generated/torch.nn.TransformerEncoderLayer.html + * https://pytorch.org/docs/main/generated/torch.nn.TransformerEncoderLayer.html * to learn abouut the exact behavior of this encoder layer model * * See the documentation for {@code torch::nn::TransformerEncoderLayer} class to diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerEncoderLayerImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerEncoderLayerImplCloneable.java index 4fdea95b083..17d2da73ac3 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerEncoderLayerImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerEncoderLayerImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class TransformerEncoderLayerImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerEncoderLayerOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerEncoderLayerOptions.java index 7c3ee0e1976..acc9055afe4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerEncoderLayerOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerEncoderLayerOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerEncoderOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerEncoderOptions.java index 8c501ab53e6..a28671eb78b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerEncoderOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerEncoderOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerImpl.java index 072bb4e5ad3..33f1ebf8255 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerImplCloneable.java index 777bb01d885..ee4342c4bbf 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; 
import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class TransformerImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerOptions.java index 54e891dd753..3af4931b409 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TransformerOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Tree.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Tree.java index dc62252b6f2..47e1969acde 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Tree.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Tree.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -25,13 +26,13 @@ public class Tree extends Pointer { public Tree(Pointer p) { super(p); } public Tree(int kind_) { super((Pointer)null); allocate(kind_); } - private native void allocate(int kind_); + @IntrusivePtr @Name("c10::make_intrusive") private native void allocate(int kind_); public native int kind(); public native @Cast("bool") boolean isAtom(); public native @Const @ByRef SourceRange range(); public native @StdString BytePointer stringValue(); public native @Cast("const torch::jit::TreeList*") @ByRef SymDimVector trees(); - public native @Const @ByRef TreeRef tree(@Cast("size_t") long i); + public native @IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree(@Cast("size_t") long i); public native void matchNumSubtrees(int k, @Cast("size_t") long expected_subtrees); public native void matchNumSubtreesD( diff --git 
a/pytorch/src/gen/java/org/bytedeco/pytorch/TreeRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TreeRef.java deleted file mode 100644 index a827f9811ae..00000000000 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TreeRef.java +++ /dev/null @@ -1,154 +0,0 @@ -// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE - -package org.bytedeco.pytorch; - -import org.bytedeco.pytorch.Allocator; -import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; -import org.bytedeco.pytorch.Module; -import org.bytedeco.javacpp.annotation.Cast; -import java.nio.*; -import org.bytedeco.javacpp.*; -import org.bytedeco.javacpp.annotation.*; - -import static org.bytedeco.javacpp.presets.javacpp.*; -import static org.bytedeco.openblas.global.openblas_nolapack.*; -import static org.bytedeco.openblas.global.openblas.*; - -import static org.bytedeco.pytorch.global.torch.*; - - -@Name("c10::intrusive_ptr") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class TreeRef extends Pointer { - static { Loader.load(); } - /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ - public TreeRef(Pointer p) { super(p); } - /** Native array allocator. Access with {@link Pointer#position(long)}. */ - public TreeRef(long size) { super((Pointer)null); allocateArray(size); } - private native void allocateArray(long size); - @Override public TreeRef position(long position) { - return (TreeRef)super.position(position); - } - @Override public TreeRef getPointer(long i) { - return new TreeRef((Pointer)this).offsetAddress(i); - } - - - public TreeRef() { super((Pointer)null); allocate(); } - @NoException(true) private native void allocate(); - - public TreeRef(@ByVal @Cast("std::nullptr_t*") PointerPointer arg0) { super((Pointer)null); allocate(arg0); } - @NoException(true) private native void allocate(@ByVal @Cast("std::nullptr_t*") PointerPointer arg0); - - // This constructor will not increase the ref counter for you. - // We use the tagged dispatch mechanism to explicitly mark this constructor - // to not increase the refcount - public TreeRef(Tree target, @ByVal DontIncreaseRefcount arg1) { super((Pointer)null); allocate(target, arg1); } - @NoException(true) private native void allocate(Tree target, @ByVal DontIncreaseRefcount arg1); - - - - public TreeRef(@ByRef(true) TreeRef rhs) { super((Pointer)null); allocate(rhs); } - @NoException(true) private native void allocate(@ByRef(true) TreeRef rhs); - - public native @ByRef @Name("operator =") @NoException(true) TreeRef put(@ByRef(true) TreeRef rhs); - - // Assignment is implemented using copy and swap. That's safe for self - // assignment. - // NOLINTNEXTLINE(bugprone-unhandled-self-assignment) - - public native @NoException(true) Tree get(); - - public native @ByRef @Name("operator *") @NoException(true) Tree multiply(); - - public native @Name("operator ->") @NoException(true) Tree access(); - - public native @Cast("bool") @Name("operator bool") @NoException(true) boolean asBoolean(); - - public native @NoException(true) void reset(); - - public native @NoException(true) void swap(@ByRef TreeRef rhs); - - // We do a lot of null-pointer checks in our code, good to have this be cheap. 
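Editorial note on this removal: with TreeRef gone, the intrusive_ptr is folded into Tree itself via @IntrusivePtr, so code that previously went through the TreeRef wrapper deleted above now handles Tree directly (the updated TreeView signatures appear further down in this diff). A minimal sketch of the new calling convention, using only methods shown elsewhere in this change:

import org.bytedeco.pytorch.*;

public class TreeSketch {
    // Walks a torch::jit::Tree without any TreeRef wrapper (hypothetical helper).
    static void inspect(Tree tree) {
        TreeView view = new TreeView(tree);   // constructor now takes Tree, not TreeRef
        System.out.println("kind=" + view.kind());
        SourceRange range = view.range();     // source location of the parsed node
        Tree same = view.tree();              // hands back the underlying Tree
    }
}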
- public native @Cast("bool") @NoException(true) boolean defined(); - - public native @Cast("uint32_t") @NoException(true) int use_count(); - - public native @Cast("uint32_t") @NoException(true) int weak_use_count(); - - public native @Cast("bool") @NoException(true) boolean unique(); - - /** - * Returns an owning (!) pointer to the underlying object and makes the - * intrusive_ptr instance invalid. That means the refcount is not decreased. - * You *must* put the returned pointer back into a intrusive_ptr using - * intrusive_ptr::reclaim(ptr) to properly destruct it. - * This is helpful for C APIs. - */ - public native @NoException(true) Tree release(); - - /** - * Takes an owning pointer to TTarget* and creates an intrusive_ptr that takes - * over ownership. That means the refcount is not increased. - * This is the counter-part to intrusive_ptr::release() and the pointer - * passed in *must* have been created using intrusive_ptr::release(). - */ - public static native @ByVal TreeRef reclaim(Tree owning_ptr); - - /** - * Takes an owning pointer to TTarget* and creates an intrusive_ptr - * representing a new reference, i.e. the raw pointer retains - * ownership. - */ - public static native @ByVal TreeRef reclaim_copy(Tree owning_ptr); - - /** - * Allocate a heap object with args and wrap it inside a intrusive_ptr and - * incref. This is a helper function to let make_intrusive() access private - * intrusive_ptr constructors. - */ - - /** - * Turn a new instance of TTarget (e.g., literally allocated - * using new TTarget(...) into an intrusive_ptr. If possible, - * use intrusive_ptr::make instead which statically guarantees - * that the allocation was done properly. - * - * At the moment, the only reason this method exists is because - * pybind11 holder types expect to be able to allocate in - * this way (because pybind11 handles the new allocation itself). - */ - public static native @ByVal TreeRef unsafe_steal_from_new(Tree raw_ptr); - - /** - * Turn an instance of TTarget that should not be reference counted - * (e.g., allocated into an arena with placement new) into an - * intrusive_ptr. This is gratuitously unsafe and should only be - * used if you can guarantee that the pointer will not escape and be - * refcounted as normal. - * - * {@code expected_decrefs} is a debugging parameter: it indicates the - * number of strong owners the intrusive_ptr_target in question is - * expected to get. In most use cases, this will likely be 1. - * - * The reason this method exists is for manually sharing - * StorageImpls across Tensors in the static runtime. It needs - * access to private intrusive_ptr members so that the refcounts can - * be initialized to custom values. - */ - public static native @ByVal TreeRef unsafe_adapt_non_heap_allocated( - Tree raw_ptr, - @Cast("uint32_t") int expected_decrefs); - - /** - * Turn a **non-owning raw pointer** to an intrusive_ptr. It is - * the moral equivalent of enable_shared_from_this on a shared pointer. - * - * This method is only valid for objects that are already live. If - * you are looking for the moral equivalent of unique_ptr(T*) - * constructor, see steal_from_new. 
- * - * TODO: https://github.com/pytorch/pytorch/issues/56482 - */ - public static native @ByVal TreeRef unsafe_reclaim_from_nonowning(Tree raw_ptr); -} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TreeStringMap.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TreeStringMap.java new file mode 100644 index 00000000000..0b4a650411a --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TreeStringMap.java @@ -0,0 +1,50 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + +@Name("std::unordered_map") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class TreeStringMap extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public TreeStringMap(Pointer p) { super(p); } + public TreeStringMap() { allocate(); } + private native void allocate(); + public native @Name("operator =") @ByRef TreeStringMap put(@ByRef TreeStringMap x); + + public boolean empty() { return size() == 0; } + public native long size(); + + @Index public native @StdString BytePointer get(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree i); + public native TreeStringMap put(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree i, BytePointer value); + @ValueSetter @Index public native TreeStringMap put(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree i, @StdString String value); + + public native void erase(@ByVal Iterator pos); + public native @ByVal Iterator begin(); + public native @ByVal Iterator end(); + @NoOffset @Name("iterator") public static class Iterator extends Pointer { + public Iterator(Pointer p) { super(p); } + public Iterator() { } + + public native @Name("operator ++") @ByRef Iterator increment(); + public native @Name("operator ==") boolean equals(@ByRef Iterator it); + public native @Name("operator *().first") @MemberGetter @IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree first(); + public native @Name("operator *().second") @MemberGetter @StdString BytePointer second(); + } +} + diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TreeView.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TreeView.java index 3d06180de3a..1d968f327ec 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TreeView.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TreeView.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import 
static org.bytedeco.pytorch.global.torch.*; @@ -111,11 +112,11 @@ public class TreeView extends Pointer { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ public TreeView(Pointer p) { super(p); } - public TreeView(@ByVal TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@ByVal TreeRef tree); - public native @ByVal TreeRef tree(); + public TreeView(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); + public native @IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree(); public native @Const @ByRef SourceRange range(); - public native @ByVal @Name("operator torch::jit::TreeRef") TreeRef asTreeRef(); + public native @Name("operator torch::jit::TreeRef") @IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree asTree(); public native int kind(); public native void dump(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TripletMarginLossImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TripletMarginLossImpl.java index 5122ca9d766..2b291661fbe 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TripletMarginLossImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TripletMarginLossImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -27,7 +28,7 @@ * samples. A triplet is composed by {@code a}, {@code p} and {@code n} (i.e., {@code anchor}, * {@code positive examples} and {@code negative examples} respectively). The * shapes of all input tensors should be :math:{@code (N, D)}. - * See https://pytorch.org/docs/master/nn.html#torch.nn.TripletMarginLoss to + * See https://pytorch.org/docs/main/nn.html#torch.nn.TripletMarginLoss to * learn about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::TripletMarginLossOptions} class to diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TripletMarginLossImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TripletMarginLossImplCloneable.java index ad737e830b5..13f64617d71 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TripletMarginLossImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TripletMarginLossImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class TripletMarginLossImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TripletMarginLossOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TripletMarginLossOptions.java index 901deced051..06cd3245f20 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TripletMarginLossOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TripletMarginLossOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TripletMarginWithDistanceLossImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TripletMarginWithDistanceLossImpl.java index 242a2cf4aa6..4cc9f9f7111 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TripletMarginWithDistanceLossImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TripletMarginWithDistanceLossImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -29,7 +30,7 @@ * and positive example ("positive distance") and the anchor and negative * example ("negative distance"). 
* See - * https://pytorch.org/docs/master/nn.html#torch.nn.TripletMarginWithDistanceLoss + * https://pytorch.org/docs/main/nn.html#torch.nn.TripletMarginWithDistanceLoss * to learn about the exact behavior of this module. * * See the documentation for {@code torch::nn::TripletMarginWithDistanceLossOptions} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TripletMarginWithDistanceLossImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TripletMarginWithDistanceLossImplCloneable.java index 843c61c63c4..2658e6b65ef 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TripletMarginWithDistanceLossImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TripletMarginWithDistanceLossImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class TripletMarginWithDistanceLossImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TripletMarginWithDistanceLossOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TripletMarginWithDistanceLossOptions.java index 365bfb1b0e7..edf7315c790 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TripletMarginWithDistanceLossOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TripletMarginWithDistanceLossOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional @@ -45,7 +46,7 @@ public class TripletMarginWithDistanceLossOptions extends Pointer { return new TripletMarginWithDistanceLossOptions((Pointer)this).offsetAddress(i); } - public native @Cast("c10::optional*") @ByRef @NoException(true) Pointer distance_function(); + public native @Cast("std::optional*") @ByRef @NoException(true) Pointer distance_function(); public native @ByRef @NoException(true) DoublePointer margin(); public native @Cast("bool*") @ByRef @NoException(true) BoolPointer swap(); public native @ByRef @NoException(true) LossReduction reduction(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Tuple.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Tuple.java index b656e64ec66..71703d991d6 100644 --- 
a/pytorch/src/gen/java/org/bytedeco/pytorch/Tuple.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Tuple.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -26,28 +27,28 @@ public class Tuple extends Pointer { // named tuples have additional type information, so we // directly create them tagged - public static native @ByVal TuplePtr createNamed( + public static native @IntrusivePtr("c10::ivalue::Tuple") @Cast({"", "c10::intrusive_ptr&"}) Tuple createNamed( @ByVal IValueVector elements_, @ByVal Type.TypePtr type_); - public static native @ByVal TuplePtr createNamed( + public static native @IntrusivePtr("c10::ivalue::Tuple") @Cast({"", "c10::intrusive_ptr&"}) Tuple createNamed( @ByVal TupleElements elements_, @SharedPtr TupleType type_); // MSVC apparently can't disambiguate the other two overloads of // create when passed an initializer_list without this. - public static native @ByVal TuplePtr create(@ByVal IValueVector elements_); + public static native @IntrusivePtr("c10::ivalue::Tuple") @Cast({"", "c10::intrusive_ptr&"}) Tuple create(@ByVal IValueVector elements_); - public static native @ByVal TuplePtr create(@ByVal TupleElements elements_); + public static native @IntrusivePtr("c10::ivalue::Tuple") @Cast({"", "c10::intrusive_ptr&"}) Tuple create(@ByVal TupleElements elements_); - public static native @ByVal TuplePtr create(@ByVal IValueArrayRef elements_); + public static native @IntrusivePtr("c10::ivalue::Tuple") @Cast({"", "c10::intrusive_ptr&"}) Tuple create(@ByVal IValueArrayRef elements_); - public static native @ByVal TuplePtr create(@ByVal IValue e1); + public static native @IntrusivePtr("c10::ivalue::Tuple") @Cast({"", "c10::intrusive_ptr&"}) Tuple create(@ByVal IValue e1); - public static native @ByVal TuplePtr create(@ByVal IValue e1, @ByVal IValue e2); + public static native @IntrusivePtr("c10::ivalue::Tuple") @Cast({"", "c10::intrusive_ptr&"}) Tuple create(@ByVal IValue e1, @ByVal IValue e2); - public static native @ByVal TuplePtr create(@ByVal IValue e1, @ByVal IValue e2, @ByVal IValue e3); + public static native @IntrusivePtr("c10::ivalue::Tuple") @Cast({"", "c10::intrusive_ptr&"}) Tuple create(@ByVal IValue e1, @ByVal IValue e2, @ByVal IValue e3); // Again, it would be nice to make this noncopyable, but there's a // lot of extant code that copies Tuples. 
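Illustrative note (not part of the generated sources): the hunks above and below replace the dedicated intrusive-pointer wrapper classes (TreeRef, TuplePtr, ObjPtr, and similar) with @IntrusivePtr-annotated uses of the target classes themselves, so factory methods such as Tuple.create(...) now hand back the target type directly. The following is a minimal migration sketch for calling code, assuming the numeric IValue constructors generated by the presets; the class and variable names are purely illustrative.

import org.bytedeco.pytorch.IValue;
import org.bytedeco.pytorch.Tuple;

public class TupleCreateMigration {
    public static void main(String[] args) {
        // With the 2.3.x presets, create(...) returned the TuplePtr wrapper and the
        // underlying Tuple had to be unwrapped with get():
        //   TuplePtr ptr = Tuple.create(new IValue(1), new IValue(2));
        //   Tuple t = ptr.get();

        // With the 2.4.x presets, the wrapper class is removed; create(...) returns
        // the Tuple directly and reference counting is expressed via @IntrusivePtr.
        Tuple t = Tuple.create(new IValue(1), new IValue(2));
        System.out.println("created tuple: " + t);
    }
}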
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TupleElements.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TupleElements.java index a4fc05fa7bc..ee06fb7872a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TupleElements.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TupleElements.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TupleLiteral.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TupleLiteral.java index 001dcc739e4..9488ef23209 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TupleLiteral.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TupleLiteral.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,8 +25,8 @@ public class TupleLiteral extends Expr { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public TupleLiteral(Pointer p) { super(p); } - public TupleLiteral(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public TupleLiteral(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @ByVal ExprList inputs(); public static native @ByVal TupleLiteral create( @Const @ByRef SourceRange range, diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TuplePtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TuplePtr.java deleted file mode 100644 index 8cd45265936..00000000000 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TuplePtr.java +++ /dev/null @@ -1,154 +0,0 @@ -// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE - -package org.bytedeco.pytorch; - -import org.bytedeco.pytorch.Allocator; -import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; -import org.bytedeco.pytorch.Module; -import org.bytedeco.javacpp.annotation.Cast; -import java.nio.*; -import org.bytedeco.javacpp.*; -import org.bytedeco.javacpp.annotation.*; - -import static org.bytedeco.javacpp.presets.javacpp.*; -import static org.bytedeco.openblas.global.openblas_nolapack.*; -import static org.bytedeco.openblas.global.openblas.*; - -import static org.bytedeco.pytorch.global.torch.*; - - -@Name("c10::intrusive_ptr") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class TuplePtr extends Pointer { - static { Loader.load(); } - /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ - public TuplePtr(Pointer p) { super(p); } - /** Native array allocator. Access with {@link Pointer#position(long)}. */ - public TuplePtr(long size) { super((Pointer)null); allocateArray(size); } - private native void allocateArray(long size); - @Override public TuplePtr position(long position) { - return (TuplePtr)super.position(position); - } - @Override public TuplePtr getPointer(long i) { - return new TuplePtr((Pointer)this).offsetAddress(i); - } - - - public TuplePtr() { super((Pointer)null); allocate(); } - @NoException(true) private native void allocate(); - - public TuplePtr(@ByVal @Cast("std::nullptr_t*") PointerPointer arg0) { super((Pointer)null); allocate(arg0); } - @NoException(true) private native void allocate(@ByVal @Cast("std::nullptr_t*") PointerPointer arg0); - - // This constructor will not increase the ref counter for you. - // We use the tagged dispatch mechanism to explicitly mark this constructor - // to not increase the refcount - public TuplePtr(Tuple target, @ByVal DontIncreaseRefcount arg1) { super((Pointer)null); allocate(target, arg1); } - @NoException(true) private native void allocate(Tuple target, @ByVal DontIncreaseRefcount arg1); - - - - public TuplePtr(@ByRef(true) TuplePtr rhs) { super((Pointer)null); allocate(rhs); } - @NoException(true) private native void allocate(@ByRef(true) TuplePtr rhs); - - public native @ByRef @Name("operator =") @NoException(true) TuplePtr put(@ByRef(true) TuplePtr rhs); - - // Assignment is implemented using copy and swap. That's safe for self - // assignment. 
- // NOLINTNEXTLINE(bugprone-unhandled-self-assignment) - - public native @NoException(true) Tuple get(); - - public native @ByRef @Name("operator *") @NoException(true) Tuple multiply(); - - public native @Name("operator ->") @NoException(true) Tuple access(); - - public native @Cast("bool") @Name("operator bool") @NoException(true) boolean asBoolean(); - - public native @NoException(true) void reset(); - - public native @NoException(true) void swap(@ByRef TuplePtr rhs); - - // We do a lot of null-pointer checks in our code, good to have this be cheap. - public native @Cast("bool") @NoException(true) boolean defined(); - - public native @Cast("uint32_t") @NoException(true) int use_count(); - - public native @Cast("uint32_t") @NoException(true) int weak_use_count(); - - public native @Cast("bool") @NoException(true) boolean unique(); - - /** - * Returns an owning (!) pointer to the underlying object and makes the - * intrusive_ptr instance invalid. That means the refcount is not decreased. - * You *must* put the returned pointer back into a intrusive_ptr using - * intrusive_ptr::reclaim(ptr) to properly destruct it. - * This is helpful for C APIs. - */ - public native @NoException(true) Tuple release(); - - /** - * Takes an owning pointer to TTarget* and creates an intrusive_ptr that takes - * over ownership. That means the refcount is not increased. - * This is the counter-part to intrusive_ptr::release() and the pointer - * passed in *must* have been created using intrusive_ptr::release(). - */ - public static native @ByVal TuplePtr reclaim(Tuple owning_ptr); - - /** - * Takes an owning pointer to TTarget* and creates an intrusive_ptr - * representing a new reference, i.e. the raw pointer retains - * ownership. - */ - public static native @ByVal TuplePtr reclaim_copy(Tuple owning_ptr); - - /** - * Allocate a heap object with args and wrap it inside a intrusive_ptr and - * incref. This is a helper function to let make_intrusive() access private - * intrusive_ptr constructors. - */ - - /** - * Turn a new instance of TTarget (e.g., literally allocated - * using new TTarget(...) into an intrusive_ptr. If possible, - * use intrusive_ptr::make instead which statically guarantees - * that the allocation was done properly. - * - * At the moment, the only reason this method exists is because - * pybind11 holder types expect to be able to allocate in - * this way (because pybind11 handles the new allocation itself). - */ - public static native @ByVal TuplePtr unsafe_steal_from_new(Tuple raw_ptr); - - /** - * Turn an instance of TTarget that should not be reference counted - * (e.g., allocated into an arena with placement new) into an - * intrusive_ptr. This is gratuitously unsafe and should only be - * used if you can guarantee that the pointer will not escape and be - * refcounted as normal. - * - * {@code expected_decrefs} is a debugging parameter: it indicates the - * number of strong owners the intrusive_ptr_target in question is - * expected to get. In most use cases, this will likely be 1. - * - * The reason this method exists is for manually sharing - * StorageImpls across Tensors in the static runtime. It needs - * access to private intrusive_ptr members so that the refcounts can - * be initialized to custom values. - */ - public static native @ByVal TuplePtr unsafe_adapt_non_heap_allocated( - Tuple raw_ptr, - @Cast("uint32_t") int expected_decrefs); - - /** - * Turn a **non-owning raw pointer** to an intrusive_ptr. 
It is - * the moral equivalent of enable_shared_from_this on a shared pointer. - * - * This method is only valid for objects that are already live. If - * you are looking for the moral equivalent of unique_ptr(T*) - * constructor, see steal_from_new. - * - * TODO: https://github.com/pytorch/pytorch/issues/56482 - */ - public static native @ByVal TuplePtr unsafe_reclaim_from_nonowning(Tuple raw_ptr); -} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TupleType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TupleType.java index 845886237fd..5efcfaa6327 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TupleType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TupleType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Type.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Type.java index 3d3fa15e26d..03d62d07110 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Type.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Type.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace detail @@ -106,7 +107,7 @@ public class Type extends Pointer { // // Takes a custom printer that users can pass in to customize the output of // this method. 
- public native @StdString BytePointer annotation_str(@ByVal TypePrinter printer); + public native @StdString BytePointer annotation_str(@Const @ByRef TypePrinter printer); public native @StdString BytePointer annotation_str(); // Returns a human readable string that includes additional information like diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TypeArrayRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TypeArrayRef.java index 5a2bdc7f5a8..741a99fd0c8 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TypeArrayRef.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TypeArrayRef.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TypeEnv.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TypeEnv.java index 47c9e5905a9..549230d9315 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TypeEnv.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TypeEnv.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TypeError.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TypeError.java deleted file mode 100644 index 2c2eded530e..00000000000 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TypeError.java +++ /dev/null @@ -1,29 +0,0 @@ -// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE - -package org.bytedeco.pytorch; - -import org.bytedeco.pytorch.Allocator; -import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; -import org.bytedeco.pytorch.Module; -import org.bytedeco.javacpp.annotation.Cast; -import java.nio.*; -import org.bytedeco.javacpp.*; -import org.bytedeco.javacpp.annotation.*; - -import static org.bytedeco.javacpp.presets.javacpp.*; -import static org.bytedeco.openblas.global.openblas_nolapack.*; -import static org.bytedeco.openblas.global.openblas.*; - -import static org.bytedeco.pytorch.global.torch.*; - - -// Used in ATen for invalid types. These turn into -// TypeError when they cross to Python. -@Namespace("c10") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class TypeError extends Error { - static { Loader.load(); } - /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ - public TypeError(Pointer p) { super(p); } - -} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TypeIdentifier.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TypeIdentifier.java index 089ffbe691b..cfcdeeeb596 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TypeIdentifier.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TypeIdentifier.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TypeMeta.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TypeMeta.java index 7ac2097e510..62f4ee4ff8a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TypeMeta.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TypeMeta.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TypeMetaOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TypeMetaOptional.java index 2e75748906a..9956001868e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TypeMetaOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TypeMetaOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class TypeMetaOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TypePtrOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TypePtrOptional.java index 7c171476541..590a4644958 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TypePtrOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TypePtrOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class TypePtrOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/TypeVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/TypeVector.java index 4f6a84dac40..076e32774e8 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/TypeVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/TypeVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/UnaryOp.java b/pytorch/src/gen/java/org/bytedeco/pytorch/UnaryOp.java index a77d79a23db..86706000444 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/UnaryOp.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/UnaryOp.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,7 +25,7 @@ public class UnaryOp extends Expr { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public UnaryOp(Pointer p) { super(p); } - public UnaryOp(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public UnaryOp(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public static native @ByVal UnaryOp create(@Const @ByRef SourceRange range, int kind, @Const @ByRef Expr expr); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/UndefinedTensorImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/UndefinedTensorImpl.java index 4d106774c82..f6c0e9db1e8 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/UndefinedTensorImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/UndefinedTensorImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/UnflattenImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/UnflattenImpl.java index cbf9f84ecf5..f7a7ca85ddd 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/UnflattenImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/UnflattenImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -22,7 +23,7 @@ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /** A placeholder for unflatten operator - * See https://pytorch.org/docs/master/generated/torch.nn.Unflatten.html to + * See https://pytorch.org/docs/main/generated/torch.nn.Unflatten.html to * learn about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::UnflattenOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/UnflattenImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/UnflattenImplCloneable.java index 00044bc8d64..54d785a7a7b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/UnflattenImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/UnflattenImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class UnflattenImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/UnflattenOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/UnflattenOptions.java index ed31909bb3a..6b53fd6e74c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/UnflattenOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/UnflattenOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/UnfoldImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/UnfoldImpl.java index e0b6dd05e50..f090d35d84f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/UnfoldImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/UnfoldImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -21,7 +22,7 @@ // ============================================================================ /** Applies unfold over a 4-D input. - * See https://pytorch.org/docs/master/nn.html#torch.nn.Unfold to learn about + * See https://pytorch.org/docs/main/nn.html#torch.nn.Unfold to learn about * the exact behavior of this module. 
* * See the documentation for {@code torch::nn::UnfoldOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/UnfoldImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/UnfoldImplCloneable.java index 00ab692334a..e4e50c2e4f9 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/UnfoldImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/UnfoldImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class UnfoldImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/UnfoldOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/UnfoldOptions.java index aef2aba253a..26061510e19 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/UnfoldOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/UnfoldOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace functional diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/UnionType.java b/pytorch/src/gen/java/org/bytedeco/pytorch/UnionType.java index f19f04a6334..a4d7e1d1dfb 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/UnionType.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/UnionType.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/UniqueVoidPtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/UniqueVoidPtr.java index 490a7489628..257aa3a27ab 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/UniqueVoidPtr.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/UniqueVoidPtr.java @@ -4,7 +4,6 @@ import 
org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Unpickler.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Unpickler.java index 12b3e5f58cf..8d277373d37 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Unpickler.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Unpickler.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/UpsampleImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/UpsampleImpl.java index 8747042ea58..cae5d1ac0a9 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/UpsampleImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/UpsampleImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -22,7 +23,7 @@ /** Upsamples a given multi-channel 1D (temporal), 2D (spatial) or 3D * (volumetric) data. - * See https://pytorch.org/docs/master/nn.html#torch.nn.Upsample to learn + * See https://pytorch.org/docs/main/nn.html#torch.nn.Upsample to learn * about the exact behavior of this module. 
* * See the documentation for {@code torch::nn::UpsampleOptions} class to learn what diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/UpsampleImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/UpsampleImplCloneable.java index 5c93181455a..b89e656d0fd 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/UpsampleImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/UpsampleImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class UpsampleImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/UpsampleMode.java b/pytorch/src/gen/java/org/bytedeco/pytorch/UpsampleMode.java index ec485e288b3..fa1af5acb53 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/UpsampleMode.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/UpsampleMode.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/UpsampleOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/UpsampleOptions.java index 13328e5c20f..be18fe3ba99 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/UpsampleOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/UpsampleOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Use.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Use.java index 8e7e8a61856..3ce669f465d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Use.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Use.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import 
org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Value.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Value.java index 1d573735a53..9a8f589114d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Value.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Value.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -31,7 +32,7 @@ public class Value extends Pointer { public native Value setType(@ByVal Type.TypePtr type); public native void inferTypeFrom(@Const @ByRef Tensor output); public native void inferTypeFrom( - @Const @ByRef ObjPtr output); + @IntrusivePtr("c10::ivalue::Object") @Cast({"", "c10::intrusive_ptr&"}) Obj output); public native @Const @ByRef Type.TypePtr type(); public native @Cast("bool") boolean requires_grad(); public native @Cast("bool") boolean isCompleteTensor(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ValueArrayRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ValueArrayRef.java index afb3eebf77b..f15a9d0727f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ValueArrayRef.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ValueArrayRef.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ValueError.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ValueError.java deleted file mode 100644 index 32d406a81cd..00000000000 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ValueError.java +++ /dev/null @@ -1,29 +0,0 @@ -// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE - -package org.bytedeco.pytorch; - -import org.bytedeco.pytorch.Allocator; -import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; -import org.bytedeco.pytorch.Module; -import org.bytedeco.javacpp.annotation.Cast; -import java.nio.*; -import org.bytedeco.javacpp.*; -import org.bytedeco.javacpp.annotation.*; - -import static org.bytedeco.javacpp.presets.javacpp.*; -import static org.bytedeco.openblas.global.openblas_nolapack.*; -import static 
org.bytedeco.openblas.global.openblas.*; - -import static org.bytedeco.pytorch.global.torch.*; - - -// Used in ATen for invalid values. These turn into -// ValueError when they cross to Python. -@Namespace("c10") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class ValueError extends Error { - static { Loader.load(); } - /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ - public ValueError(Pointer p) { super(p); } - -} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ValueOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ValueOptional.java index f3329634c87..c04c989458b 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ValueOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ValueOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class ValueOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ValueValueMap.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ValueValueMap.java index 1ddb183b00c..ff87c970fb7 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ValueValueMap.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ValueValueMap.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ValueVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ValueVector.java index 2c1ba685423..476178060b3 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ValueVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ValueVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ValueWrap.java 
b/pytorch/src/gen/java/org/bytedeco/pytorch/ValueWrap.java index 121fe0d63c8..3e3add40386 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ValueWrap.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ValueWrap.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Var.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Var.java index c8be0826110..b4af9323dd2 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Var.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Var.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,8 +25,8 @@ public class Var extends Expr { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ public Var(Pointer p) { super(p); } - public Var(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public Var(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @ByVal Ident name(); public static native @ByVal Var create(@Const @ByRef SourceRange range, @Const @ByRef Ident name); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/VarMaybe.java b/pytorch/src/gen/java/org/bytedeco/pytorch/VarMaybe.java index 44ae63116dc..aeca92647a7 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/VarMaybe.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/VarMaybe.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,8 +25,8 @@ public class VarMaybe extends TreeView { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public VarMaybe(Pointer p) { super(p); } - public VarMaybe(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public VarMaybe(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); /* implicit */ public VarMaybe(@Const @ByRef Var tree) { super((Pointer)null); allocate(tree); } private native void allocate(@Const @ByRef Var tree); public native @Cast("bool") boolean present(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/VariableHooksInterface.java b/pytorch/src/gen/java/org/bytedeco/pytorch/VariableHooksInterface.java index 7f931ca0b20..50a6cf7125d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/VariableHooksInterface.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/VariableHooksInterface.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace torch::autograd @@ -26,7 +27,8 @@ public class VariableHooksInterface extends Pointer { public native @ByVal TensorBase tensor_data(@Const @ByRef TensorBase arg0); public native @ByVal TensorBase variable_data(@Const @ByRef TensorBase arg0); - public native @SharedPtr Node grad_fn(@Const @ByRef TensorBase arg0); + public native @SharedPtr Node grad_fn( + @Const @ByRef TensorBase arg0); public native void remove_hook(@Const @ByRef TensorBase arg0, @Cast("unsigned") int pos); public native @Cast("bool") boolean is_view(@Const @ByRef TensorBase arg0); @@ -39,8 +41,21 @@ public class VariableHooksInterface extends Pointer { public native @Cast("int64_t") long _version(@Const @ByRef TensorBase arg0); public native void retain_grad(@Const @ByRef TensorBase arg0); public native @Cast("bool") boolean retains_grad(@Const @ByRef TensorBase arg0); - public native void _backward(@Const @ByRef Tensor arg0, @ByVal TensorArrayRef arg1, @Const @ByRef TensorOptional arg2, @ByVal BoolOptional arg3, @Cast("bool") boolean arg4); - public native void _backward(@Const @ByRef Tensor arg0, @ByVal TensorVector arg1, @Const @ByRef TensorOptional arg2, @ByVal BoolOptional arg3, @Cast("bool") boolean arg4); + public native void _backward( + @Const @ByRef Tensor arg0, + @ByVal TensorArrayRef arg1, + @Const @ByRef TensorOptional arg2, + @ByVal BoolOptional arg3, + @Cast("bool") boolean arg4); + public native void _backward( + @Const @ByRef Tensor arg0, + @ByVal TensorVector arg1, + @Const @ByRef TensorOptional arg2, + @ByVal BoolOptional arg3, + @Cast("bool") boolean arg4); public native void requires_grad_(@Const @ByRef TensorBase arg0, @Cast("bool") boolean arg1); - public native void basic_autograd_not_implemented_fallback(@Const @ByRef OperatorHandle op, @ByVal DispatchKeySet dispatch_keys, IValueVector stack); + public native void basic_autograd_not_implemented_fallback( + @Const @ByRef OperatorHandle op, + @ByVal DispatchKeySet dispatch_keys, + IValueVector stack); } diff 
--git a/pytorch/src/gen/java/org/bytedeco/pytorch/VariableInfo.java b/pytorch/src/gen/java/org/bytedeco/pytorch/VariableInfo.java index a7e0b27fc0f..412f856cc46 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/VariableInfo.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/VariableInfo.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/VariableVersion.java b/pytorch/src/gen/java/org/bytedeco/pytorch/VariableVersion.java index 63a34243158..95eb5d92416 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/VariableVersion.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/VariableVersion.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -59,7 +60,7 @@ // can introduce race conditions when we are running the forward pass in // multi-thread scenarios, thus making the forward pass not thread-safe anymore, // which breaks the invariant. -@Namespace("c10") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@Namespace("c10") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class VariableVersion extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/WarnAlways.java b/pytorch/src/gen/java/org/bytedeco/pytorch/WarnAlways.java index fa982c1e7c7..f3334fd8e0d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/WarnAlways.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/WarnAlways.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Warning.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Warning.java index e53d5a75a31..f09b8b1172a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/Warning.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Warning.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/WarningHandler.java b/pytorch/src/gen/java/org/bytedeco/pytorch/WarningHandler.java index ffdaef86318..31ad20c6811 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/WarningHandler.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/WarningHandler.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/WarningHandlerGuard.java b/pytorch/src/gen/java/org/bytedeco/pytorch/WarningHandlerGuard.java index 63f868ea91f..52ad2c1a4d8 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/WarningHandlerGuard.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/WarningHandlerGuard.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git 
a/pytorch/src/gen/java/org/bytedeco/pytorch/WarningVariant.java b/pytorch/src/gen/java/org/bytedeco/pytorch/WarningVariant.java index cd14f008c17..817123eb882 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/WarningVariant.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/WarningVariant.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/WeakIValue.java b/pytorch/src/gen/java/org/bytedeco/pytorch/WeakIValue.java index 19bcdcb5ddd..9fc90716e5c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/WeakIValue.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/WeakIValue.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/WeakOrStrongCompilationUnit.java b/pytorch/src/gen/java/org/bytedeco/pytorch/WeakOrStrongCompilationUnit.java index b2a41d9fbc3..b24f306e0ab 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/WeakOrStrongCompilationUnit.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/WeakOrStrongCompilationUnit.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -26,16 +27,18 @@ public class WeakOrStrongCompilationUnit extends Pointer { public WeakOrStrongCompilationUnit(Pointer p) { super(p); } public WeakOrStrongCompilationUnit( - @SharedPtr CompilationUnit shared_cu) { super((Pointer)null); allocate(shared_cu); } + @SharedPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit shared_cu) { super((Pointer)null); allocate(shared_cu); } private native void allocate( - @SharedPtr CompilationUnit shared_cu); + @SharedPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit shared_cu); - public native @SharedPtr CompilationUnit getStrongRefOrThrow(); + public native @SharedPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit getStrongRefOrThrow(); + + public native @WeakPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit getWeakRefOrThrow(); public native @Cast("bool") boolean holdingStrongRef(); public native 
@Cast("bool") boolean holdingEmptyStrongRef(); - public native @ByRef @Cast("c10::optional >*") Pointer strong_ptr_(); public native WeakOrStrongCompilationUnit strong_ptr_(Pointer setter); - public native @ByRef @Cast("c10::optional >*") Pointer weak_ptr_(); public native WeakOrStrongCompilationUnit weak_ptr_(Pointer setter); + public native @ByRef @Cast("std::optional >*") Pointer strong_ptr_(); public native WeakOrStrongCompilationUnit strong_ptr_(Pointer setter); + public native @ByRef @Cast("std::optional >*") Pointer weak_ptr_(); public native WeakOrStrongCompilationUnit weak_ptr_(Pointer setter); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/WeakOrStrongTypePtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/WeakOrStrongTypePtr.java index d763d1ac1b6..c1ce2b4b66e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/WeakOrStrongTypePtr.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/WeakOrStrongTypePtr.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/WeakStorage.java b/pytorch/src/gen/java/org/bytedeco/pytorch/WeakStorage.java deleted file mode 100644 index f0b7bfa9d7c..00000000000 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/WeakStorage.java +++ /dev/null @@ -1,102 +0,0 @@ -// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE - -package org.bytedeco.pytorch; - -import org.bytedeco.pytorch.Allocator; -import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; -import org.bytedeco.pytorch.Module; -import org.bytedeco.javacpp.annotation.Cast; -import java.nio.*; -import org.bytedeco.javacpp.*; -import org.bytedeco.javacpp.annotation.*; - -import static org.bytedeco.javacpp.presets.javacpp.*; -import static org.bytedeco.openblas.global.openblas_nolapack.*; -import static org.bytedeco.openblas.global.openblas.*; - -import static org.bytedeco.pytorch.global.torch.*; - - -@Name("c10::weak_intrusive_ptr") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class WeakStorage extends Pointer { - static { Loader.load(); } - /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ - public WeakStorage(Pointer p) { super(p); } - - - public WeakStorage(@Const @ByRef StorageImplPtr ptr) { super((Pointer)null); allocate(ptr); } - private native void allocate(@Const @ByRef StorageImplPtr ptr); - - public WeakStorage(@ByRef(true) WeakStorage rhs) { super((Pointer)null); allocate(rhs); } - @NoException(true) private native void allocate(@ByRef(true) WeakStorage rhs); - - public native @ByRef @Name("operator =") @NoException(true) WeakStorage put(@ByRef(true) WeakStorage rhs); - - public native @ByRef @Name("operator =") @NoException(true) WeakStorage put( - @Const @ByRef StorageImplPtr rhs); - - public native @NoException(true) void reset(); - - public native @NoException(true) void swap(@ByRef WeakStorage rhs); - - // NB: This should ONLY be used by the std::hash implementation - // for weak_intrusive_ptr. 
Another way you could do this is - // friend std::hash, but this triggers two - // bugs: - // - // (1) It triggers an nvcc bug, where std::hash in a friend class - // declaration gets preprocessed into hash, which then cannot - // actually be found. The error in this case looks like: - // - // error: no template named 'hash'; did you mean 'std::hash'? - // - // (2) On OS X, std::hash is declared as a struct, not a class. - // This twings: - // - // error: class 'hash' was previously declared as a struct - // [-Werror,-Wmismatched-tags] - // - // Both of these are work-aroundable, but on the whole, I decided - // it would be simpler and easier to make work if we just expose - // an unsafe getter for target_ - // - public native @NoException(true) StorageImpl _unsafe_get_target(); - - public native @Cast("uint32_t") @NoException(true) int use_count(); - - public native @Cast("uint32_t") @NoException(true) int weak_use_count(); - - public native @Cast("bool") @NoException(true) boolean expired(); - - public native @ByVal @NoException(true) StorageImplPtr lock(); - - /** - * Returns an owning (but still only weakly referenced) pointer to the - * underlying object and makes the weak_intrusive_ptr instance invalid. - * That means the weakcount is not decreased. - * You *must* put the returned pointer back into a weak_intrusive_ptr using - * weak_intrusive_ptr::reclaim(ptr) to properly destruct it. - * This is helpful for C APIs. - */ - public native @NoException(true) StorageImpl release(); - - /** - * Takes an owning (but must be weakly referenced) pointer to TTarget* and - * creates a weak_intrusive_ptr that takes over ownership. - * This means that the weakcount is not increased. - * This is the counter-part to weak_intrusive_ptr::release() and the pointer - * passed in *must* have been created using weak_intrusive_ptr::release(). - */ - public static native @ByVal WeakStorage reclaim(StorageImpl owning_weak_ptr); - - /** - * Takes a pointer to TTarget* (may be weak or strong) and creates a - * new weak_intrusive_ptr representing a new weak reference, i.e. - * the raw pointer retains ownership. 
- */ - public static native @ByVal WeakStorage reclaim_copy(StorageImpl owning_ptr); - - - -} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/WeakStorageVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/WeakStorageVector.java index d9435f2bc4c..39aa932390c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/WeakStorageVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/WeakStorageVector.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -29,9 +30,9 @@ public class WeakStorageVector extends Pointer { public boolean empty() { return size() == 0; } public native long size(); - public WeakStorage front() { return get(0); } - public WeakStorage back() { return get(size() - 1); } - @Index(function = "at") public native @ByRef WeakStorage get(@Cast("size_t") long i); + public StorageImpl front() { return get(0); } + public StorageImpl back() { return get(size() - 1); } + @Index(function = "at") public native @IntrusivePtr("c10::StorageImpl") @Cast({"", "c10::intrusive_ptr&"}) StorageImpl get(@Cast("size_t") long i); public native @ByVal Iterator begin(); public native @ByVal Iterator end(); @@ -41,7 +42,7 @@ public Iterator() { } public native @Name("operator ++") @ByRef Iterator increment(); public native @Name("operator ==") boolean equals(@ByRef Iterator it); - public native @Name("operator *") @ByRef @Const WeakStorage get(); + public native @Name("operator *") @IntrusivePtr("c10::StorageImpl") @Cast({"", "c10::intrusive_ptr&"}) StorageImpl get(); } } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/WeakStorageVectorOptional.java b/pytorch/src/gen/java/org/bytedeco/pytorch/WeakStorageVectorOptional.java index 8212007d90c..8c25b1c2137 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/WeakStorageVectorOptional.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/WeakStorageVectorOptional.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,10 +13,12 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -@NoOffset @Name("c10::optional > >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@NoOffset @Name("std::optional > >") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class WeakStorageVectorOptional extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/WeakTypePtr.java b/pytorch/src/gen/java/org/bytedeco/pytorch/WeakTypePtr.java index f9ff9a07e07..18f99d0e15d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/WeakTypePtr.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/WeakTypePtr.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -23,11 +24,13 @@ // into a graph, if we used a strong pointer we would have a circular reference // from Object -> CompilationUnit and CompilationUnit -> Graph (which owns the // Constant Object) -@Namespace("c10") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +@Namespace("c10") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) public class WeakTypePtr extends Pointer { static { Loader.load(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ public WeakTypePtr(Pointer p) { super(p); } + + public native @WeakPtr("torch::jit::CompilationUnit") @ByRef CompilationUnit cu_(); public native WeakTypePtr cu_(CompilationUnit setter); public native @ByRef Type.TypePtr type_(); public native WeakTypePtr type_(Type.TypePtr setter); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/While.java b/pytorch/src/gen/java/org/bytedeco/pytorch/While.java index 4d061051f10..83380bb36ad 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/While.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/While.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,8 +25,8 @@ public class While extends Stmt { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public While(Pointer p) { super(p); } - public While(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public While(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @ByVal Expr cond(); public native @ByVal StmtList body(); public static native @ByVal While create( diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/With.java b/pytorch/src/gen/java/org/bytedeco/pytorch/With.java index ab1e12e56b2..aa142936bc6 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/With.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/With.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -26,8 +27,8 @@ public class With extends Stmt { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ public With(Pointer p) { super(p); } - public With(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public With(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @ByVal WithItemList targets(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/WithItem.java b/pytorch/src/gen/java/org/bytedeco/pytorch/WithItem.java index 99d5bf5787e..8b36c3a60da 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/WithItem.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/WithItem.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -25,8 +26,8 @@ public class WithItem extends Expr { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public WithItem(Pointer p) { super(p); } - public WithItem(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public WithItem(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @ByVal Expr target(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/WithItemList.java b/pytorch/src/gen/java/org/bytedeco/pytorch/WithItemList.java index d3e36a316a5..722d1614b08 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/WithItemList.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/WithItemList.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -25,8 +26,8 @@ public class WithItemList extends TreeView { public WithItemList(Pointer p) { super(p); } - public WithItemList(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); } - private native void allocate(@Const @ByRef TreeRef tree); + public WithItemList(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); } + private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree); public native @ByVal @Cast("torch::jit::List::iterator*") WithItemListIterator begin(); public native @ByVal @Cast("torch::jit::List::iterator*") WithItemListIterator end(); public native @Cast("bool") boolean empty(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/WithItemListIterator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/WithItemListIterator.java index c719c705c3e..adbc0ba69ad 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/WithItemListIterator.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/WithItemListIterator.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -24,8 +25,8 @@ public class WithItemListIterator extends Pointer { /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ public WithItemListIterator(Pointer p) { super(p); } - public WithItemListIterator(@ByVal @Cast("torch::jit::TreeList::const_iterator*") TreeRef it) { super((Pointer)null); allocate(it); } - private native void allocate(@ByVal @Cast("torch::jit::TreeList::const_iterator*") TreeRef it); + public WithItemListIterator(@ByVal @Cast("torch::jit::TreeList::const_iterator*") Tree it) { super((Pointer)null); allocate(it); } + private native void allocate(@ByVal @Cast("torch::jit::TreeList::const_iterator*") Tree it); public native @Cast("bool") @Name("operator !=") boolean notEquals(@Const @ByRef WithItemListIterator rhs); public native @Cast("bool") @Name("operator ==") boolean equals(@Const @ByRef WithItemListIterator rhs); public native @ByVal @Name("operator *") WithItem multiply(); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/Work.java b/pytorch/src/gen/java/org/bytedeco/pytorch/Work.java new file mode 100644 index 00000000000..53acc26e275 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/Work.java @@ -0,0 +1,117 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +// Please do not use Work API, it is going away, to be +// replaced by ivalue::Future. +// Python binding for this class might change, please do not assume +// this will be bound using pybind. +@Namespace("c10d") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class Work extends CustomClassHolder { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ + public Work(Pointer p) { super(p); } + + public Work( + int rank/*=-1*/, + OpType opType/*=c10d::OpType::UNKNOWN*/, + @Cast("const char*") BytePointer profilingTitle/*=nullptr*/, + @Const @ByRef(nullValue = "std::optional >(c10::nullopt)") TensorVectorOptional inputTensors) { super((Pointer)null); allocate(rank, opType, profilingTitle, inputTensors); } + @IntrusivePtr @Name("c10::make_intrusive") private native void allocate( + int rank/*=-1*/, + OpType opType/*=c10d::OpType::UNKNOWN*/, + @Cast("const char*") BytePointer profilingTitle/*=nullptr*/, + @Const @ByRef(nullValue = "std::optional >(c10::nullopt)") TensorVectorOptional inputTensors); + public Work() { super((Pointer)null); allocate(); } + @IntrusivePtr @Name("c10::make_intrusive") private native void allocate(); + public Work( + int rank/*=-1*/, + @Cast("c10d::OpType") byte opType/*=c10d::OpType::UNKNOWN*/, + String profilingTitle/*=nullptr*/, + @Const @ByRef(nullValue = "std::optional >(c10::nullopt)") TensorVectorOptional inputTensors) { super((Pointer)null); allocate(rank, opType, profilingTitle, inputTensors); } + @IntrusivePtr @Name("c10::make_intrusive") private native void allocate( + int rank/*=-1*/, + @Cast("c10d::OpType") byte opType/*=c10d::OpType::UNKNOWN*/, + String profilingTitle/*=nullptr*/, + @Const @ByRef(nullValue = "std::optional >(c10::nullopt)") TensorVectorOptional inputTensors); + + // Checks if request has completed. Non-blocking operation. + public native @Cast("bool") boolean isCompleted(); + + // Returns if the work completed successfully. + // If false, the exception function can be called to get details. + public native @Cast("bool") boolean isSuccess(); + + // Returns exception if isSuccess() returned false. + public native @ByVal @Cast("std::exception_ptr*") Pointer exception(); + + // Returns source rank if this objects represents a recv-from-any. + public native int sourceRank(); + + // Returns result tensors, if applicable. + // If work is not supposed to have result, we return empty list. + public native @ByVal TensorVector result(); + + // Ensures that operations on the output tensors that are invoked + // after this function returns are correctly sequenced after the + // asynchronous completion of this work. + // + // For CUDA tensors, it inserts stream synchronization such that + // the streams of the caller wait for completion of the + // asynchronous operations on the destination tensors. + // + // For CPU tensors, it is currently a nop. + // + // This function should only be used if the caller polls for + // completion through the `isCompleted` function, it has returned + // true, and the `isSuccess` function also has returned true. + // + public native void synchronize(); + + // Waits until request completes. Blocking operation. + // Throws if the work completed with an exception. + // Returns false if the work is aborted. + // Otherwise, it always returns true, indicating the work is completed. + // + // Functionally equivalent to: + // + // while (!isCompleted()) { /* nop */ } + // auto success = isSuccess(); + // if (!success) { std::rethrow_exception(exception()); } + // return success; + // + public native @Cast("bool") @Name("wait") boolean _wait(@ByVal(nullValue = "std::chrono::milliseconds(kNoTimeout)") Milliseconds timeout); + public native @Cast("bool") @Name("wait") boolean _wait(); + + public native void abort(); + + // Returns a Future object that will be associated with the completion of + // work. Only NCCL backend is currently supported. 
+ public native @IntrusivePtr("c10::ivalue::Future") @Cast({"", "c10::intrusive_ptr&"}) Future getFuture(); + + public native float getDuration(); + + public native @Cast("uint64_t") long getSequencenumber(); + + public native OpType retrieveOpType(); + + public static native @IntrusivePtr("c10d::Work") @Cast({"", "c10::intrusive_ptr&"}) Work create_from_future( + @IntrusivePtr("c10::ivalue::Future") @Cast({"", "c10::intrusive_ptr&"}) Future arg0); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/WorkInfo.java b/pytorch/src/gen/java/org/bytedeco/pytorch/WorkInfo.java new file mode 100644 index 00000000000..d4b28652b89 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/WorkInfo.java @@ -0,0 +1,58 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch; + +import org.bytedeco.pytorch.Allocator; +import org.bytedeco.pytorch.Function; +import org.bytedeco.pytorch.Module; +import org.bytedeco.javacpp.annotation.Cast; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; + +import static org.bytedeco.pytorch.global.torch.*; + + +@Namespace("c10d") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class WorkInfo extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public WorkInfo(Pointer p) { super(p); } + + public WorkInfo( + OpType opType, + @Cast("const uint64_t") long seq, + @Const @ByRef SystemTime timeStarted, + @Const @ByRef SystemTime timeFinished, + @Const @ByRef SecondsFloat activeDuration) { super((Pointer)null); allocate(opType, seq, timeStarted, timeFinished, activeDuration); } + @SharedPtr @Name("std::make_shared") private native void allocate( + OpType opType, + @Cast("const uint64_t") long seq, + @Const @ByRef SystemTime timeStarted, + @Const @ByRef SystemTime timeFinished, + @Const @ByRef SecondsFloat activeDuration); + public WorkInfo( + @Cast("c10d::OpType") byte opType, + @Cast("const uint64_t") long seq, + @Const @ByRef SystemTime timeStarted, + @Const @ByRef SystemTime timeFinished, + @Const @ByRef SecondsFloat activeDuration) { super((Pointer)null); allocate(opType, seq, timeStarted, timeFinished, activeDuration); } + @SharedPtr @Name("std::make_shared") private native void allocate( + @Cast("c10d::OpType") byte opType, + @Cast("const uint64_t") long seq, + @Const @ByRef SystemTime timeStarted, + @Const @ByRef SystemTime timeFinished, + @Const @ByRef SecondsFloat activeDuration); + + public native OpType opType(); public native WorkInfo opType(OpType setter); + public native @Cast("uint64_t") long seq(); public native WorkInfo seq(long setter); + public native @ByRef SystemTime timeStarted(); public native WorkInfo timeStarted(SystemTime setter); + public native @ByRef SystemTime timeFinished(); public native WorkInfo timeFinished(SystemTime setter); + public native @ByRef SecondsFloat activeDuration(); public native WorkInfo activeDuration(SecondsFloat setter); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/WriteableTensorData.java b/pytorch/src/gen/java/org/bytedeco/pytorch/WriteableTensorData.java index 49e11e88a8b..9ae11fb95b7 100644 --- 
a/pytorch/src/gen/java/org/bytedeco/pytorch/WriteableTensorData.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/WriteableTensorData.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/XPUHooksArgs.java b/pytorch/src/gen/java/org/bytedeco/pytorch/XPUHooksArgs.java index d530dff7141..ba6ebe85014 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/XPUHooksArgs.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/XPUHooksArgs.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/XPUHooksInterface.java b/pytorch/src/gen/java/org/bytedeco/pytorch/XPUHooksInterface.java index 1fcb4111d48..3bd4380ec6f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/XPUHooksInterface.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/XPUHooksInterface.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -58,4 +59,8 @@ public class XPUHooksInterface extends Pointer { public native @ByVal Device getDeviceFromPtr(Pointer arg0); public native void deviceSynchronize(@Cast("c10::DeviceIndex") byte arg0); + + public native Allocator getPinnedMemoryAllocator(); + + public native @Cast("bool") boolean isPinnedPtr(@Const Pointer arg0); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad1dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad1dImpl.java index aa149e13097..2e45f9eca72 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad1dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad1dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; 
+import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad1dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad1dImplBase.java index be9f231d9b1..315ed2cc76d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad1dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad1dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad1dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad1dImplCloneable.java index e3cc2eea0c2..7426e4c5809 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad1dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad1dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class ZeroPad1dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. 
*/ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad1dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad1dOptions.java index c4f4943e7ed..3f366319415 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad1dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad1dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad2dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad2dImpl.java index def78305b92..f2a9c91ff80 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad2dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad2dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad2dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad2dImplBase.java index bb513e4baf7..33877f6e22e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad2dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad2dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad2dImplCloneable.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad2dImplCloneable.java index a9d0b15cdcf..bdc8728969a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad2dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad2dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ 
-14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class ZeroPad2dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad2dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad2dOptions.java index c68363f816b..f70933bc9fe 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad2dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad2dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad3dImpl.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad3dImpl.java index 2840fc1bec9..216b88a264e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad3dImpl.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad3dImpl.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad3dImplBase.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad3dImplBase.java index 0b1d5bac38c..0d9e80a03e0 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad3dImplBase.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad3dImplBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad3dImplCloneable.java 
b/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad3dImplCloneable.java index ef5c990d605..33974e55958 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad3dImplCloneable.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad3dImplCloneable.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; @@ -32,6 +33,6 @@ public class ZeroPad3dImplCloneable extends Module { * and submodules in the cloned module are different from those in the * original module. */ public native @SharedPtr("torch::nn::Module") @ByVal Module clone( - @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device); + @Const @ByRef(nullValue = "std::optional(c10::nullopt)") DeviceOptional device); public native @SharedPtr("torch::nn::Module") @ByVal Module clone(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad3dOptions.java b/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad3dOptions.java index ec322266026..5e6244cfd1c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad3dOptions.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/ZeroPad3dOptions.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/DistError.java b/pytorch/src/gen/java/org/bytedeco/pytorch/_SupplementBase.java similarity index 55% rename from pytorch/src/gen/java/org/bytedeco/pytorch/DistError.java rename to pytorch/src/gen/java/org/bytedeco/pytorch/_SupplementBase.java index 109c90321b8..dc38ee9c15e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/DistError.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/_SupplementBase.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,16 +13,20 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; -// Base error type for all distributed errors. -// These turn into DistError when they cross into Python. 
-@Namespace("c10") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) -public class DistError extends Error { +// Base class for supplementary data potentially needed by ReduceOps +@Namespace("c10d") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class) +public class _SupplementBase extends CustomClassHolder { static { Loader.load(); } + /** Default native constructor. */ + public _SupplementBase() { super((Pointer)null); allocate(); } /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ - public DistError(Pointer p) { super(p); } + public _SupplementBase(Pointer p) { super(p); } + @IntrusivePtr @Name("c10::make_intrusive") private native void allocate(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/attribute_iterator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/attribute_iterator.java index 51a3904102f..f5d6b2f4caf 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/attribute_iterator.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/attribute_iterator.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/attribute_list.java b/pytorch/src/gen/java/org/bytedeco/pytorch/attribute_list.java index 03973a835b7..2ba7f5ef2bf 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/attribute_list.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/attribute_list.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/bits16.java b/pytorch/src/gen/java/org/bytedeco/pytorch/bits16.java index f893d4ca8e6..7cf71ad34b6 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/bits16.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/bits16.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/bits1x8.java b/pytorch/src/gen/java/org/bytedeco/pytorch/bits1x8.java index 9f0c9812b66..abdc6b9d98b 100644 --- 
a/pytorch/src/gen/java/org/bytedeco/pytorch/bits1x8.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/bits1x8.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/bits2x4.java b/pytorch/src/gen/java/org/bytedeco/pytorch/bits2x4.java index ba5621fe38d..7db9587b0b4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/bits2x4.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/bits2x4.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/bits4x2.java b/pytorch/src/gen/java/org/bytedeco/pytorch/bits4x2.java index 94fb2680d16..398bbb9fa01 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/bits4x2.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/bits4x2.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/bits8.java b/pytorch/src/gen/java/org/bytedeco/pytorch/bits8.java index 2a9771662da..b6374496722 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/bits8.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/bits8.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/bitset.java b/pytorch/src/gen/java/org/bytedeco/pytorch/bitset.java index 3e985663e18..77e528c9771 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/bitset.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/bitset.java @@ -4,7 +4,6 @@ import 
org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/buffer_iterator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/buffer_iterator.java index 44ba87ccdf1..fd7e90bac37 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/buffer_iterator.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/buffer_iterator.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/buffer_list.java b/pytorch/src/gen/java/org/bytedeco/pytorch/buffer_list.java index 39f9f472621..ec554cd2bc4 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/buffer_list.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/buffer_list.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/crc64_t.java b/pytorch/src/gen/java/org/bytedeco/pytorch/crc64_t.java index b3bde3b819f..7edd613b679 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/crc64_t.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/crc64_t.java @@ -4,7 +4,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -14,6 +13,8 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import static org.bytedeco.pytorch.global.torch.*; // namespace detail diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/AOTIModelContainerRunnerCuda.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/AOTIModelContainerRunnerCuda.java index b99c61cbf15..f12c0999d82 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/AOTIModelContainerRunnerCuda.java +++ 
b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/AOTIModelContainerRunnerCuda.java @@ -2,12 +2,6 @@ package org.bytedeco.pytorch.cuda; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,8 +10,22 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; import static org.bytedeco.pytorch.global.torch_cuda.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/ActivationDescriptor.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/ActivationDescriptor.java index b2592f949a2..cf74111a7c8 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/ActivationDescriptor.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/ActivationDescriptor.java @@ -2,12 +2,6 @@ package org.bytedeco.pytorch.cuda; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,8 +10,22 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; import static org.bytedeco.pytorch.global.torch_cuda.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/AllocatorConfigInfo.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/AllocatorConfigInfo.java index e1cd6e5c4c7..7deffcd5883 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/AllocatorConfigInfo.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/AllocatorConfigInfo.java @@ -2,12 +2,6 @@ package 
org.bytedeco.pytorch.cuda; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,8 +10,22 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; import static org.bytedeco.pytorch.global.torch_cuda.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/AllocatorState.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/AllocatorState.java index 3268346800b..daf8762f05a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/AllocatorState.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/AllocatorState.java @@ -2,12 +2,6 @@ package org.bytedeco.pytorch.cuda; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,8 +10,22 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; import static org.bytedeco.pytorch.global.torch_cuda.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/BlockInfo.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/BlockInfo.java index a7096104e7b..074dd23aeff 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/BlockInfo.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/BlockInfo.java @@ -2,12 +2,6 @@ package org.bytedeco.pytorch.cuda; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import 
org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,8 +10,22 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; import static org.bytedeco.pytorch.global.torch_cuda.*; @@ -42,8 +50,8 @@ public class BlockInfo extends Pointer { return new BlockInfo((Pointer)this).offsetAddress(i); } - public native @Cast("int64_t") long size(); public native BlockInfo size(long setter); - public native @Cast("int64_t") long requested_size(); public native BlockInfo requested_size(long setter); + public native @Cast("size_t") long size(); public native BlockInfo size(long setter); + public native @Cast("size_t") long requested_size(); public native BlockInfo requested_size(long setter); public native int gc_counter(); public native BlockInfo gc_counter(int setter); public native @Cast("bool") boolean allocated(); public native BlockInfo allocated(boolean setter); public native @Cast("bool") boolean active(); public native BlockInfo active(boolean setter); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CTCLossDescriptor.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CTCLossDescriptor.java index 2d72e4c6270..c255409744c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CTCLossDescriptor.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CTCLossDescriptor.java @@ -2,12 +2,6 @@ package org.bytedeco.pytorch.cuda; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,8 +10,22 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import 
org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; import static org.bytedeco.pytorch.global.torch_cuda.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAAllocator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAAllocator.java index 845775aa5dc..be1814cdf8a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAAllocator.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAAllocator.java @@ -2,12 +2,6 @@ package org.bytedeco.pytorch.cuda; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,8 +10,22 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; import static org.bytedeco.pytorch.global.torch_cuda.*; @@ -29,7 +37,7 @@ public class CUDAAllocator extends Allocator { public CUDAAllocator(Pointer p) { super(p); } public native Pointer raw_alloc(@Cast("size_t") long nbytes); - public native Pointer raw_alloc_with_stream(@Cast("size_t") long nbytes, @Cast("cudaStream_t") Pointer stream); + public native Pointer raw_alloc_with_stream(@Cast("size_t") long nbytes, CUstream_st stream); public native void raw_delete(Pointer ptr); public native void init(int device_count); public native @Cast("bool") boolean initialized(); @@ -68,7 +76,7 @@ public native void recordHistory( @ByVal @Cast("c10::cuda::CUDACachingAllocator::CreateContextFn*") Pointer context_recorder, @Cast("size_t") long alloc_trace_max_entries, @Cast("c10::cuda::CUDACachingAllocator::RecordContext") int when); - public native void attachOutOfMemoryObserver(@ByVal OutOfMemoryObserver observer); + public native void attachOutOfMemoryObserver(@ByVal @Cast("c10::cuda::CUDACachingAllocator::OutOfMemoryObserver*") AllocatorTraceTracker observer); // Attached AllocatorTraceTracker callbacks will be called while the // per-device allocator lock is held. 
Any additional locks taken from within @@ -99,7 +107,7 @@ public native void enablePeerAccess( @Const Pointer src, int srcDevice, @Cast("size_t") long count, - @Cast("cudaStream_t") Pointer stream, + CUstream_st stream, @Cast("bool") boolean p2p_enabled); public native @SharedPtr("c10::cuda::CUDACachingAllocator::AllocatorState") @ByVal AllocatorState getCheckpointState( byte device, diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAError.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAError.java deleted file mode 100644 index 7fedcd5b6f9..00000000000 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAError.java +++ /dev/null @@ -1,41 +0,0 @@ -// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE - -package org.bytedeco.pytorch.cuda; - -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; -import org.bytedeco.pytorch.Allocator; -import java.nio.*; -import org.bytedeco.javacpp.*; -import org.bytedeco.javacpp.annotation.*; - -import static org.bytedeco.javacpp.presets.javacpp.*; -import static org.bytedeco.openblas.global.openblas_nolapack.*; -import static org.bytedeco.openblas.global.openblas.*; -import org.bytedeco.pytorch.*; -import static org.bytedeco.pytorch.global.torch.*; - -import static org.bytedeco.pytorch.global.torch_cuda.*; - - -// Note [CHECK macro] -// ~~~~~~~~~~~~~~~~~~ -// This is a macro so that AT_ERROR can get accurate __LINE__ -// and __FILE__ information. We could split this into a short -// macro and a function implementation if we pass along __LINE__ -// and __FILE__, but no one has found this worth doing. - -// Used to denote errors from CUDA framework. -// This needs to be declared here instead util/Exception.h for proper conversion -// during hipify. -@Namespace("c10") @Properties(inherit = org.bytedeco.pytorch.presets.torch_cuda.class) -public class CUDAError extends Error { - static { Loader.load(); } - /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. 
*/ - public CUDAError(Pointer p) { super(p); } - -} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAEvent.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAEvent.java new file mode 100644 index 00000000000..97ce0c8b99e --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAEvent.java @@ -0,0 +1,105 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch.cuda; + +import org.bytedeco.pytorch.Allocator; +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; +import org.bytedeco.pytorch.*; +import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; + +import static org.bytedeco.pytorch.global.torch_cuda.*; + + +/* +* CUDAEvents are movable not copyable wrappers around CUDA's events. +* +* CUDAEvents are constructed lazily when first recorded unless it is +* reconstructed from a cudaIpcEventHandle_t. The event has a device, and this +* device is acquired from the first recording stream. However, if reconstructed +* from a handle, the device should be explicitly specified; or if ipc_handle() is +* called before the event is ever recorded, it will use the current device. +* Later streams that record the event must match this device. +*/ +@Namespace("at::cuda") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch_cuda.class) +public class CUDAEvent extends Pointer { + static { Loader.load(); } + /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ + public CUDAEvent(Pointer p) { super(p); } + + // Constructors + // Default value for `flags` is specified below - it's cudaEventDisableTiming + public CUDAEvent() { super((Pointer)null); allocate(); } + @NoException(true) private native void allocate(); + public CUDAEvent(@Cast("unsigned int") int flags) { super((Pointer)null); allocate(flags); } + @NoException(true) private native void allocate(@Cast("unsigned int") int flags); + + public CUDAEvent( + byte device_index, @Const cudaIpcEventHandle_t handle) { super((Pointer)null); allocate(device_index, handle); } + private native void allocate( + byte device_index, @Const cudaIpcEventHandle_t handle); + + // Note: event destruction done on creating device to avoid creating a + // CUDA context on other devices. 
+ + + + + public CUDAEvent(@ByRef(true) CUDAEvent other) { super((Pointer)null); allocate(other); } + @NoException(true) private native void allocate(@ByRef(true) CUDAEvent other); + public native @ByRef @Name("operator =") @NoException(true) CUDAEvent put(@ByRef(true) CUDAEvent other); + + public native @Name("operator cudaEvent_t") CUevent_st asCUevent_st(); + + // Less than operator (to allow use in sets) + private static native @Namespace @Cast("bool") @Name("operator <") boolean lessThan(@Const @ByRef CUDAEvent left, @Const @ByRef CUDAEvent right); + public boolean lessThan(CUDAEvent right) { return lessThan(this, right); } + + public native @ByVal DeviceOptional device(); + + public native @Cast("bool") boolean isCreated(); + public native byte device_index(); + public native CUevent_st event(); + + // Note: cudaEventQuery can be safely called from any device + public native @Cast("bool") boolean query(); + + public native void record(); + + public native void recordOnce(@Const @ByRef CUDAStream stream); + + // Note: cudaEventRecord must be called on the same device as the event. + public native void record(@Const @ByRef CUDAStream stream); + + // Note: cudaStreamWaitEvent must be called on the same device as the stream. + // The event has no actual GPU resources associated with it. + public native void block(@Const @ByRef CUDAStream stream); + + // Note: cudaEventElapsedTime can be safely called from any device + public native float elapsed_time(@Const @ByRef CUDAEvent other); + + // Note: cudaEventSynchronize can be safely called from any device + public native void synchronize(); + + // Note: cudaIpcGetEventHandle must be called on the same device as the event + public native void ipc_handle(cudaIpcEventHandle_t handle); +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAGuard.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAGuard.java index 2846c39ed25..cb213f049fb 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAGuard.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAGuard.java @@ -2,12 +2,6 @@ package org.bytedeco.pytorch.cuda; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,8 +10,22 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; import static org.bytedeco.pytorch.global.torch_cuda.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAKernelLaunchInfo.java 
b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAKernelLaunchInfo.java index e0453629f9f..01b0507949e 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAKernelLaunchInfo.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAKernelLaunchInfo.java @@ -2,12 +2,6 @@ package org.bytedeco.pytorch.cuda; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,8 +10,22 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; import static org.bytedeco.pytorch.global.torch_cuda.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAKernelLaunchInfoVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAKernelLaunchInfoVector.java index 2f3078807ab..a27973cd1ef 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAKernelLaunchInfoVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAKernelLaunchInfoVector.java @@ -2,12 +2,6 @@ package org.bytedeco.pytorch.cuda; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,8 +10,22 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; import static org.bytedeco.pytorch.global.torch_cuda.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAKernelLaunchRegistry.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAKernelLaunchRegistry.java index 
2940f07efb4..9644a6d6d89 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAKernelLaunchRegistry.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAKernelLaunchRegistry.java @@ -2,12 +2,6 @@ package org.bytedeco.pytorch.cuda; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,8 +10,22 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; import static org.bytedeco.pytorch.global.torch_cuda.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAMultiStreamGuard.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAMultiStreamGuard.java index fb7802e2001..c8d9029e009 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAMultiStreamGuard.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAMultiStreamGuard.java @@ -2,12 +2,6 @@ package org.bytedeco.pytorch.cuda; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,8 +10,22 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; import static org.bytedeco.pytorch.global.torch_cuda.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAStream.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAStream.java index fa2a41b8961..0efb3921e79 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAStream.java +++ 
b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAStream.java @@ -2,12 +2,6 @@ package org.bytedeco.pytorch.cuda; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,8 +10,22 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; import static org.bytedeco.pytorch.global.torch_cuda.*; @@ -59,7 +67,7 @@ public enum Unchecked { UNCHECKED(0); public native @Cast("bool") @Name("operator !=") @NoException(true) boolean notEquals(@Const @ByRef CUDAStream other); /** Implicit conversion to cudaStream_t. */ - public native @Cast("cudaStream_t") @Name("operator cudaStream_t") Pointer asPointer(); + public native @Name("operator cudaStream_t") CUstream_st asCUstream_st(); /** Implicit conversion to Stream (a.k.a., forget that the stream is a * CUDA stream). */ @@ -85,7 +93,7 @@ public enum Unchecked { UNCHECKED(0); public native int priority(); /** Explicit conversion to cudaStream_t. */ - public native @Cast("cudaStream_t") Pointer stream(); + public native CUstream_st stream(); /** Explicit conversion to Stream. 
*/ diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAStreamArrayRef.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAStreamArrayRef.java index d3bb0f715da..2611b410408 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAStreamArrayRef.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAStreamArrayRef.java @@ -2,12 +2,6 @@ package org.bytedeco.pytorch.cuda; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,8 +10,22 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; import static org.bytedeco.pytorch.global.torch_cuda.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAStreamCaptureModeGuard.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAStreamCaptureModeGuard.java index 6fa1734fe63..d15ee73db62 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAStreamCaptureModeGuard.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAStreamCaptureModeGuard.java @@ -2,12 +2,6 @@ package org.bytedeco.pytorch.cuda; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,15 +10,28 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; import static org.bytedeco.pytorch.global.torch_cuda.*; // RAII guard for "cudaStreamCaptureMode", a thread-local value // that controls the 
error-checking strictness of a capture. -// #if !defined(USE_ROCM) || ROCM_VERSION >= 50300 @Namespace("c10::cuda") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch_cuda.class) public class CUDAStreamCaptureModeGuard extends Pointer { static { Loader.load(); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAStreamGuard.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAStreamGuard.java index aedc694a0e6..560ef371efc 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAStreamGuard.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CUDAStreamGuard.java @@ -2,12 +2,6 @@ package org.bytedeco.pytorch.cuda; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,8 +10,22 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; import static org.bytedeco.pytorch.global.torch_cuda.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CheckpointDelta.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CheckpointDelta.java index f10146d51dc..d10faf96785 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CheckpointDelta.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CheckpointDelta.java @@ -2,12 +2,6 @@ package org.bytedeco.pytorch.cuda; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,8 +10,22 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import 
org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; import static org.bytedeco.pytorch.global.torch_cuda.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/Constant.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/Constant.java index 9f918a6c310..58c32a1e1cd 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/Constant.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/Constant.java @@ -2,12 +2,6 @@ package org.bytedeco.pytorch.cuda; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,8 +10,22 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; import static org.bytedeco.pytorch.global.torch_cuda.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/ConvolutionDescriptor.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/ConvolutionDescriptor.java index bb53b77d57c..a5bbb487d47 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/ConvolutionDescriptor.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/ConvolutionDescriptor.java @@ -2,12 +2,6 @@ package org.bytedeco.pytorch.cuda; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,8 +10,22 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; import static org.bytedeco.pytorch.global.torch_cuda.*; diff --git 
a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CuDNNError.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CuDNNError.java deleted file mode 100644 index b8e555e4dc3..00000000000 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/CuDNNError.java +++ /dev/null @@ -1,31 +0,0 @@ -// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE - -package org.bytedeco.pytorch.cuda; - -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; -import org.bytedeco.pytorch.Allocator; -import java.nio.*; -import org.bytedeco.javacpp.*; -import org.bytedeco.javacpp.annotation.*; - -import static org.bytedeco.javacpp.presets.javacpp.*; -import static org.bytedeco.openblas.global.openblas_nolapack.*; -import static org.bytedeco.openblas.global.openblas.*; -import org.bytedeco.pytorch.*; -import static org.bytedeco.pytorch.global.torch.*; - -import static org.bytedeco.pytorch.global.torch_cuda.*; - - -@Namespace("c10") @Properties(inherit = org.bytedeco.pytorch.presets.torch_cuda.class) -public class CuDNNError extends Error { - static { Loader.load(); } - /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */ - public CuDNNError(Pointer p) { super(p); } - -} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/DeviceAssertionData.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/DeviceAssertionData.java index 13d813e1177..8d458cd92ff 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/DeviceAssertionData.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/DeviceAssertionData.java @@ -2,12 +2,6 @@ package org.bytedeco.pytorch.cuda; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,8 +10,22 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; import static org.bytedeco.pytorch.global.torch_cuda.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/DeviceAssertionsData.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/DeviceAssertionsData.java index c7a9ee2b638..e6aa31ca305 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/DeviceAssertionsData.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/DeviceAssertionsData.java @@ -2,12 +2,6 @@ package org.bytedeco.pytorch.cuda; -import 
org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,8 +10,22 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; import static org.bytedeco.pytorch.global.torch_cuda.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/DeviceAssertionsDataVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/DeviceAssertionsDataVector.java index 87487330427..256b6db047f 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/DeviceAssertionsDataVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/DeviceAssertionsDataVector.java @@ -2,12 +2,6 @@ package org.bytedeco.pytorch.cuda; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,8 +10,22 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; import static org.bytedeco.pytorch.global.torch_cuda.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/DeviceAssertionsDataVectorCUDAKernelLaunchInfoVectorPair.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/DeviceAssertionsDataVectorCUDAKernelLaunchInfoVectorPair.java index efefce665ca..adc662c2c44 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/DeviceAssertionsDataVectorCUDAKernelLaunchInfoVectorPair.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/DeviceAssertionsDataVectorCUDAKernelLaunchInfoVectorPair.java @@ -2,12 
+2,6 @@ package org.bytedeco.pytorch.cuda; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,8 +10,22 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; import static org.bytedeco.pytorch.global.torch_cuda.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/DeviceStats.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/DeviceStats.java index ab61f5fa58c..5f123c0051c 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/DeviceStats.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/DeviceStats.java @@ -2,12 +2,6 @@ package org.bytedeco.pytorch.cuda; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,8 +10,22 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; import static org.bytedeco.pytorch.global.torch_cuda.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/DropoutDescriptor.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/DropoutDescriptor.java index fe5ce0803ca..c46177439bd 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/DropoutDescriptor.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/DropoutDescriptor.java @@ -2,12 +2,6 @@ package org.bytedeco.pytorch.cuda; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; 
-import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,8 +10,22 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; import static org.bytedeco.pytorch.global.torch_cuda.*; @@ -44,11 +52,11 @@ public class DropoutDescriptor extends Pointer { // Initialize a dropout descriptor's RNG state. // WARNING: This function is very expensive, avoid calling this function! - public native void initialize_rng(@Cast("cudnnHandle_t") Pointer handle, float dropout, long seed, @Const @ByRef TensorOptions options); + public native void initialize_rng(cudnnContext handle, float dropout, long seed, @Const @ByRef TensorOptions options); // Restore a dropout descriptor given a dropout probability and existing RNG state. - public native void set(@Cast("cudnnHandle_t") Pointer handle, float dropout, @ByVal Tensor state_); + public native void set(cudnnContext handle, float dropout, @ByVal Tensor state_); // Restore a dropout descriptor corresponding to no dropout - public native void set_no_dropout(@Cast("cudnnHandle_t") Pointer handle); + public native void set_no_dropout(cudnnContext handle); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/FilterDescriptor.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/FilterDescriptor.java index 11efaba7ffb..4802349471a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/FilterDescriptor.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/FilterDescriptor.java @@ -2,12 +2,6 @@ package org.bytedeco.pytorch.cuda; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,8 +10,22 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static 
org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; import static org.bytedeco.pytorch.global.torch_cuda.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/PointerSet.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/PointerSet.java index 3594b4e13fc..fe9022141b5 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/PointerSet.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/PointerSet.java @@ -2,12 +2,6 @@ package org.bytedeco.pytorch.cuda; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,8 +10,22 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; import static org.bytedeco.pytorch.global.torch_cuda.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/RNNDataDescriptor.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/RNNDataDescriptor.java index a42bc32c9e9..df9e95a9335 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/RNNDataDescriptor.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/RNNDataDescriptor.java @@ -2,12 +2,6 @@ package org.bytedeco.pytorch.cuda; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,8 +10,22 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import org.bytedeco.cuda.cupti.*; 
+import static org.bytedeco.cuda.global.cupti.*; import static org.bytedeco.pytorch.global.torch_cuda.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/RNNDescriptor.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/RNNDescriptor.java index 74596dae641..7c698b329d2 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/RNNDescriptor.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/RNNDescriptor.java @@ -2,12 +2,6 @@ package org.bytedeco.pytorch.cuda; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,8 +10,22 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; import static org.bytedeco.pytorch.global.torch_cuda.*; @@ -41,7 +49,7 @@ public class RNNDescriptor extends Pointer { } - public native void set(@Cast("cudnnHandle_t") Pointer handle, + public native void set(cudnnContext handle, int input_size, @Cast("bool") boolean packed, int hidden_size, int proj_size, int num_layers, @ByRef(true) DropoutDescriptor dropout_desc, diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/SegmentInfo.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/SegmentInfo.java index deac68b1f95..b006451397d 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/SegmentInfo.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/SegmentInfo.java @@ -2,12 +2,6 @@ package org.bytedeco.pytorch.cuda; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,8 +10,22 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static 
org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; import static org.bytedeco.pytorch.global.torch_cuda.*; @@ -42,12 +50,12 @@ public class SegmentInfo extends Pointer { } public native byte device(); public native SegmentInfo device(byte setter); - public native @Cast("int64_t") @Name("address") long _address(); public native SegmentInfo _address(long setter); - public native @Cast("int64_t") long total_size(); public native SegmentInfo total_size(long setter); - public native @Cast("int64_t") long requested_size(); public native SegmentInfo requested_size(long setter); // unrounded, actually requested size - public native @Cast("int64_t") long allocated_size(); public native SegmentInfo allocated_size(long setter); - public native @Cast("int64_t") long active_size(); public native SegmentInfo active_size(long setter); - public native @Cast("cudaStream_t") Pointer stream(); public native SegmentInfo stream(Pointer setter); + public native @Cast("size_t") @Name("address") long _address(); public native SegmentInfo _address(long setter); + public native @Cast("size_t") long total_size(); public native SegmentInfo total_size(long setter); + public native @Cast("size_t") long requested_size(); public native SegmentInfo requested_size(long setter); // unrounded, actually requested size + public native @Cast("size_t") long allocated_size(); public native SegmentInfo allocated_size(long setter); + public native @Cast("size_t") long active_size(); public native SegmentInfo active_size(long setter); + public native CUstream_st stream(); public native SegmentInfo stream(CUstream_st setter); public native @Cast("bool") boolean is_large(); public native SegmentInfo is_large(boolean setter); public native @Cast("bool") boolean is_expandable(); public native SegmentInfo is_expandable(boolean setter); public native @ByRef @Cast("c10::cuda::MempoolId_t*") DeviceAssertionsDataVectorCUDAKernelLaunchInfoVectorPair owner_private_pool_id(); public native SegmentInfo owner_private_pool_id(DeviceAssertionsDataVectorCUDAKernelLaunchInfoVectorPair setter); diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/SnapshotInfo.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/SnapshotInfo.java index bdb993045d9..17274d99d4a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/SnapshotInfo.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/SnapshotInfo.java @@ -2,12 +2,6 @@ package org.bytedeco.pytorch.cuda; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,8 +10,22 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import 
static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; import static org.bytedeco.pytorch.global.torch_cuda.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/SpatialTransformerDescriptor.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/SpatialTransformerDescriptor.java index 40cbeffa0ac..bf4d9a5c366 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/SpatialTransformerDescriptor.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/SpatialTransformerDescriptor.java @@ -2,12 +2,6 @@ package org.bytedeco.pytorch.cuda; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,8 +10,22 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; import static org.bytedeco.pytorch.global.torch_cuda.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/Stat.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/Stat.java index aae7a155fa7..0d9899e69de 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/Stat.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/Stat.java @@ -2,12 +2,6 @@ package org.bytedeco.pytorch.cuda; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,8 +10,22 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static org.bytedeco.cuda.global.cusparse.*; +import 
org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; import static org.bytedeco.pytorch.global.torch_cuda.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/TensorDescriptor.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/TensorDescriptor.java index c1b7a56ab77..552bc584052 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/TensorDescriptor.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/TensorDescriptor.java @@ -2,12 +2,6 @@ package org.bytedeco.pytorch.cuda; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,8 +10,22 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; import static org.bytedeco.pytorch.global.torch_cuda.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/TraceEntry.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/TraceEntry.java index c1c219d3134..851a8d386cf 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/TraceEntry.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/TraceEntry.java @@ -2,12 +2,6 @@ package org.bytedeco.pytorch.cuda; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,8 +10,22 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; import 
static org.bytedeco.pytorch.global.torch_cuda.*; @@ -53,68 +61,68 @@ public enum Action { public TraceEntry( Action action, byte device, - @Cast("int64_t") long addr, + @Cast("size_t") long addr, @Cast("size_t") long size, - @Cast("cudaStream_t") Pointer stream, + CUstream_st stream, @Cast("c10::approx_time_t") long time, @SharedPtr GatheredContext context/*=nullptr*/) { super((Pointer)null); allocate(action, device, addr, size, stream, time, context); } private native void allocate( Action action, byte device, - @Cast("int64_t") long addr, + @Cast("size_t") long addr, @Cast("size_t") long size, - @Cast("cudaStream_t") Pointer stream, + CUstream_st stream, @Cast("c10::approx_time_t") long time, @SharedPtr GatheredContext context/*=nullptr*/); public TraceEntry( Action action, byte device, - @Cast("int64_t") long addr, + @Cast("size_t") long addr, @Cast("size_t") long size, - @Cast("cudaStream_t") Pointer stream, + CUstream_st stream, @Cast("c10::approx_time_t") long time) { super((Pointer)null); allocate(action, device, addr, size, stream, time); } private native void allocate( Action action, byte device, - @Cast("int64_t") long addr, + @Cast("size_t") long addr, @Cast("size_t") long size, - @Cast("cudaStream_t") Pointer stream, + CUstream_st stream, @Cast("c10::approx_time_t") long time); public TraceEntry( @Cast("c10::cuda::CUDACachingAllocator::TraceEntry::Action") int action, byte device, - @Cast("int64_t") long addr, + @Cast("size_t") long addr, @Cast("size_t") long size, - @Cast("cudaStream_t") Pointer stream, + CUstream_st stream, @Cast("c10::approx_time_t") long time, @SharedPtr GatheredContext context/*=nullptr*/) { super((Pointer)null); allocate(action, device, addr, size, stream, time, context); } private native void allocate( @Cast("c10::cuda::CUDACachingAllocator::TraceEntry::Action") int action, byte device, - @Cast("int64_t") long addr, + @Cast("size_t") long addr, @Cast("size_t") long size, - @Cast("cudaStream_t") Pointer stream, + CUstream_st stream, @Cast("c10::approx_time_t") long time, @SharedPtr GatheredContext context/*=nullptr*/); public TraceEntry( @Cast("c10::cuda::CUDACachingAllocator::TraceEntry::Action") int action, byte device, - @Cast("int64_t") long addr, + @Cast("size_t") long addr, @Cast("size_t") long size, - @Cast("cudaStream_t") Pointer stream, + CUstream_st stream, @Cast("c10::approx_time_t") long time) { super((Pointer)null); allocate(action, device, addr, size, stream, time); } private native void allocate( @Cast("c10::cuda::CUDACachingAllocator::TraceEntry::Action") int action, byte device, - @Cast("int64_t") long addr, + @Cast("size_t") long addr, @Cast("size_t") long size, - @Cast("cudaStream_t") Pointer stream, + CUstream_st stream, @Cast("c10::approx_time_t") long time); public native Action action_(); public native TraceEntry action_(Action setter); public native byte device_(); public native TraceEntry device_(byte setter); - public native @Cast("int64_t") long addr_(); public native TraceEntry addr_(long setter); // for OOM, this is the amount of free bytes reported by cuda + public native @Cast("size_t") long addr_(); public native TraceEntry addr_(long setter); // for OOM, this is the amount of free bytes reported by cuda public native @SharedPtr GatheredContext context_(); public native TraceEntry context_(GatheredContext setter); - public native @Cast("cudaStream_t") Pointer stream_(); public native TraceEntry stream_(Pointer setter); - public native @Cast("int64_t") long size_(); public native TraceEntry size_(long setter); + public 
native CUstream_st stream_(); public native TraceEntry stream_(CUstream_st setter); + public native @Cast("size_t") long size_(); public native TraceEntry size_(long setter); public native @ByRef trace_time_ time_(); public native TraceEntry time_(trace_time_ setter); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/TraceEntryVector.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/TraceEntryVector.java index 16bea8ee2f5..b65553bfe19 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/TraceEntryVector.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/TraceEntryVector.java @@ -2,12 +2,6 @@ package org.bytedeco.pytorch.cuda; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,8 +10,22 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; import static org.bytedeco.pytorch.global.torch_cuda.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/WarningState.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/WarningState.java index afa499d7602..2a9d115b921 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/WarningState.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/WarningState.java @@ -2,12 +2,6 @@ package org.bytedeco.pytorch.cuda; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,8 +10,22 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import 
org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; import static org.bytedeco.pytorch.global.torch_cuda.*; diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/trace_time_.java b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/trace_time_.java index c3db85c2c51..428b3699bc7 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/trace_time_.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/cuda/trace_time_.java @@ -2,12 +2,6 @@ package org.bytedeco.pytorch.cuda; -import org.bytedeco.pytorch.*; -import org.bytedeco.pytorch.cuda.functions.*; -import org.bytedeco.pytorch.Error; -import org.bytedeco.pytorch.global.torch.DeviceType; -import org.bytedeco.pytorch.global.torch.ScalarType; -import org.bytedeco.pytorch.global.torch.MemoryFormat; import org.bytedeco.pytorch.Allocator; import java.nio.*; import org.bytedeco.javacpp.*; @@ -16,8 +10,22 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; import org.bytedeco.pytorch.*; import static org.bytedeco.pytorch.global.torch.*; +import org.bytedeco.cuda.cudart.*; +import static org.bytedeco.cuda.global.cudart.*; +import org.bytedeco.cuda.cublas.*; +import static org.bytedeco.cuda.global.cublas.*; +import org.bytedeco.cuda.cudnn.*; +import static org.bytedeco.cuda.global.cudnn.*; +import org.bytedeco.cuda.cusparse.*; +import static org.bytedeco.cuda.global.cusparse.*; +import org.bytedeco.cuda.cusolver.*; +import static org.bytedeco.cuda.global.cusolver.*; +import org.bytedeco.cuda.cupti.*; +import static org.bytedeco.cuda.global.cupti.*; import static org.bytedeco.pytorch.global.torch_cuda.*; @@ -40,6 +48,6 @@ public class trace_time_ extends Pointer { return new trace_time_((Pointer)this).offsetAddress(i); } - public native @Cast("c10::time_t") long t_(); public native trace_time_ t_(long setter); + public native @ByRef @Cast("time_t*") Pointer t_(); public native trace_time_ t_(Pointer setter); public native @Cast("c10::approx_time_t") long approx_t_(); public native trace_time_ approx_t_(long setter); } diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/global/gloo.java b/pytorch/src/gen/java/org/bytedeco/pytorch/global/gloo.java new file mode 100644 index 00000000000..b0c4af20444 --- /dev/null +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/global/gloo.java @@ -0,0 +1,479 @@ +// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE + +package org.bytedeco.pytorch.global; + +import org.bytedeco.pytorch.gloo.*; + +import java.nio.*; +import org.bytedeco.javacpp.*; +import org.bytedeco.javacpp.annotation.*; + +import static org.bytedeco.javacpp.presets.javacpp.*; +import static org.bytedeco.openblas.global.openblas_nolapack.*; +import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; +import org.bytedeco.pytorch.*; +import static org.bytedeco.pytorch.global.torch.*; + +public class gloo extends org.bytedeco.pytorch.presets.gloo { + static { Loader.load(); } + +// Parsed from gloo/common/string.h + +/** + * Copyright (c) 2017-present, Facebook, Inc. + * All rights reserved. + * + * This source code is licensed under the BSD-style license found in the + * LICENSE file in the root directory of this source tree. 
+ */ + +// #pragma once + +// #include +// #include + +@Namespace("gloo") public static native void MakeStringInternal(@Cast("std::stringstream*") @ByRef Pointer arg0); + +// Specializations for already-a-string types. +@Namespace("gloo") public static native @StdString BytePointer MakeString(@Cast("const char*") BytePointer cstr); +@Namespace("gloo") public static native @StdString String MakeString(String cstr); + + // namespace gloo + + +// Parsed from gloo/transport/address.h + +/** + * Copyright (c) 2017-present, Facebook, Inc. + * All rights reserved. + * + * This source code is licensed under the BSD-style license found in the + * LICENSE file in the root directory of this source tree. + */ + +// #pragma once + +// #include +// #include +// Targeting ../gloo/Address.java + + + + // namespace transport + // namespace gloo + + +// Parsed from gloo/transport/buffer.h + +/** + * Copyright (c) 2017-present, Facebook, Inc. + * All rights reserved. + * + * This source code is licensed under the BSD-style license found in the + * LICENSE file in the root directory of this source tree. + */ + +// #pragma once + +// #include +// Targeting ../gloo/Buffer.java + + + + // namespace transport + // namespace gloo + + +// Parsed from gloo/transport/unbound_buffer.h + +/** + * Copyright (c) 2018-present, Facebook, Inc. + * All rights reserved. + * + * This source code is licensed under the BSD-style license found in the + * LICENSE file in the root directory of this source tree. + */ + +// #pragma once + +// #include +// #include +// #include +// #include +// #include +// Targeting ../gloo/UnboundBuffer.java + + + + // namespace transport + // namespace gloo + + +// Parsed from gloo/transport/pair.h + +/** + * Copyright (c) 2017-present, Facebook, Inc. + * All rights reserved. + * + * This source code is licensed under the BSD-style license found in the + * LICENSE file in the root directory of this source tree. + */ + +// #pragma once + +// #include + +// #include "gloo/common/logging.h" +// #include "gloo/transport/address.h" +// #include "gloo/transport/buffer.h" +// #include "gloo/transport/unbound_buffer.h" +// Targeting ../gloo/Pair.java + + + + // namespace transport + // namespace gloo + + +// Parsed from gloo/common/common.h + +/** + * Copyright (c) 2017-present, Facebook, Inc. + * All rights reserved. + * + * This source code is licensed under the BSD-style license found in the + * LICENSE file in the root directory of this source tree. + */ + +// #pragma once + +// #include + +// make_unique is a C++14 feature. If we don't have 14, we will emulate +// its behavior. This is copied from folly/Memory.h +// #if __cplusplus >= 201402L || +// (defined __cpp_lib_make_unique && __cpp_lib_make_unique >= 201304L) || +// (defined(_MSC_VER) && _MSC_VER >= 1900) +/* using override */ +// #else + +// Allows 'make_unique(10)'. (N3690 s20.9.1.4 p3-4) + +// Disallows 'make_unique()'. (N3690 s20.9.1.4 p5) + + +// #endif + + // namespace gloo + + +// Parsed from gloo/types.h + +/** + * Copyright (c) Facebook, Inc. and its affiliates. + */ + +// #pragma once + +// #include + +// #ifdef __CUDA_ARCH__ +// #endif + +// #include "gloo/common/common.h" + +// #ifdef _WIN32 +// #endif + +// Unlike old style collectives that are class instances that hold +// some state, the new style collectives do not need initialization +// before they can run. 
Instead of asking the context for a series of +// slots and storing them for later use and reuse, the new style +// collectives take a slot (or tag) argument that allows for +// concurrent execution of multiple collectives on the same context. +// +// This tag is what determines the slot numbers for the send and recv +// operations that the collectives end up executing. A single +// collective may have many send and recv operations running in +// parallel, so instead of using the specified tag verbatim, we use it +// as a prefix. Also, to avoid conflicts between collectives with the +// same tag, we have another tag prefix per collective type. Out of +// the 64 bits we can use for a slot, we use 8 of them to identify a +// collective, 32 to identify the collective tag, another 8 for use by +// the collective operation itself (allowing for 256 independent send +// and recv operations against the same point to point pair), and +// leave 16 bits unused. +// +// Below, you find constexprs for the prefix per collective type, as +// well as a way to compute slots when executing a collective. The +// slot class below captures both a prefix and a delta on that prefix +// to support addition with bounds checking. It is usable as an +// uint64_t, but one that cannot overflow beyond the bits allocated +// for use within a collective. +// + +@Namespace("gloo") @MemberGetter public static native @Cast("const uint8_t") byte kGatherSlotPrefix(); +@Namespace("gloo") @MemberGetter public static native @Cast("const uint8_t") byte kAllgatherSlotPrefix(); +@Namespace("gloo") @MemberGetter public static native @Cast("const uint8_t") byte kReduceSlotPrefix(); +@Namespace("gloo") @MemberGetter public static native @Cast("const uint8_t") byte kAllreduceSlotPrefix(); +@Namespace("gloo") @MemberGetter public static native @Cast("const uint8_t") byte kScatterSlotPrefix(); +@Namespace("gloo") @MemberGetter public static native @Cast("const uint8_t") byte kBroadcastSlotPrefix(); +@Namespace("gloo") @MemberGetter public static native @Cast("const uint8_t") byte kBarrierSlotPrefix(); +@Namespace("gloo") @MemberGetter public static native @Cast("const uint8_t") byte kAlltoallSlotPrefix(); +@Namespace("gloo") public static native @ByVal float16 cpu_float2half_rn(float f); +@Namespace("gloo") public static native float cpu_half2float(@ByVal float16 h); +// Targeting ../gloo/float16.java + + + +@Namespace("gloo") public static native @Cast("std::ostream*") @ByRef @Name("operator <<") Pointer shiftLeft(@Cast("std::ostream*") @ByRef Pointer stream, @Const @ByRef float16 val); + +@Namespace("gloo") public static native @ByVal @Name("operator +") float16 add(@Const @ByRef float16 lhs, @Const @ByRef float16 rhs); + +@Namespace("gloo") public static native @ByVal @Name("operator -") float16 subtract(@Const @ByRef float16 lhs, @Const @ByRef float16 rhs); + +@Namespace("gloo") public static native @ByVal @Name("operator *") float16 multiply(@Const @ByRef float16 lhs, @Const @ByRef float16 rhs); + +@Namespace("gloo") public static native @ByVal @Name("operator /") float16 divide(@Const @ByRef float16 lhs, @Const @ByRef float16 rhs); + +@Namespace("gloo") public static native @Cast("bool") @Name("operator <") boolean lessThan(@Const @ByRef float16 lhs, @Const @ByRef float16 rhs); + +@Namespace("gloo") public static native @Cast("bool") @Name("operator <=") boolean lessThanEquals(@Const @ByRef float16 lhs, @Const @ByRef float16 rhs); + +@Namespace("gloo") public static native @Cast("bool") @Name("operator >") boolean 
greaterThan(@Const @ByRef float16 lhs, @Const @ByRef float16 rhs); + +@Namespace("gloo") public static native @Cast("bool") @Name("operator >=") boolean greaterThanEquals(@Const @ByRef float16 lhs, @Const @ByRef float16 rhs); + + // namespace gloo + + +// Parsed from gloo/math.h + +/** + * Copyright (c) 2017-present, Facebook, Inc. + * All rights reserved. + * + * This source code is licensed under the BSD-style license found in the + * LICENSE file in the root directory of this source tree. + */ + +// #pragma once + +// #include "gloo/types.h" + + + + + + + + + + + + + + + + + +@Namespace("gloo") public static native @Cast("uint32_t") int log2ceil(@Cast("uint32_t") int value); + +// #if GLOO_USE_AVX + + + + + + + + + + + + + +// #endif + + // namespace gloo + + +// Parsed from gloo/algorithm.h + +/** + * Copyright (c) 2017-present, Facebook, Inc. + * All rights reserved. + * + * This source code is licensed under the BSD-style license found in the + * LICENSE file in the root directory of this source tree. + */ + +// #pragma once + +// #include + +// #include "gloo/context.h" +// #include "gloo/math.h" + +public static final long kOnDeviceThreshold = 256 * 1024; +// Targeting ../gloo/Algorithm.java + + + +// Type of reduction function. +// +// If the reduction type is one of the built-ins, algorithm +// implementations may use accelerated versions if available. +// +// For example, if a ReductionFunction with ReductionType equal +// SUM is passed to CUDA aware Allreduce, it knows it can +// use a NCCL implementation instead of the specified function. +// +@Namespace("gloo") public enum ReductionType { + SUM(1), + PRODUCT(2), + MAX(3), + MIN(4), + + // Use larger number so we have plenty of room to add built-ins + CUSTOM(1000); + + public final int value; + private ReductionType(int v) { this.value = v; } + private ReductionType(ReductionType e) { this.value = e.value; } + public ReductionType intern() { for (ReductionType e : values()) if (e.value == value) return e; return this; } + @Override public String toString() { return intern().name(); } +} +// Targeting ../gloo/ReductionFunctionFloat.java + + +// Targeting ../gloo/ReductionFunctionInt.java + + + + + + + + +// Local operation. +// If an algorithm uses multiple local pointers, local operations +// can be used for local reduction, broadcast, gathering, etc. + + // namespace gloo + + +// Parsed from gloo/common/store.h + +/** + * Copyright (c) 2017-present, Facebook, Inc. + * All rights reserved. + * + * This source code is licensed under the BSD-style license found in the + * LICENSE file in the root directory of this source tree. + */ + +// #pragma once + +// #include +// #include +// #include +// Targeting ../gloo/IStore.java + + + + // namespace gloo + + +// Parsed from gloo/rendezvous/store.h + +/** + * Copyright (c) 2017-present, Facebook, Inc. + * All rights reserved. + * + * This source code is licensed under the BSD-style license found in the + * LICENSE file in the root directory of this source tree. + */ + +// #pragma once + +// #include +// #include +// #include + +// #include "gloo/common/logging.h" +// #include "gloo/common/error.h" +// #include "gloo/common/store.h" + +//can be used by upstream users to know whether this is available or not. +public static final int GLOO_STORE_HAS_STORE_V2 = 1; +// Targeting ../gloo/Store.java + + + + // namespace rendezvous + // namespace gloo + + +// Parsed from gloo/transport/context.h + +/** + * Copyright (c) 2018-present, Facebook, Inc. + * All rights reserved. 
+ * + * This source code is licensed under the BSD-style license found in the + * LICENSE file in the root directory of this source tree. + */ + +// #pragma once + +// #include +// #include +// #include +// #include +// #include +// #include +// #include +// #include + +// #include "gloo/common/store.h" +// #include "gloo/transport/pair.h" +// #include "gloo/transport/unbound_buffer.h" +// Targeting ../gloo/TransportContext.java + + + + // namespace transport + // namespace gloo + + +// Parsed from gloo/transport/device.h + +/** + * Copyright (c) 2017-present, Facebook, Inc. + * All rights reserved. + * + * This source code is licensed under the BSD-style license found in the + * LICENSE file in the root directory of this source tree. + */ + +// #pragma once + +// #include +// #include + +// #include "gloo/transport/context.h" +// #include "gloo/transport/pair.h" + +// Forward declarations +// Targeting ../gloo/Device.java + + + + // namespace transport + // namespace gloo + + +} diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/global/torch.java b/pytorch/src/gen/java/org/bytedeco/pytorch/global/torch.java index cd5d30c8eff..fa1e8fbf33a 100644 --- a/pytorch/src/gen/java/org/bytedeco/pytorch/global/torch.java +++ b/pytorch/src/gen/java/org/bytedeco/pytorch/global/torch.java @@ -6,7 +6,6 @@ import org.bytedeco.pytorch.Allocator; import org.bytedeco.pytorch.Function; -import org.bytedeco.pytorch.functions.*; import org.bytedeco.pytorch.Module; import org.bytedeco.javacpp.annotation.Cast; import java.nio.*; @@ -16,539 +15,592 @@ import static org.bytedeco.javacpp.presets.javacpp.*; import static org.bytedeco.openblas.global.openblas_nolapack.*; import static org.bytedeco.openblas.global.openblas.*; +import org.bytedeco.javacpp.chrono.*; +import static org.bytedeco.javacpp.global.chrono.*; public class torch extends org.bytedeco.pytorch.presets.torch { static { Loader.load(); } -// Targeting ../InlinedCallStackOptional.java +// Targeting ../T_TensorTensor_TOptional.java -// Targeting ../BatchSizeOptional.java +// Targeting ../TensorDeque.java -// Targeting ../BoolOptional.java +// Targeting ../RecordFunctionHandleIntList.java -// Targeting ../ByteOptional.java +// Targeting ../StringStringMap.java -// Targeting ../IntOptional.java +// Targeting ../StringLongMap.java -// Targeting ../LongOptional.java +// Targeting ../StringTensorMap.java -// Targeting ../FloatOptional.java +// Targeting ../ActivityTypeSet.java -// Targeting ../DoubleOptional.java +// Targeting ../DimnameVector.java -// Targeting ../SizeTOptional.java +// Targeting ../FunctionPreHookVector.java -// Targeting ../StringOptional.java +// Targeting ../FunctionPostHookVector.java -// Targeting ../BoolVectorOptional.java +// Targeting ../DefVector.java -// Targeting ../LongVectorOptional.java +// Targeting ../PropertyVector.java -// Targeting ../DoubleVectorOptional.java +// Targeting ../OptimizerParamGroupVector.java -// Targeting ../SizeTVectorOptional.java +// Targeting ../FunctionSchemaVector.java -// Targeting ../StringVectorOptional.java +// Targeting ../DataPtrVector.java -// Targeting ../StrideVectorOptional.java +// Targeting ../WeakStorageVector.java -// Targeting ../ShapeSymbolVectorOptional.java +// Targeting ../StringTensorDictItemVector.java -// Targeting ../TensorVectorOptional.java +// Targeting ../StringAnyModuleDictItemVector.java -// Targeting ../DeviceOptional.java +// Targeting ../StringSharedModuleDictItemVector.java -// Targeting ../DeviceTypeOptional.java +// Targeting ../BoolVector.java -// Targeting 
../LongArrayRefOptional.java +// Targeting ../ByteVector.java -// Targeting ../DoubleArrayRefOptional.java +// Targeting ../BytePointerVector.java -// Targeting ../SymIntArrayRefOptional.java +// Targeting ../LongVector.java -// Targeting ../LayoutOptional.java +// Targeting ../DoubleVector.java -// Targeting ../MemoryFormatOptional.java +// Targeting ../SizeTVector.java -// Targeting ../ScalarOptional.java +// Targeting ../StringVector.java -// Targeting ../ScalarTypeOptional.java +// Targeting ../StringViewVector.java -// Targeting ../AliasInfoOptional.java +// Targeting ../StringLongVector.java -// Targeting ../IValueOptional.java +// Targeting ../IValueVector.java -// Targeting ../CppSignatureOptional.java +// Targeting ../QEngineVector.java -// Targeting ../DispatchKeyOptional.java +// Targeting ../ScalarTypeVector.java -// Targeting ../OperatorHandleOptional.java +// Targeting ../SymbolVector.java -// Targeting ../OperatorNameOptional.java +// Targeting ../LongOptionalVector.java -// Targeting ../QualifiedNameOptional.java +// Targeting ../IValueOptionalVector.java -// Targeting ../StreamOptional.java +// Targeting ../SharedClassTypeVector.java -// Targeting ../StrideOptional.java +// Targeting ../TypeVector.java -// Targeting ../TypePtrOptional.java +// Targeting ../StrideVector.java -// Targeting ../ClassTypePropertyOptional.java +// Targeting ../ShapeSymbolVector.java -// Targeting ../AliasTypeSetOptional.java +// Targeting ../TensorImplVector.java -// Targeting ../FunctionSchemaOptional.java +// Targeting ../EdgeVector.java -// Targeting ../SymDimVectorOptional.java +// Targeting ../TensorVector.java -// Targeting ../SymIntOptional.java +// Targeting ../TensorIndexVector.java -// Targeting ../IValueOptional.java +// Targeting ../TensorOptionalVector.java -// Targeting ../DimVectorOptional.java +// Targeting ../FunctionVector.java -// Targeting ../DimnameOptional.java +// Targeting ../GraphVector.java -// Targeting ../DimnameListOptional.java +// Targeting ../OperatorVector.java -// Targeting ../GeneratorOptional.java +// Targeting ../ResolverVector.java -// Targeting ../TensorOptional.java +// Targeting ../ValueVector.java -// Targeting ../TensorArrayRefOptional.java +// Targeting ../JitNodeVector.java -// Targeting ../TypeMetaOptional.java +// Targeting ../AnyModuleVector.java -// Targeting ../ExecutorExecutionModeOptional.java +// Targeting ../SharedModuleVector.java -// Targeting ../ScopeOptional.java +// Targeting ../StringTensorVector.java -// Targeting ../ModuleInstanceInfoOptional.java +// Targeting ../StringAnyModuleVector.java -// Targeting ../SourceRangeOptional.java +// Targeting ../StringSharedModuleVector.java -// Targeting ../MethodOptional.java +// Targeting ../FusionStrategy.java -// Targeting ../NamedValueOptional.java +// Targeting ../SymIntVector.java -// Targeting ../ValueOptional.java +// Targeting ../SharedSugaredValueVector.java -// Targeting ../LongExpandingArrayOptional.java +// Targeting ../TagVector.java -// Targeting ../DoubleExpandingArrayOptional.java +// Targeting ../ReadAdapterInterfaceVector.java -// Targeting ../T_StringSizeTSizeT_TOptional.java +// Targeting ../SizeTVectorVector.java -// Targeting ../T_TypePtrLong_TOptional.java +// Targeting ../LongArrayRefVector.java -// Targeting ../StringViewOptional.java +// Targeting ../FutureVector.java -// Targeting ../StringViewVectorOptional.java +// Targeting ../SymNodeVector.java -// Targeting ../PointerPairOptional.java +// Targeting ../GlooDeviceVector.java -// Targeting 
../WeakStorageVectorOptional.java +// Targeting ../ExampleVector.java -// Targeting ../CppSignatureOptional.java +// Targeting ../TensorExampleVector.java -// Targeting ../SafePyObjectOptional.java +// Targeting ../ExampleVector.java -// Targeting ../BytePointerPairOptional.java +// Targeting ../StringTensorPair.java -// Targeting ../ExampleOptional.java +// Targeting ../StringAnyModulePair.java -// Targeting ../ExampleVectorOptional.java +// Targeting ../StringSharedModulePair.java -// Targeting ../TensorExampleOptional.java +// Targeting ../RecordFunctionHandleIntPair.java -// Targeting ../TensorExampleVectorOptional.java +// Targeting ../PointerPair.java -// Targeting ../T_TensorTensor_TOptional.java +// Targeting ../SizeTMatchedSchemaPair.java -// Targeting ../TensorDeque.java +// Targeting ../BytePointerPair.java -// Targeting ../RecordFunctionHandleIntList.java +// Targeting ../EnumNameValue.java -// Targeting ../StringStringMap.java +// Targeting ../IntPair.java -// Targeting ../StringLongMap.java +// Targeting ../T_DataPtrSizeT_T.java -// Targeting ../ActivityTypeSet.java +// Targeting ../T_IntInt_T.java -// Targeting ../DimnameVector.java +// Targeting ../T_LongLong_T.java -// Targeting ../FunctionPreHookVector.java +// Targeting ../T_TensorTensor_T.java -// Targeting ../FunctionPostHookVector.java +// Targeting ../T_TensorTensorTensor_T.java -// Targeting ../DefVector.java +// Targeting ../T_TensorTensorTensorTensor_T.java -// Targeting ../PropertyVector.java +// Targeting ../T_TensorTensorTensorTensorTensor_T.java -// Targeting ../OptimizerParamGroupVector.java +// Targeting ../T_TensorTensorTensorTensorTensorTensorTensor_T.java -// Targeting ../FunctionSchemaVector.java +// Targeting ../T_TensorTensorTensorTensorVector_T.java -// Targeting ../DataPtrVector.java +// Targeting ../T_TensorTensorDoubleLong_T.java -// Targeting ../WeakStorageVector.java +// Targeting ../T_TensorT_TensorTensor_T_T.java -// Targeting ../StringTensorDictItemVector.java +// Targeting ../T_TensorMaybeOwnedTensorMaybeOwned_T.java -// Targeting ../StringAnyModuleDictItemVector.java +// Targeting ../T_TensorMaybeOwnedTensorMaybeOwnedTensorMaybeOwned_T.java -// Targeting ../StringSharedModuleDictItemVector.java +// Targeting ../T_PackedSequenceTensor_T.java -// Targeting ../BoolVector.java +// Targeting ../T_PackedSequenceT_TensorTensor_T_T.java -// Targeting ../BytePointerVector.java +// Targeting ../T_StringSizeTSizeT_T.java -// Targeting ../LongVector.java +// Targeting ../T_TensorTensorVector_T.java -// Targeting ../DoubleVector.java +// Targeting ../T_TensorTensorVectorTensorVector_T.java -// Targeting ../SizeTVector.java +// Targeting ../T_TypePtrLong_T.java -// Targeting ../StringVector.java +// Targeting ../T_SafePyObjectTorchDispatchModeKey_T.java -// Targeting ../StringViewVector.java +// Targeting ../T_SizeTVectorVectorSizeTVector_T.java -// Targeting ../StringLongVector.java +// Targeting ../T_PyObject_TorchDispatchModeTorchDispatchModeKey_T.java -// Targeting ../IValueVector.java +// Targeting ../NodeNodeCallMap.java -// Targeting ../QEngineVector.java +// Targeting ../SizeTStringMap.java -// Targeting ../ScalarTypeVector.java +// Targeting ../HashAliasedIValueMap.java -// Targeting ../SymbolVector.java +// Targeting ../StringBoolMap.java -// Targeting ../LongOptionalVector.java +// Targeting ../StringSizeTMap.java -// Targeting ../IValueOptionalVector.java +// Targeting ../ExtraFilesMap.java -// Targeting ../SharedClassTypeVector.java +// Targeting ../TypeEnv.java -// Targeting 
../TypeVector.java +// Targeting ../StringIValueMap.java -// Targeting ../StrideVector.java +// Targeting ../StringValueMap.java -// Targeting ../ShapeSymbolVector.java +// Targeting ../ValueValueMap.java -// Targeting ../TensorImplVector.java +// Targeting ../ArgumentSpecExecutionPlanMap.java -// Targeting ../EdgeVector.java +// Targeting ../TreeStringMap.java -// Targeting ../TensorVector.java +// Targeting ../StringIntMap.java -// Targeting ../TensorIndexVector.java +// Targeting ../HashIdentityIValueMap.java -// Targeting ../TensorOptionalVector.java +// Targeting ../StringSet.java -// Targeting ../FunctionVector.java +// Targeting ../HashAliasedIValues.java -// Targeting ../GraphVector.java +// Targeting ../SymbolSet.java -// Targeting ../OperatorVector.java +// Targeting ../TensorImplSet.java -// Targeting ../ResolverVector.java +// Targeting ../NodeSet.java -// Targeting ../ValueVector.java +// Targeting ../DeviceTypeSet.java -// Targeting ../JitNodeVector.java +// Targeting ../ShortSet.java -// Targeting ../AnyModuleVector.java +// Targeting ../InlinedCallStackOptional.java -// Targeting ../SharedModuleVector.java +// Targeting ../BatchSizeOptional.java -// Targeting ../StringTensorVector.java +// Targeting ../BoolOptional.java -// Targeting ../StringAnyModuleVector.java +// Targeting ../ByteOptional.java -// Targeting ../StringSharedModuleVector.java +// Targeting ../IntOptional.java -// Targeting ../FusionStrategy.java +// Targeting ../LongOptional.java -// Targeting ../SymIntVector.java +// Targeting ../FloatOptional.java -// Targeting ../SharedSugaredValueVector.java +// Targeting ../DoubleOptional.java -// Targeting ../TagVector.java +// Targeting ../SizeTOptional.java -// Targeting ../ReadAdapterInterfaceVector.java +// Targeting ../StringOptional.java -// Targeting ../ExampleVector.java +// Targeting ../BoolVectorOptional.java -// Targeting ../TensorExampleVector.java +// Targeting ../LongVectorOptional.java -// Targeting ../ExampleVector.java +// Targeting ../DoubleVectorOptional.java -// Targeting ../StringTensorPair.java +// Targeting ../SizeTVectorOptional.java -// Targeting ../StringAnyModulePair.java +// Targeting ../StringVectorOptional.java -// Targeting ../StringSharedModulePair.java +// Targeting ../StrideVectorOptional.java -// Targeting ../RecordFunctionHandleIntPair.java +// Targeting ../ShapeSymbolVectorOptional.java -// Targeting ../PointerPair.java +// Targeting ../TensorVectorOptional.java -// Targeting ../SizeTMatchedSchemaPair.java +// Targeting ../DeviceOptional.java -// Targeting ../BytePointerPair.java +// Targeting ../DeviceTypeOptional.java -// Targeting ../EnumNameValue.java +// Targeting ../LongArrayRefOptional.java -// Targeting ../T_DataPtrSizeT_T.java +// Targeting ../DoubleArrayRefOptional.java -// Targeting ../T_IntInt_T.java +// Targeting ../SymIntArrayRefOptional.java -// Targeting ../T_LongLong_T.java +// Targeting ../LayoutOptional.java -// Targeting ../T_TensorTensor_T.java +// Targeting ../MemoryFormatOptional.java -// Targeting ../T_TensorTensorTensor_T.java +// Targeting ../ScalarOptional.java -// Targeting ../T_TensorTensorTensorTensor_T.java +// Targeting ../ScalarTypeOptional.java -// Targeting ../T_TensorTensorTensorTensorTensor_T.java +// Targeting ../AliasInfoOptional.java -// Targeting ../T_TensorTensorTensorTensorTensorTensorTensor_T.java +// Targeting ../IValueOptional.java -// Targeting ../T_TensorTensorTensorTensorVector_T.java +// Targeting ../CppSignatureOptional.java -// Targeting ../T_TensorTensorDoubleLong_T.java +// 
Targeting ../DispatchKeyOptional.java -// Targeting ../T_TensorT_TensorTensor_T_T.java +// Targeting ../OperatorHandleOptional.java -// Targeting ../T_TensorMaybeOwnedTensorMaybeOwned_T.java +// Targeting ../OperatorNameOptional.java -// Targeting ../T_TensorMaybeOwnedTensorMaybeOwnedTensorMaybeOwned_T.java +// Targeting ../QualifiedNameOptional.java -// Targeting ../T_PackedSequenceTensor_T.java +// Targeting ../StreamOptional.java -// Targeting ../T_PackedSequenceT_TensorTensor_T_T.java +// Targeting ../StrideOptional.java -// Targeting ../T_StringSizeTSizeT_T.java +// Targeting ../TypePtrOptional.java -// Targeting ../T_TensorTensorVector_T.java +// Targeting ../ClassTypePropertyOptional.java -// Targeting ../T_TensorTensorVectorTensorVector_T.java +// Targeting ../AliasTypeSetOptional.java -// Targeting ../T_TypePtrLong_T.java +// Targeting ../FunctionSchemaOptional.java -// Targeting ../T_SafePyObjectTorchDispatchModeKey_T.java +// Targeting ../SymDimVectorOptional.java -// Targeting ../HashAliasedIValueMap.java +// Targeting ../SymIntOptional.java -// Targeting ../StringBoolMap.java +// Targeting ../IValueOptional.java -// Targeting ../StringSizeTMap.java +// Targeting ../DimVectorOptional.java -// Targeting ../ExtraFilesMap.java +// Targeting ../DimnameOptional.java -// Targeting ../TypeEnv.java +// Targeting ../DimnameListOptional.java -// Targeting ../StringIValueMap.java +// Targeting ../GeneratorOptional.java -// Targeting ../StringValueMap.java +// Targeting ../TensorOptional.java -// Targeting ../ValueValueMap.java +// Targeting ../TensorArrayRefOptional.java -// Targeting ../ArgumentSpecExecutionPlanMap.java +// Targeting ../TypeMetaOptional.java -// Targeting ../TreeRefStringMap.java +// Targeting ../ExecutorExecutionModeOptional.java -// Targeting ../StringIntMap.java +// Targeting ../ScopeOptional.java -// Targeting ../StringSet.java +// Targeting ../ModuleInstanceInfoOptional.java -// Targeting ../HashAliasedIValues.java +// Targeting ../SourceRangeOptional.java -// Targeting ../SymbolSet.java +// Targeting ../MethodOptional.java -// Targeting ../TensorImplSet.java +// Targeting ../NamedValueOptional.java -// Targeting ../NodeSet.java +// Targeting ../ValueOptional.java -// Targeting ../DeviceTypeSet.java +// Targeting ../LongExpandingArrayOptional.java + + +// Targeting ../DoubleExpandingArrayOptional.java + + +// Targeting ../T_StringSizeTSizeT_TOptional.java + + +// Targeting ../T_TypePtrLong_TOptional.java + + +// Targeting ../StringViewOptional.java + + +// Targeting ../StringViewVectorOptional.java + + +// Targeting ../PointerPairOptional.java + + +// Targeting ../WeakStorageVectorOptional.java + + +// Targeting ../CppSignatureOptional.java + + +// Targeting ../SafePyObjectOptional.java + + +// Targeting ../BytePointerPairOptional.java + + +// Targeting ../DistributedBackendOptional.java + + +// Targeting ../LoggerOptional.java + + +// Targeting ../PyObject_TorchDispatchModeOptional.java + + +// Targeting ../ExampleOptional.java + + +// Targeting ../ExampleVectorOptional.java + + +// Targeting ../TensorExampleOptional.java + + +// Targeting ../TensorExampleVectorOptional.java // Targeting ../Nonlinearity.java @@ -875,8 +927,8 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #define C10_STRINGIZE(x) C10_STRINGIZE_IMPL(x) /** - * C10_ANONYMOUS_VARIABLE(str) introduces an identifier starting with - * str and ending with a number that varies with the line. 
+ * C10_ANONYMOUS_VARIABLE(str) introduces a new identifier which starts with + * str and ends with a unique number. */ // #ifdef __COUNTER__ // #else @@ -1048,8 +1100,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // CUDA_KERNEL_ASSERT checks the assertion // even when NDEBUG is defined. This is useful for important assertions in CUDA // code that would otherwise be suppressed when building Release. -// #if defined(__ANDROID__) || defined(__APPLE__) || defined(__FreeBSD__) || -// (defined(USE_ROCM) && ROCM_VERSION < 40100) +// #if defined(__ANDROID__) || defined(__APPLE__) || defined(__FreeBSD__) // Those platforms do not support assert() // #define CUDA_KERNEL_ASSERT(cond) // #define SYCL_KERNEL_ASSERT(cond) @@ -1207,6 +1258,72 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #endif // C10_MACROS_MACROS_H_ +// Parsed from c10/util/Lazy.h + +// #pragma once + +// #include +// #include + +/** + * Thread-safe lazy value with opportunistic concurrency: on concurrent first + * access, the factory may be called by multiple threads, but only one result is + * stored and its reference returned to all the callers. + * + * Value is heap-allocated; this optimizes for the case in which the value is + * never actually computed. + */ +// Targeting ../Backtrace.java + + + +/** + * Convenience thread-safe LazyValue implementation with opportunistic + * concurrency. + */ + +/** + * Convenience immutable (thus thread-safe) LazyValue implementation for cases + * in which the value is not actually lazy. + */ + + // namespace c10 + + +// Parsed from c10/util/Backtrace.h + +// #ifndef C10_UTIL_BACKTRACE_H_ +// #define C10_UTIL_BACKTRACE_H_ + +// #include +// #include +// #include +// #include + +// #include +// #include + +// Symbolizing the backtrace can be expensive; pass it around as a lazy string +// so it is symbolized only if actually needed. + +// DEPRECATED: Prefer get_lazy_backtrace(). +@Namespace("c10") public static native @StdString BytePointer get_backtrace( + @Cast("size_t") long frames_to_skip/*=0*/, + @Cast("size_t") long maximum_number_of_frames/*=64*/, + @Cast("bool") boolean skip_python_frames/*=true*/); +@Namespace("c10") public static native @StdString BytePointer get_backtrace(); + +@Namespace("c10") public static native @SharedPtr("const c10::LazyValue") @ByVal Backtrace get_lazy_backtrace( + @Cast("size_t") long frames_to_skip/*=0*/, + @Cast("size_t") long maximum_number_of_frames/*=64*/, + @Cast("bool") boolean skip_python_frames/*=true*/); +@Namespace("c10") public static native @SharedPtr("const c10::LazyValue") @ByVal Backtrace get_lazy_backtrace(); + + // namespace c10 + +// #endif // C10_UTIL_BACKTRACE_H_ + + // Parsed from c10/core/DeviceType.h // #pragma once @@ -1251,7 +1368,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { IDEEP((byte)(5)), // IDEEP. 
HIP((byte)(6)), // AMD HIP FPGA((byte)(7)), // FPGA - ORT((byte)(8)), // ONNX Runtime / Microsoft + MAIA((byte)(8)), // ONNX Runtime / Microsoft XLA((byte)(9)), // XLA / TPU Vulkan((byte)(10)), // Vulkan Metal((byte)(11)), // Metal @@ -1281,7 +1398,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @Namespace("c10") @MemberGetter public static native DeviceType kCUDA(); @Namespace("c10") @MemberGetter public static native DeviceType kHIP(); @Namespace("c10") @MemberGetter public static native DeviceType kFPGA(); -@Namespace("c10") @MemberGetter public static native DeviceType kORT(); +@Namespace("c10") @MemberGetter public static native DeviceType kMAIA(); @Namespace("c10") @MemberGetter public static native DeviceType kXLA(); @Namespace("c10") @MemberGetter public static native DeviceType kMPS(); @Namespace("c10") @MemberGetter public static native DeviceType kMeta(); @@ -1478,19 +1595,27 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include +// #include +// #include // #include // #include // #include +// #include // #include // #include // #include // #if defined(_MSC_VER) && _MSC_VER <= 1900 // #endif -// Targeting ../Error.java - +/** The primary ATen error class. + * Provides a complete error message with source location information via + * {@code what()}, and a more concise message via {@code what_without_backtrace()}. + * Don't throw this directly; use TORCH_CHECK/TORCH_INTERNAL_ASSERT instead. + * + * NB: c10::Error is handled specially by the default torch to suppress the + * backtrace, see torch/csrc/Exceptions.h */ // Targeting ../Warning.java @@ -1536,46 +1661,44 @@ public class torch extends org.bytedeco.pytorch.presets.torch { + // namespace WarningUtils -// Targeting ../ErrorAlwaysShowCppStacktrace.java - - -// Targeting ../IndexError.java - - -// Targeting ../ValueError.java - - -// Targeting ../TypeError.java - - -// Targeting ../NotImplementedError.java +// Like Error, but we always report the C++ backtrace, instead of only +// reporting when TORCH_SHOW_CPP_STACKTRACES +// Used in ATen for out-of-bound indices that can reasonably only be detected +// lazily inside a kernel (See: advanced indexing). These turn into +// IndexError when they cross to Python. -// Targeting ../EnforceFiniteError.java +// Used in ATen for invalid values. These turn into +// ValueError when they cross to Python. +// Used in ATen for invalid types. These turn into +// TypeError when they cross to Python. -// Targeting ../OnnxfiBackendSystemError.java +// Used in ATen for functionality that is not implemented. These turn into +// NotImplementedError when they cross to Python. +// Used in ATen for non finite indices. These turn into +// ExitException when they cross to Python. -// Targeting ../LinAlgError.java +// Used in Onnxifi backend lowering. These turn into +// ExitException when they cross to Python. +// Used for numerical errors from the linalg module. These +// turn into LinAlgError when they cross into Python. -// Targeting ../OutOfMemoryError.java +// Base error type for all distributed errors. +// These turn into DistError when they cross into Python. +// Used for collective communication library errors from the distributed module. +// These turn into DistBackendError when they cross into Python. -// Targeting ../DistError.java - - -// Targeting ../DistBackendError.java - - -// Targeting ../DistStoreError.java - - -// Targeting ../DistNetworkError.java - +// Used for errors originating from the store. 
+// These turn into DistStoreError when they cross into Python. +// Used for errors originating from the TCP/IP stack and not from collective +// libraries. These turn into DistNetworkError when they cross into Python. // A utility function to return an exception std::string by prepending its // exception type before its what() content @@ -2204,13 +2327,11 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // https://gitlab.com/pytorch-complex/vitis_kernels // TODO: put this in BackendComponents - // ONNX Runtime, lives out of tree at https://github.com/pytorch/ort and - // https://github.com/microsoft/onnxruntime, and is also used to test general - // backend/extension machinery in the core. cf: - // - test/cpp_extensions/ort_extension.cpp + // MAIA backend lives out of tree + // - test/cpp_extensions/maia_extension.cpp // - test/test_torch.py // - aten/src/ATen/test/extension_backend_test.cpp - ORT((short)(Undefined.value + 3)), + MAIA((short)(Undefined.value + 3)), Vulkan((short)(Undefined.value + 4)), // TODO: put this in BackendComponents Metal((short)(Undefined.value + 5)), // TODO: put this in BackendComponents @@ -3279,7 +3400,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { SparseCsrVE(16), SparseCsrXPU(17), SparseCsrPrivateUse1(18), - ORT(19), + MAIA(19), XLA(20), Vulkan(21), Metal(22), @@ -3313,7 +3434,6 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @Namespace("c10") public static native DeviceType backendToDeviceType(Backend b); @Namespace("c10") public static native @Cast("c10::DeviceType") byte backendToDeviceType(@Cast("c10::Backend") int b); -// TODO: This probably shouldn't actually be static inline @Namespace("c10") public static native @Cast("const char*") BytePointer toString(Backend b); @Namespace("c10") public static native String toString(@Cast("c10::Backend") int b); @@ -3473,9 +3593,6 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include // #include - -// #if C10_CLANG_HAS_WARNING("-Wshorten-64-to-32") -// #endif // Targeting ../IntSizedSmallVectorBase.java @@ -3556,7 +3673,6 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // end namespace std - // Parsed from c10/util/ArrayRef.h //===--- ArrayRef.h - Array Reference Wrapper -------------------*- C++ -*-===// @@ -3625,7 +3741,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // Targeting ../FloatComplexArrayRef.java -// Targeting ../FuturePtrArrayRef.java +// Targeting ../FutureArrayRef.java // Targeting ../HalfArrayRef.java @@ -3955,6 +4071,8 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include // #include +// #include +// #include // #if defined(__CUDACC__) && !defined(USE_ROCM) // #endif @@ -3971,6 +4089,10 @@ public class torch extends org.bytedeco.pytorch.presets.torch { +@Namespace("c10") public static native @Cast("std::ostream*") @ByRef @Name("operator <<") Pointer shiftLeft( + @Cast("std::ostream*") @ByRef Pointer out, + @Const @ByRef BFloat16 value); + // namespace c10 // #include // IWYU pragma: keep @@ -4338,7 +4460,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include -// #if defined(__cplusplus) && (__cplusplus >= 201103L) +// #if defined(__cplusplus) // #include // #include // #elif !defined(__OPENCL_VERSION__) @@ -4382,7 +4504,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { -@Namespace("c10") public static native @Cast("std::ostream*") @ByRef @Name("operator <<") Pointer 
shiftLeft(@Cast("std::ostream*") @ByRef Pointer out, @Const @ByRef Float8_e4m3fn value); +@Namespace("c10") public static native @Cast("std::ostream*") @ByRef @Name("operator <<") Pointer shiftLeft( + @Cast("std::ostream*") @ByRef Pointer out, + @Const @ByRef Float8_e4m3fn value); // namespace c10 @@ -4397,6 +4521,10 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include +// #if defined(SYCL_LANGUAGE_VERSION) +// #include +// #endif + /* * Convert a 8-bit floating-point number in either f8 E4M3FNUZ or bf8 E5M2FNUZ * format, in bit representation, to a 32-bit floating-point number. @@ -4557,7 +4685,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include -// #if defined(__cplusplus) && (__cplusplus >= 201103L) +// #if defined(__cplusplus) // #include // #elif !defined(__OPENCL_VERSION__) // #include @@ -4987,11 +5115,12 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include // #include +// #include // #include // #include // #include -// #if defined(__cplusplus) && (__cplusplus >= 201103L) +// #if defined(__cplusplus) // #include // #elif !defined(__OPENCL_VERSION__) // #include @@ -5005,6 +5134,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include // #include +// #include // #ifdef __CUDACC__ // #include @@ -5270,7 +5400,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { -@Namespace("c10") public static native @Cast("std::ostream*") @ByRef @Name("operator <<") Pointer shiftLeft(@Cast("std::ostream*") @ByRef Pointer out, @Const @ByRef Float8_e5m2 value); +@Namespace("c10") public static native @Cast("std::ostream*") @ByRef @Name("operator <<") Pointer shiftLeft( + @Cast("std::ostream*") @ByRef Pointer out, + @Const @ByRef Float8_e5m2 value); // namespace c10 @@ -5430,7 +5562,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include -// #if defined(__cplusplus) && (__cplusplus >= 201103L) +// #if defined(__cplusplus) // #include // #elif !defined(__OPENCL_VERSION__) // #include @@ -5833,87 +5965,6 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // ::c10::ScalarType::SCALARTYPE3>::t), // SCALARTYPE3) -// #define AT_FORALL_SCALAR_TYPES_AND4( -// SCALARTYPE1, SCALARTYPE2, SCALARTYPE3, SCALARTYPE4, _) -// _(uint8_t, Byte) -// _(int8_t, Char) -// _(int16_t, Short) -// _(int, Int) -// _(int64_t, Long) -// _(float, Float) -// _(double, Double) -// _(decltype(::c10::impl::ScalarTypeToCPPType< -// ::c10::ScalarType::SCALARTYPE1>::t), -// SCALARTYPE1) -// _(decltype(::c10::impl::ScalarTypeToCPPType< -// ::c10::ScalarType::SCALARTYPE2>::t), -// SCALARTYPE2) -// _(decltype(::c10::impl::ScalarTypeToCPPType< -// ::c10::ScalarType::SCALARTYPE3>::t), -// SCALARTYPE3) -// _(decltype(::c10::impl::ScalarTypeToCPPType< -// ::c10::ScalarType::SCALARTYPE4>::t), -// SCALARTYPE4) - -// #define AT_FORALL_SCALAR_TYPES_AND5( -// SCALARTYPE1, SCALARTYPE2, SCALARTYPE3, SCALARTYPE4, SCALARTYPE5, _) -// _(uint8_t, Byte) -// _(int8_t, Char) -// _(int16_t, Short) -// _(int, Int) -// _(int64_t, Long) -// _(float, Float) -// _(double, Double) -// _(decltype(::c10::impl::ScalarTypeToCPPType< -// ::c10::ScalarType::SCALARTYPE1>::t), -// SCALARTYPE1) -// _(decltype(::c10::impl::ScalarTypeToCPPType< -// ::c10::ScalarType::SCALARTYPE2>::t), -// SCALARTYPE2) -// _(decltype(::c10::impl::ScalarTypeToCPPType< -// ::c10::ScalarType::SCALARTYPE3>::t), -// SCALARTYPE3) -// _(decltype(::c10::impl::ScalarTypeToCPPType< -// 
::c10::ScalarType::SCALARTYPE4>::t), -// SCALARTYPE4) -// _(decltype(::c10::impl::ScalarTypeToCPPType< -// ::c10::ScalarType::SCALARTYPE5>::t), -// SCALARTYPE5) - -// #define AT_FORALL_SCALAR_TYPES_AND6( -// SCALARTYPE1, -// SCALARTYPE2, -// SCALARTYPE3, -// SCALARTYPE4, -// SCALARTYPE5, -// SCALARTYPE6, -// _) -// _(uint8_t, Byte) -// _(int8_t, Char) -// _(int16_t, Short) -// _(int, Int) -// _(int64_t, Long) -// _(float, Float) -// _(double, Double) -// _(decltype(::c10::impl::ScalarTypeToCPPType< -// ::c10::ScalarType::SCALARTYPE1>::t), -// SCALARTYPE1) -// _(decltype(::c10::impl::ScalarTypeToCPPType< -// ::c10::ScalarType::SCALARTYPE2>::t), -// SCALARTYPE2) -// _(decltype(::c10::impl::ScalarTypeToCPPType< -// ::c10::ScalarType::SCALARTYPE3>::t), -// SCALARTYPE3) -// _(decltype(::c10::impl::ScalarTypeToCPPType< -// ::c10::ScalarType::SCALARTYPE4>::t), -// SCALARTYPE4) -// _(decltype(::c10::impl::ScalarTypeToCPPType< -// ::c10::ScalarType::SCALARTYPE5>::t), -// SCALARTYPE5) -// _(decltype(::c10::impl::ScalarTypeToCPPType< -// ::c10::ScalarType::SCALARTYPE6>::t), -// SCALARTYPE6) - // #define AT_FORALL_SCALAR_TYPES_AND7( // SCALARTYPE1, // SCALARTYPE2, @@ -6096,7 +6147,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include // #include -// Targeting ../SymNodeImpl.java +// Targeting ../SymNode.java @@ -6370,6 +6421,10 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // including float to integral overflow and signed to unsigned integer overflow. // Some of this undefined behavior is addressed below. +// Partial template specialization for casting to bool. +// Need to handle complex types separately, as we don't +// simply want to cast the real part to bool. + // Partial template instantiation for casting to uint8. // Note: Converting from negative float values to unsigned integer types is // undefined behavior in C++, and current CPU and GPU compilers exhibit @@ -7176,7 +7231,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // storage's DataPtr has some context (`DataPtr::get_context()`) which is not // equal to the data pointer (`DataPtr::get()`). In this case, a nullptr is // returned. 
-@Namespace("c10::impl::cow") public static native @ByVal StorageImplPtr lazy_clone_storage( +@Namespace("c10::impl::cow") public static native @IntrusivePtr("c10::StorageImpl") @Cast({"", "c10::intrusive_ptr&"}) StorageImpl lazy_clone_storage( @ByRef StorageImpl storage); // Check if a storage has a simple DataPtr with no abnormal context @@ -7213,6 +7268,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include // #include + +@Namespace("c10") public static native void throwNullDataPtrError(); +@Namespace("c10") public static native void warnDeprecatedDataPtr(); // Targeting ../StorageImpl.java @@ -7223,7 +7281,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { -@Namespace("c10") public static native @ByVal StorageImplPtr make_storage_impl( +@Namespace("c10") public static native @IntrusivePtr("c10::StorageImpl") @Cast({"", "c10::intrusive_ptr&"}) StorageImpl make_storage_impl( @ByVal StorageImpl.use_byte_size_t use_byte_size, @ByVal SymInt size_bytes, @StdMove DataPtr data_ptr, @@ -8170,6 +8228,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include +// #include // #include // #include // #include @@ -8224,51 +8283,11 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @Namespace("c10") public static native @Cast("bool") boolean InitCaffeLogging(int[] argc, @Cast("char**") @ByPtrPtr byte[] argv); @Namespace("c10") public static native void UpdateLoggingLevelsFromFlags(); -@Namespace("c10") public static native void ThrowEnforceNotMet( - @Cast("const char*") BytePointer file, - int line, - @Cast("const char*") BytePointer condition, - @StdString BytePointer msg, - @Const Pointer caller/*=nullptr*/); -@Namespace("c10") public static native void ThrowEnforceNotMet( - @Cast("const char*") BytePointer file, - int line, - @Cast("const char*") BytePointer condition, - @StdString BytePointer msg); -@Namespace("c10") public static native void ThrowEnforceNotMet( - String file, - int line, - String condition, - @StdString String msg, - @Const Pointer caller/*=nullptr*/); -@Namespace("c10") public static native void ThrowEnforceNotMet( - String file, - int line, - String condition, - @StdString String msg); -@Namespace("c10") public static native void ThrowEnforceNotMet( - @Cast("const char*") BytePointer file, - int line, - @Cast("const char*") BytePointer condition, - @ByVal CompileTimeEmptyString arg3, - @Const Pointer caller/*=nullptr*/); -@Namespace("c10") public static native void ThrowEnforceNotMet( - @Cast("const char*") BytePointer file, - int line, - @Cast("const char*") BytePointer condition, - @ByVal CompileTimeEmptyString arg3); -@Namespace("c10") public static native void ThrowEnforceNotMet( - String file, - int line, - String condition, - @ByVal CompileTimeEmptyString arg3, - @Const Pointer caller/*=nullptr*/); -@Namespace("c10") public static native void ThrowEnforceNotMet( - String file, - int line, - String condition, - @ByVal CompileTimeEmptyString arg3); + + + + @Namespace("c10") public static native void ThrowEnforceFiniteNotMet( @Cast("const char*") BytePointer file, @@ -8328,6 +8347,12 @@ public class torch extends org.bytedeco.pytorch.presets.torch { */ @Namespace("c10") public static native void ShowLogInfoToStderr(); +@Namespace("c10") public static native void SetStackTraceFetcher(@ByVal StackTraceFetcher fetcher); + +/** + * Convenience function for non-lazy stack trace fetchers. 
The Backtrace + * overload should be preferred when stringifying the backtrace is expensive. + */ @Namespace("c10") public static native void SetStackTraceFetcher(@ByVal StringSupplier fetcher); // #define CAFFE_ENFORCE(condition, ...) @@ -8903,13 +8928,10 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #pragma once -// #include +// #include // #include -// #include -// #include +// #include // #include -// #include -// #include // #include // #include @@ -9137,6 +9159,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include // #include +// #include // The PtrTraits argument to the TensorAccessor/GenericPackedTensorAccessor // is used to enable the __restrict__ keyword/modifier for the data @@ -9204,7 +9227,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { * * @param size size in bytes */ -@Namespace("at") public static native @ByVal StorageImplPtr new_shm_fd_storage(@Cast("size_t") long size); +@Namespace("at") public static native @IntrusivePtr("c10::StorageImpl") @Cast({"", "c10::intrusive_ptr&"}) StorageImpl new_shm_fd_storage(@Cast("size_t") long size); /** * Copy src to dst @@ -12339,12 +12362,6 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include // #include -// Targeting ../CompiledNodeArgs.java - - -// Targeting ../SwapSavedVariables.java - - // namespace torch::dynamo::autograd // A hook that's called on gradients @@ -12898,11 +12915,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #pragma once -// #include -// #include // #include -// #include -// #include // #include // #include @@ -12911,7 +12924,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { -@Namespace("caffe2") public static native void swap(@ByRef Blob lhs, @ByRef Blob rhs); +@Namespace("caffe2") public static native @NoException(true) void swap(@ByRef Blob lhs, @ByRef Blob rhs); @Namespace("caffe2") public static native @Cast("std::ostream*") @ByRef @Name("operator <<") Pointer shiftLeft(@Cast("std::ostream*") @ByRef Pointer out, @Const @ByRef Blob v); @@ -13219,6 +13232,12 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // Targeting ../GenericDictIterator.java +// Targeting ../StringGenericListDictIterator.java + + +// Targeting ../TensorTensorDictIterator.java + + // Targeting ../GenericDict.java @@ -13226,6 +13245,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // Targeting ../StringGenericListDict.java +// Targeting ../TensorTensorDict.java + + // GenericDict is how IValue stores dicts. It is, however, not part of the // public API. Kernels should use Dicts with concrete Key, Value types instead // (maybe except for some internal prim ops). @@ -13248,7 +13270,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // const reference (const T&); taking T by non-const reference // will result in an error like: // -// error: no type named 'type' in 'class std::result_of' +// error: no type named 'type' in 'class std::invoke_result' // // No explicit template parameters are required. 
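A side effect of the @IntrusivePtr migration visible in this hunk is that bindings such as lazy_clone_storage, make_storage_impl and new_shm_fd_storage now hand back StorageImpl itself instead of the former StorageImplPtr wrapper class. A minimal, hypothetical Java sketch of the new calling convention (the class, imports and printed check are illustrative boilerplate, not part of this patch):

    import org.bytedeco.pytorch.StorageImpl;
    import static org.bytedeco.pytorch.global.torch.*;

    public class StorageSketch {
        public static void main(String[] args) {
            // In 2.4.0 the binding returns StorageImpl directly (@IntrusivePtr);
            // 2.3.x returned the separate StorageImplPtr wrapper class.
            StorageImpl shm = new_shm_fd_storage(4096);
            System.out.println("allocated shared-memory storage: " + !shm.isNull());
        }
    }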
@@ -13281,10 +13303,8 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include // #include -// #include // #include - // namespace jit - // namespace torch + // namespace torch::jit @@ -13619,7 +13639,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #pragma once // #include -// #include +// #include // Targeting ../RRefInterface.java @@ -13644,30 +13664,23 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // Forward declaration /** - * Flags defining the behavior of events. + * Note [Flags defining the behavior of events] * * PYTORCH_DEFAULT and BACKEND_DEFAULT are valid for all backends. The * BACKEND_DEFAULT is what a particular backend would select if no * flags were given. PYTORCH_DEFAULT is the PyTorch's framework default - * choice for events on that backend, which may not be the same. For example, - * when PyTorch creates a CUDA event it sets the flag - * CUDA_EVENT_DISABLING_TIMING by default to improve performance. + * choice for events on that backend, which may not be the same. * * The mapping of PYTORCH_DEFAULT and BACKEND_DEFAULT is done by each - * backend implementation. Backend-specific flags, like CUDA_EVENT_DEFAULT, - * should map one-to-one with actual event flags for those backends. + * backend implementation. */ @Namespace("c10") public enum EventFlag { + // Disable timing PYTORCH_DEFAULT(0), + // Enable timing BACKEND_DEFAULT(1), - // CUDA flags - CUDA_EVENT_DEFAULT(2), - CUDA_EVENT_DISABLE_TIMING(3), // PyTorch-default for CUDA - // HIP flags - HIP_EVENT_DEFAULT(4), - HIP_EVENT_DISABLE_TIMING(5), // PyTorch-default for HIP // FOR TESTING ONLY - INVALID(6); + INVALID(2); public final int value; private EventFlag(int v) { this.value = v; } @@ -13952,12 +13965,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { * to reset the stream and device on exit. If you are in a situation * where you *might* want to setup a stream guard, see OptionalStreamGuard. */ +// Targeting ../OptionalStreamGuard.java + -/** - * An OptionalStreamGuard is an RAII class that sets a device to some value on - * initialization, and resets the device to its original value on destruction. - * See OptionalDeviceGuard for more guidance on how to use this class. - */ /** * A MultiStreamGuard is an RAII class that sets the current streams of a set of @@ -14003,162 +14013,6 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // namespace c10 -// Parsed from c10/util/intrusive_ptr.h - -// #pragma once - -// #include -// #include -// #include -// #include -// #include -// #include - -@Namespace("c10::raw::weak_intrusive_ptr") public static native void incref(@Cast("c10::intrusive_ptr_target*") Pointer self); - - -// Targeting ../DontIncreaseRefcount.java - - - // namespace raw -@Namespace("c10::detail") @MemberGetter public static native @Cast("const uint32_t") int kImpracticallyHugeReferenceCount(); - // namespace detail - -/** - * intrusive_ptr is an alternative to shared_ptr that has better - * performance because it does the refcounting intrusively - * (i.e. in a member of the object itself). - * Your class T needs to inherit from intrusive_ptr_target to allow it to be - * used in an intrusive_ptr. Your class's constructor should not allow - *{@code this} to escape to other threads or create an intrusive_ptr from {@code this}. 
- */ - -// Note [Stack allocated intrusive_ptr_target safety] -// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -// A well known problem with std::enable_shared_from_this is that it -// allows you to create a std::shared_ptr from a stack allocated object, -// which is totally bogus because the object will die once you return -// from the stack. In intrusive_ptr, we can detect that this has occurred, -// because we set the refcount/weakcount of objects which inherit from -// intrusive_ptr_target to zero, *unless* we can prove that the object -// was dynamically allocated (e.g., via make_intrusive). -// -// Thus, whenever you transmute a T* into a intrusive_ptr, we check -// and make sure that the refcount isn't zero (or, a more subtle -// test for weak_intrusive_ptr, for which the refcount may validly -// be zero, but the weak refcount better not be zero), because that -// tells us if the object was allocated by us. If it wasn't, no -// intrusive_ptr for you! - -// NOLINTNEXTLINE(cppcoreguidelines-virtual-class-destructor) - -// Increment needs to be acquire-release to make use_count() and -// unique() reliable. -@Namespace("c10::detail") public static native @Cast("uint32_t") int atomic_refcount_increment(@Cast("std::atomic*") @ByRef IntPointer refcount); - -// weak_use_count() is only used for testing, so we don't need it to -// be reliable. Relaxed should be fine. -@Namespace("c10::detail") public static native @Cast("uint32_t") int atomic_weakcount_increment(@Cast("std::atomic*") @ByRef IntPointer weakcount); - -// Both decrements need to be acquire-release for correctness. See -// e.g. std::shared_ptr implementation. -@Namespace("c10::detail") public static native @Cast("uint32_t") int atomic_refcount_decrement(@Cast("std::atomic*") @ByRef IntPointer refcount); - -@Namespace("c10::detail") public static native @Cast("uint32_t") int atomic_weakcount_decrement(@Cast("std::atomic*") @ByRef IntPointer weakcount); - - -// Targeting ../QuantizerPtr.java - - -// Targeting ../GeneratorImplPtr.java - - -// Targeting ../TuplePtr.java - - -// Targeting ../FuturePtr.java - - -// Targeting ../ConstantStringPtr.java - - -// Targeting ../AwaitPtr.java - - -// Targeting ../ObjPtr.java - - -// Targeting ../PyObjectHolderPtr.java - - -// Targeting ../EnumHolderPtr.java - - -// Targeting ../RRefInterfacePtr.java - - -// Targeting ../TensorImplPtr.java - - -// Targeting ../StorageImplPtr.java - - -// Targeting ../SymNode.java - - -// Targeting ../BackendMetaRef.java - - -// Targeting ../TreeRef.java - - - -// To allow intrusive_ptr inside std::map or std::set, we need operator< -// Targeting ../WeakStorage.java - - - -// To allow weak_intrusive_ptr inside std::map or std::set, we need operator< - -// Alias for documentary purposes, to more easily distinguish -// weak raw intrusive pointers from intrusive pointers. - -// This namespace provides some methods for working with -// raw pointers that subclass intrusive_ptr_target. They are not provided -// as methods on intrusive_ptr_target, because ideally you would not need these -// methods at all (use smart pointers), but if you are dealing with legacy code -// that still needs to pass around raw pointers, you may find these quite -// useful. -// -// An important usage note: some functions are only valid if you have a -// strong raw pointer to the object, while others are only valid if you -// have a weak raw pointer to the object. 
ONLY call intrusive_ptr namespace -// functions on strong pointers, and weak_intrusive_ptr namespace functions -// on weak pointers. If you mix it up, you may get an assert failure. - -// WARNING: Unlike the reclaim() API, it is NOT valid to pass -// NullType::singleton to this function - -// WARNING: Unlike the reclaim() API, it is NOT valid to pass -// NullType::singleton to this function -@Namespace("c10::raw::intrusive_ptr") public static native void decref(@Cast("c10::intrusive_ptr_target*") Pointer self); - -@Namespace("c10::raw::intrusive_ptr") public static native @Cast("uint32_t") int use_count(@Cast("c10::intrusive_ptr_target*") Pointer self); - - // namespace intrusive_ptr - -// This gives the STRONG refcount of a WEAK pointer - - // namespace weak_intrusive_ptr - - // namespace raw - - // namespace c10 -// To allow intrusive_ptr and weak_intrusive_ptr inside std::unordered_map or -// std::unordered_set, we need std::hash - // namespace std - - // Parsed from ATen/core/ivalue_inl.h // #pragma once @@ -14264,13 +14118,13 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // Input is a list of Futures with the same target type. // Output is a Future to the List of completed Futures. -@Namespace("c10") public static native @ByVal FuturePtr collectAll( - @Const @ByRef FuturePtrList srcs); +@Namespace("c10") public static native @IntrusivePtr("c10::ivalue::Future") @Cast({"", "c10::intrusive_ptr&"}) Future collectAll( + @Const @ByRef FutureList srcs); // Input is a List of Futures with the same target type. // Output is a Future that will be updated with a seen value. -@Namespace("c10") public static native @ByVal FuturePtr collectAny( - @Const @ByRef FuturePtrList srcs); -// Targeting ../Object.java +@Namespace("c10") public static native @IntrusivePtr("c10::ivalue::Future") @Cast({"", "c10::intrusive_ptr&"}) Future collectAny( + @Const @ByRef FutureList srcs); +// Targeting ../Obj.java // Targeting ../PyObjectHolder.java @@ -14537,7 +14391,6 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include // #include -// #include // #include // #include // #include @@ -14566,7 +14419,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // namespace ivalue -// This is an owning wrapper for a c10::optional> +// This is an owning wrapper for a std::optional> // that can be implicitly converted to a (non-owning) optional>. 
// Its purpose is to be used in generated code to keep the vector alive // either until the end of a statement (as a temporary), or as a saved arg @@ -14676,21 +14529,21 @@ public class torch extends org.bytedeco.pytorch.presets.torch { -@Namespace("c10::impl") public static native void swap(@ByRef(true) DoubleComplexElementReference lhs, @ByRef(true) DoubleComplexElementReference rhs); +@Namespace("c10::impl") public static native @NoException(true) void swap(@ByRef(true) DoubleComplexElementReference lhs, @ByRef(true) DoubleComplexElementReference rhs); -@Namespace("c10::impl") public static native void swap(@ByRef(true) BooleanElementReference lhs, @ByRef(true) BooleanElementReference rhs); +@Namespace("c10::impl") public static native @NoException(true) void swap(@ByRef(true) BooleanElementReference lhs, @ByRef(true) BooleanElementReference rhs); -@Namespace("c10::impl") public static native void swap(@ByRef(true) LongElementReference lhs, @ByRef(true) LongElementReference rhs); +@Namespace("c10::impl") public static native @NoException(true) void swap(@ByRef(true) LongElementReference lhs, @ByRef(true) LongElementReference rhs); -@Namespace("c10::impl") public static native void swap(@ByRef(true) DoubleElementReference lhs, @ByRef(true) DoubleElementReference rhs); +@Namespace("c10::impl") public static native @NoException(true) void swap(@ByRef(true) DoubleElementReference lhs, @ByRef(true) DoubleElementReference rhs); -@Namespace("c10::impl") public static native void swap(@ByRef(true) TensorOptionalElementReference lhs, @ByRef(true) TensorOptionalElementReference rhs); +@Namespace("c10::impl") public static native @NoException(true) void swap(@ByRef(true) TensorOptionalElementReference lhs, @ByRef(true) TensorOptionalElementReference rhs); -@Namespace("c10::impl") public static native void swap(@ByRef(true) TensorElementReference lhs, @ByRef(true) TensorElementReference rhs); +@Namespace("c10::impl") public static native @NoException(true) void swap(@ByRef(true) TensorElementReference lhs, @ByRef(true) TensorElementReference rhs); -@Namespace("c10::impl") public static native void swap(@ByRef(true) FuturePtrElementReference lhs, @ByRef(true) FuturePtrElementReference rhs); +@Namespace("c10::impl") public static native @NoException(true) void swap(@ByRef(true) FutureElementReference lhs, @ByRef(true) FutureElementReference rhs); -@Namespace("c10::impl") public static native void swap(@ByRef(true) GenericElementReference lhs, @ByRef(true) GenericElementReference rhs); +@Namespace("c10::impl") public static native @NoException(true) void swap(@ByRef(true) GenericElementReference lhs, @ByRef(true) GenericElementReference rhs); @@ -14779,7 +14632,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { -// There is no to() overload for c10::optional. +// There is no to() overload for std::optional. 
// Targeting ../DoubleComplexElementReference.java @@ -14798,7 +14651,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // Targeting ../TensorElementReference.java -// Targeting ../FuturePtrElementReference.java +// Targeting ../FutureElementReference.java // Targeting ../GenericElementReference.java @@ -14822,7 +14675,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // Targeting ../TensorListIterator.java -// Targeting ../FuturePtrListIterator.java +// Targeting ../FutureListIterator.java // Targeting ../GenericListIterator.java @@ -14847,7 +14700,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // Targeting ../TensorList.java -// Targeting ../FuturePtrList.java +// Targeting ../FutureList.java // Targeting ../GenericList.java @@ -14906,8 +14759,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { * this method ourselves. */ - // namespace detail - // namespace c10 + // namespace c10::detail // [Note: ITensorListRef] // [Note: IOptTensorListRef] @@ -15341,7 +15193,6 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // dimension behavior and dimension size checking). We maintain this behavior // for backwards compatibility, but only for this specific size (i.e. other // empty sizes are not skipped). - @Namespace("at") public static native @Cast("int64_t") long legacy_cat_wrap_dim( @Cast("int64_t") long dim, @Cast("std::vector*") @StdVector LongVector tensor_sizes); @@ -15659,8 +15510,8 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #pragma once -// #include // #include +// #include // A little explanation about why this file exists at all. We have // a few methods on Tensor class which require access to reified access to @@ -15795,7 +15646,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @Namespace("torch::autograd") public static native void backward( @Const @ByRef TensorVector tensors, @Const @ByRef(nullValue = "torch::autograd::variable_list{}") TensorVector grad_tensors, - @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional retain_graph, + @ByVal(nullValue = "std::optional(c10::nullopt)") BoolOptional retain_graph, @Cast("bool") boolean create_graph/*=false*/, @Const @ByRef(nullValue = "torch::autograd::variable_list{}") TensorVector inputs); @Namespace("torch::autograd") public static native void backward( @@ -15831,7 +15682,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @Const @ByRef TensorVector outputs, @Const @ByRef TensorVector inputs, @Const @ByRef(nullValue = "torch::autograd::variable_list{}") TensorVector grad_outputs, - @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional retain_graph, + @ByVal(nullValue = "std::optional(c10::nullopt)") BoolOptional retain_graph, @Cast("bool") boolean create_graph/*=false*/, @Cast("bool") boolean allow_unused/*=false*/); @Namespace("torch::autograd") public static native @ByVal TensorVector grad( @@ -16390,8 +16241,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // pack takes the return values of aten functions pushes them onto the stack - // namespace jit - // namespace torch + // namespace torch::jit // Parsed from ATen/core/boxing/impl/boxing.h @@ -16906,6 +16756,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // Targeting ../SafePyObject.java +// Targeting ../PyObject_TorchDispatchMode.java + + // Targeting ../SafePyHandle.java @@ -16933,9 +16786,6 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // 
#pragma once -// #include -// #include -// #include // #include // #include @@ -17153,9 +17003,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @Namespace("at") public static native @ByVal @Cast("at::StepCallbacks*") Pointer getStepCallbacks(RecordScope scope); @Namespace("at") public static native @ByVal @Cast("at::StepCallbacks*") Pointer getStepCallbacks(@Cast("at::RecordScope") byte scope); -@Namespace("at") public static native @ByVal @Cast("c10::optional*") Pointer getStepCallbacksUnlessEmpty( +@Namespace("at") public static native @ByVal @Cast("std::optional*") Pointer getStepCallbacksUnlessEmpty( RecordScope scope); -@Namespace("at") public static native @ByVal @Cast("c10::optional*") Pointer getStepCallbacksUnlessEmpty( +@Namespace("at") public static native @ByVal @Cast("std::optional*") Pointer getStepCallbacksUnlessEmpty( @Cast("at::RecordScope") byte scope); // namespace detail @@ -17574,13 +17424,15 @@ public class torch extends org.bytedeco.pytorch.presets.torch { -@Namespace("torch::jit") public static native void preoptimizeGraph(@SharedPtr("torch::jit::Graph") @ByRef Graph graph, @Cast("bool") boolean disable_autocast/*=false*/); -@Namespace("torch::jit") public static native void preoptimizeGraph(@SharedPtr("torch::jit::Graph") @ByRef Graph graph); +@Namespace("torch::jit") public static native void preoptimizeGraph( + @SharedPtr("torch::jit::Graph") @ByRef Graph graph, + @Cast("bool") boolean disable_autocast/*=false*/); +@Namespace("torch::jit") public static native void preoptimizeGraph( + @SharedPtr("torch::jit::Graph") @ByRef Graph graph); // Targeting ../Function.java - // namespace jit - // namespace torch + // namespace torch::jit // Parsed from ATen/core/class_type.h @@ -17592,8 +17444,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include // #include - // namespace jit - // namespace torch + // namespace torch::jit // This enumerator represents the 'kind' of an attribute - a buffer, a parameter, or neither. // This state is mutually exclusive. Buffers and Parameters can only appear on modules. @@ -17775,7 +17626,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { * \ingroup torch-schema-overloads */ /// +@Namespace("torch") public static native @ByVal FunctionSchema schema(@Cast("const char*") BytePointer str, AliasAnalysisKind k, @Cast("bool") boolean allow_typevars/*=false*/); @Namespace("torch") public static native @ByVal FunctionSchema schema(@Cast("const char*") BytePointer str, AliasAnalysisKind k); +@Namespace("torch") public static native @ByVal FunctionSchema schema(String str, @Cast("c10::AliasAnalysisKind") byte k, @Cast("bool") boolean allow_typevars/*=false*/); @Namespace("torch") public static native @ByVal FunctionSchema schema(String str, @Cast("c10::AliasAnalysisKind") byte k); /** Function schemas can be directly constructed from string literals. 
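The schema() overloads in this hunk and the one that follows gain a trailing allow_typevars flag, which defaults to false on the C++ side. A hedged sketch of constructing a schema from a string with the new flag passed explicitly, assuming the usual static import of org.bytedeco.pytorch.global.torch and a hypothetical operator name:

    // The trailing boolean is the new allow_typevars flag; omit it to keep the old behaviour.
    FunctionSchema s = schema("my_ns::my_op(Tensor self, int dim) -> Tensor", true);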
@@ -17784,7 +17637,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { /// /// +@Namespace("torch") public static native @ByVal FunctionSchema schema(@Cast("const char*") BytePointer s, @Cast("bool") boolean allow_typevars/*=false*/); @Namespace("torch") public static native @ByVal FunctionSchema schema(@Cast("const char*") BytePointer s); +@Namespace("torch") public static native @ByVal FunctionSchema schema(String s, @Cast("bool") boolean allow_typevars/*=false*/); @Namespace("torch") public static native @ByVal FunctionSchema schema(String s); /** \private @@ -18391,6 +18246,32 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // namespace torch::autograd +// Parsed from ATen/BlasBackend.h + +// #pragma once + +// #include + +// #include +// #include + +@Namespace("at") public enum BlasBackend { Cublas((byte)(0)), Cublaslt((byte)(1)); + + public final byte value; + private BlasBackend(byte v) { this.value = v; } + private BlasBackend(BlasBackend e) { this.value = e.value; } + public BlasBackend intern() { for (BlasBackend e : values()) if (e.value == value) return e; return this; } + @Override public String toString() { return intern().name(); } +} + +@Namespace("at") public static native @StdString BytePointer BlasBackendToString(BlasBackend backend); +@Namespace("at") public static native @StdString String BlasBackendToString(@Cast("at::BlasBackend") byte backend); + +@Namespace("at") public static native @Cast("std::ostream*") @ByRef @Name("operator <<") Pointer shiftLeft(@Cast("std::ostream*") @ByRef Pointer stream, BlasBackend backend); + + // namespace at + + // Parsed from ATen/core/MT19937RNGEngine.h // #pragma once @@ -18450,6 +18331,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #pragma once // #include +// #include // Targeting ../AcceleratorHooksInterface.java @@ -18461,8 +18343,10 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #pragma once +// #include // #include +// #include // #include // #include @@ -18477,6 +18361,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #define REGISTER_MTIA_HOOKS(clsname) // C10_REGISTER_CLASS(MTIAHooksRegistry, clsname, clsname) @Namespace("at::detail") public static native @Const @ByRef MTIAHooksInterface getMTIAHooks(); +@Namespace("at::detail") public static native @Cast("bool") boolean isMTIAHooksBuilt(); // namespace detail // namespace at @@ -18505,8 +18390,8 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // Ensures that only one accelerator is available (at // compile time if possible) and return it. // When checked is true, the returned optional always has a value. 
-@Namespace("at") public static native @Optional @Cast("c10::DeviceType*") BytePointer getAccelerator(@Cast("bool") boolean checked/*=false*/); -@Namespace("at") public static native @Optional @Cast("c10::DeviceType*") BytePointer getAccelerator(); +@Namespace("at") public static native @ByVal DeviceTypeOptional getAccelerator(@Cast("bool") boolean checked/*=false*/); +@Namespace("at") public static native @ByVal DeviceTypeOptional getAccelerator(); // namespace at @@ -18616,7 +18501,6 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include -// #include // #include @@ -18683,23 +18567,21 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // namespace at -// Parsed from ATen/detail/ORTHooksInterface.h +// Parsed from ATen/detail/MAIAHooksInterface.h // #pragma once // #include // #include - -@MemberGetter public static native @Cast("const char*") BytePointer ORT_HELP(); -// Targeting ../ORTHooksInterface.java +// Targeting ../MAIAHooksInterface.java -// Targeting ../ORTHooksArgs.java +// Targeting ../MAIAHooksArgs.java -// #define REGISTER_ORT_HOOKS(clsname) -// C10_REGISTER_CLASS(ORTHooksRegistry, clsname, clsname) -@Namespace("at::detail") public static native @Const @ByRef ORTHooksInterface getORTHooks(); +// #define REGISTER_MAIA_HOOKS(clsname) +// C10_REGISTER_CLASS(MAIAHooksRegistry, clsname, clsname) +@Namespace("at::detail") public static native @Const @ByRef MAIAHooksInterface getMAIAHooks(); // namespace detail // namespace at @@ -18745,10 +18627,6 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include -// #include -// #include -// #include - @Namespace("at") @MemberGetter public static native @Cast("const char*") BytePointer XPU_HELP(); // Targeting ../XPUHooksInterface.java @@ -18828,8 +18706,8 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // // NB: // Issues a warning if the value of the environment variable is not 0 or 1. 
-@Namespace("c10::utils") public static native @Cast("bool*") @Optional BoolPointer check_env(@Cast("const char*") BytePointer name); -@Namespace("c10::utils") public static native @Cast("bool*") @Optional boolean[] check_env(String name); +@Namespace("c10::utils") public static native @ByVal BoolOptional check_env(@Cast("const char*") BytePointer name); +@Namespace("c10::utils") public static native @ByVal BoolOptional check_env(String name); // namespace c10::utils @@ -18837,6 +18715,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #pragma once +// #include // #include // #include // #include @@ -18848,9 +18727,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include // #include +// #include // #include // #include -// #include // #include // #include // #include @@ -18893,7 +18772,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @Namespace("at") public static native @Cast("bool") boolean hasMPS(); -@Namespace("at") public static native @Cast("bool") boolean hasORT(); +@Namespace("at") public static native @Cast("bool") boolean hasMAIA(); @Namespace("at") public static native @Cast("bool") boolean hasXPU(); @@ -19029,6 +18908,13 @@ public class torch extends org.bytedeco.pytorch.presets.torch { ScalarType scalar_type, @ByVal MemoryFormatOptional memory_format_opt); +@Namespace("at::detail") public static native @ByVal TensorBase empty_generic_symint( + @ByVal SymIntArrayRef size, + Allocator allocator, + @ByVal DispatchKeySet ks, + ScalarType scalar_type, + @ByVal MemoryFormatOptional memory_format_opt); + @Namespace("at::detail") public static native @ByVal TensorBase empty_strided_generic( @ByVal LongArrayRef size, @ByVal LongArrayRef stride, @@ -19053,7 +18939,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @ByVal LongArrayRef size, ScalarType dtype, @Cast("bool") boolean pin_memory/*=false*/, - @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format_opt); + @ByVal(nullValue = "std::optional(c10::nullopt)") MemoryFormatOptional memory_format_opt); @Namespace("at::detail") public static native @ByVal TensorBase empty_cpu( @ByVal LongArrayRef size, ScalarType dtype); @@ -19061,7 +18947,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, ScalarType dtype, @Cast("bool") boolean pin_memory/*=false*/, - @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format_opt); + @ByVal(nullValue = "std::optional(c10::nullopt)") MemoryFormatOptional memory_format_opt); @Namespace("at::detail") public static native @ByVal TensorBase empty_cpu( @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, ScalarType dtype); @@ -19130,14 +19016,14 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @Namespace("at::detail") public static native @ByVal TensorBase empty_meta( @ByVal LongArrayRef size, ScalarType dtype, - @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format_opt); + @ByVal(nullValue = "std::optional(c10::nullopt)") MemoryFormatOptional memory_format_opt); @Namespace("at::detail") public static native @ByVal TensorBase empty_meta( @ByVal LongArrayRef size, ScalarType dtype); @Namespace("at::detail") public static native @ByVal TensorBase empty_meta( @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, ScalarType 
dtype, - @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format_opt); + @ByVal(nullValue = "std::optional(c10::nullopt)") MemoryFormatOptional memory_format_opt); @Namespace("at::detail") public static native @ByVal TensorBase empty_meta( @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, ScalarType dtype); @@ -19735,9 +19621,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // These constants control the reduction behavior of loss functions. // Ideally, this would be a scoped enum, but jit doesn't support that @Namespace("at::Reduction") public enum Reduction { - None(0), // Do not reduce - Mean(1), // (Possibly weighted) mean of losses - Sum(2), // Sum losses + None(0), // Do not reduce + Mean(1), // (Possibly weighted) mean of losses + Sum(2), // Sum losses END(3); public final int value; @@ -19746,8 +19632,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { public Reduction intern() { for (Reduction e : values()) if (e.value == value) return e; return this; } @Override public String toString() { return intern().name(); } } - // namespace Reduction - // namespace at + // namespace at::Reduction // Parsed from ATen/ops/abs.h @@ -21031,11 +20916,11 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::aminmax(Tensor self, *, int? dim=None, bool keepdim=False) -> (Tensor min, Tensor max) -@Namespace("at") public static native @ByVal T_TensorTensor_T aminmax(@Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/); +@Namespace("at") public static native @ByVal T_TensorTensor_T aminmax(@Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/); @Namespace("at") public static native @ByVal T_TensorTensor_T aminmax(@Const @ByRef Tensor self); // aten::aminmax.out(Tensor self, *, int? dim=None, bool keepdim=False, Tensor(a!) min, Tensor(b!) max) -> (Tensor(a!) min, Tensor(b!) max) -@Namespace("at") public static native @ByVal T_TensorTensor_T aminmax_out(@ByRef Tensor min, @ByRef Tensor max, @Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/); +@Namespace("at") public static native @ByVal T_TensorTensor_T aminmax_out(@ByRef Tensor min, @ByRef Tensor max, @Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/); @Namespace("at") public static native @ByVal T_TensorTensor_T aminmax_out(@ByRef Tensor min, @ByRef Tensor max, @Const @ByRef Tensor self); // aten::aminmax.out(Tensor self, *, int? dim=None, bool keepdim=False, Tensor(a!) min, Tensor(b!) max) -> (Tensor(a!) min, Tensor(b!) max) @Namespace("at") public static native @ByVal T_TensorTensor_T aminmax_outf(@Const @ByRef Tensor self, @ByVal LongOptional dim, @Cast("bool") boolean keepdim, @ByRef Tensor min, @ByRef Tensor max); @@ -21524,11 +21409,11 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::argmax(Tensor self, int? 
dim=None, bool keepdim=False) -> Tensor -@Namespace("at") public static native @ByVal Tensor argmax(@Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/); +@Namespace("at") public static native @ByVal Tensor argmax(@Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/); @Namespace("at") public static native @ByVal Tensor argmax(@Const @ByRef Tensor self); // aten::argmax.out(Tensor self, int? dim=None, bool keepdim=False, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor argmax_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/); +@Namespace("at") public static native @ByRef Tensor argmax_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/); @Namespace("at") public static native @ByRef Tensor argmax_out(@ByRef Tensor out, @Const @ByRef Tensor self); // aten::argmax.out(Tensor self, int? dim=None, bool keepdim=False, *, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor argmax_outf(@Const @ByRef Tensor self, @ByVal LongOptional dim, @Cast("bool") boolean keepdim, @ByRef Tensor out); @@ -21561,11 +21446,11 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::argmin(Tensor self, int? dim=None, bool keepdim=False) -> Tensor -@Namespace("at") public static native @ByVal Tensor argmin(@Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/); +@Namespace("at") public static native @ByVal Tensor argmin(@Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/); @Namespace("at") public static native @ByVal Tensor argmin(@Const @ByRef Tensor self); // aten::argmin.out(Tensor self, int? dim=None, bool keepdim=False, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor argmin_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/); +@Namespace("at") public static native @ByRef Tensor argmin_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/); @Namespace("at") public static native @ByRef Tensor argmin_out(@ByRef Tensor out, @Const @ByRef Tensor self); // aten::argmin.out(Tensor self, int? dim=None, bool keepdim=False, *, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor argmin_outf(@Const @ByRef Tensor self, @ByVal LongOptional dim, @Cast("bool") boolean keepdim, @ByRef Tensor out); @@ -21673,26 +21558,26 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::as_strided(Tensor(a) self, SymInt[] size, SymInt[] stride, SymInt? 
storage_offset=None) -> Tensor(a) -@Namespace("at") public static native @ByVal Tensor as_strided(@Const @ByRef Tensor self, @ByVal LongArrayRef size, @ByVal LongArrayRef stride, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional storage_offset); +@Namespace("at") public static native @ByVal Tensor as_strided(@Const @ByRef Tensor self, @ByVal LongArrayRef size, @ByVal LongArrayRef stride, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional storage_offset); @Namespace("at") public static native @ByVal Tensor as_strided(@Const @ByRef Tensor self, @ByVal LongArrayRef size, @ByVal LongArrayRef stride); -@Namespace("at") public static native @ByVal Tensor as_strided(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional storage_offset); +@Namespace("at") public static native @ByVal Tensor as_strided(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional storage_offset); @Namespace("at") public static native @ByVal Tensor as_strided(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... stride); // aten::as_strided(Tensor(a) self, SymInt[] size, SymInt[] stride, SymInt? storage_offset=None) -> Tensor(a) -@Namespace("at") public static native @ByVal Tensor as_strided_symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef size, @ByVal SymIntArrayRef stride, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional storage_offset); +@Namespace("at") public static native @ByVal Tensor as_strided_symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef size, @ByVal SymIntArrayRef stride, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional storage_offset); @Namespace("at") public static native @ByVal Tensor as_strided_symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef size, @ByVal SymIntArrayRef stride); // aten::as_strided_(Tensor(a!) self, SymInt[] size, SymInt[] stride, SymInt? storage_offset=None) -> Tensor(a!) 
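The reduction-style ops above keep their Java shape; only the default-argument annotations move from c10::optional to std::optional. A hedged usage sketch for aminmax and argmax, assuming a Tensor t created elsewhere, the static import of org.bytedeco.pytorch.global.torch, the generated tuple accessors get0()/get1(), and a long-valued LongOptional constructor:

    T_TensorTensor_T mm = aminmax(t);                     // min and max over the whole tensor
    Tensor min = mm.get0(), max = mm.get1();
    Tensor idx = argmax(t, new LongOptional(1), false);   // argmax along dim 1, dim not kept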
-@Namespace("at") public static native @Const @ByRef Tensor as_strided_(@Const @ByRef Tensor self, @ByVal LongArrayRef size, @ByVal LongArrayRef stride, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional storage_offset); +@Namespace("at") public static native @Const @ByRef Tensor as_strided_(@Const @ByRef Tensor self, @ByVal LongArrayRef size, @ByVal LongArrayRef stride, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional storage_offset); @Namespace("at") public static native @Const @ByRef Tensor as_strided_(@Const @ByRef Tensor self, @ByVal LongArrayRef size, @ByVal LongArrayRef stride); -@Namespace("at") public static native @Const @ByRef Tensor as_strided_(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional storage_offset); +@Namespace("at") public static native @Const @ByRef Tensor as_strided_(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional storage_offset); @Namespace("at") public static native @Const @ByRef Tensor as_strided_(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... stride); // aten::as_strided_(Tensor(a!) self, SymInt[] size, SymInt[] stride, SymInt? storage_offset=None) -> Tensor(a!) -@Namespace("at") public static native @Const @ByRef Tensor as_strided__symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef size, @ByVal SymIntArrayRef stride, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional storage_offset); +@Namespace("at") public static native @Const @ByRef Tensor as_strided__symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef size, @ByVal SymIntArrayRef stride, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional storage_offset); @Namespace("at") public static native @Const @ByRef Tensor as_strided__symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef size, @ByVal SymIntArrayRef stride); @@ -21724,21 +21609,21 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::as_strided_copy(Tensor self, SymInt[] size, SymInt[] stride, SymInt? 
storage_offset=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor as_strided_copy(@Const @ByRef Tensor self, @ByVal LongArrayRef size, @ByVal LongArrayRef stride, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional storage_offset); +@Namespace("at") public static native @ByVal Tensor as_strided_copy(@Const @ByRef Tensor self, @ByVal LongArrayRef size, @ByVal LongArrayRef stride, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional storage_offset); @Namespace("at") public static native @ByVal Tensor as_strided_copy(@Const @ByRef Tensor self, @ByVal LongArrayRef size, @ByVal LongArrayRef stride); -@Namespace("at") public static native @ByVal Tensor as_strided_copy(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional storage_offset); +@Namespace("at") public static native @ByVal Tensor as_strided_copy(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional storage_offset); @Namespace("at") public static native @ByVal Tensor as_strided_copy(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... stride); // aten::as_strided_copy(Tensor self, SymInt[] size, SymInt[] stride, SymInt? storage_offset=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor as_strided_copy_symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef size, @ByVal SymIntArrayRef stride, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional storage_offset); +@Namespace("at") public static native @ByVal Tensor as_strided_copy_symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef size, @ByVal SymIntArrayRef stride, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional storage_offset); @Namespace("at") public static native @ByVal Tensor as_strided_copy_symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef size, @ByVal SymIntArrayRef stride); // aten::as_strided_copy.out(Tensor self, SymInt[] size, SymInt[] stride, SymInt? storage_offset=None, *, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor as_strided_copy_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRef size, @ByVal LongArrayRef stride, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional storage_offset); +@Namespace("at") public static native @ByRef Tensor as_strided_copy_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRef size, @ByVal LongArrayRef stride, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional storage_offset); @Namespace("at") public static native @ByRef Tensor as_strided_copy_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRef size, @ByVal LongArrayRef stride); -@Namespace("at") public static native @ByRef Tensor as_strided_copy_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional storage_offset); +@Namespace("at") public static native @ByRef Tensor as_strided_copy_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional storage_offset); @Namespace("at") public static native @ByRef Tensor as_strided_copy_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... stride); @@ -21748,7 +21633,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::as_strided_copy.out(Tensor self, SymInt[] size, SymInt[] stride, SymInt? storage_offset=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor as_strided_copy_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal SymIntArrayRef size, @ByVal SymIntArrayRef stride, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional storage_offset); +@Namespace("at") public static native @ByRef Tensor as_strided_copy_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal SymIntArrayRef size, @ByVal SymIntArrayRef stride, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional storage_offset); @Namespace("at") public static native @ByRef Tensor as_strided_copy_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal SymIntArrayRef size, @ByVal SymIntArrayRef stride); @@ -21784,21 +21669,21 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::as_strided_scatter(Tensor self, Tensor src, SymInt[] size, SymInt[] stride, SymInt? 
storage_offset=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor as_strided_scatter(@Const @ByRef Tensor self, @Const @ByRef Tensor src, @ByVal LongArrayRef size, @ByVal LongArrayRef stride, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional storage_offset); +@Namespace("at") public static native @ByVal Tensor as_strided_scatter(@Const @ByRef Tensor self, @Const @ByRef Tensor src, @ByVal LongArrayRef size, @ByVal LongArrayRef stride, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional storage_offset); @Namespace("at") public static native @ByVal Tensor as_strided_scatter(@Const @ByRef Tensor self, @Const @ByRef Tensor src, @ByVal LongArrayRef size, @ByVal LongArrayRef stride); -@Namespace("at") public static native @ByVal Tensor as_strided_scatter(@Const @ByRef Tensor self, @Const @ByRef Tensor src, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional storage_offset); +@Namespace("at") public static native @ByVal Tensor as_strided_scatter(@Const @ByRef Tensor self, @Const @ByRef Tensor src, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional storage_offset); @Namespace("at") public static native @ByVal Tensor as_strided_scatter(@Const @ByRef Tensor self, @Const @ByRef Tensor src, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... stride); // aten::as_strided_scatter(Tensor self, Tensor src, SymInt[] size, SymInt[] stride, SymInt? storage_offset=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor as_strided_scatter_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor src, @ByVal SymIntArrayRef size, @ByVal SymIntArrayRef stride, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional storage_offset); +@Namespace("at") public static native @ByVal Tensor as_strided_scatter_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor src, @ByVal SymIntArrayRef size, @ByVal SymIntArrayRef stride, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional storage_offset); @Namespace("at") public static native @ByVal Tensor as_strided_scatter_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor src, @ByVal SymIntArrayRef size, @ByVal SymIntArrayRef stride); // aten::as_strided_scatter.out(Tensor self, Tensor src, SymInt[] size, SymInt[] stride, SymInt? storage_offset=None, *, Tensor(a!) out) -> Tensor(a!) 
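The as_strided family above follows the same pattern: the long[]/LongArrayRef overloads are unchanged and only the optional storage_offset default is spelled differently. A small sketch under the same assumed imports as the argmax example; the shapes and strides are illustrative only:

Tensor base = torch.randn(6);                                          // 6 contiguous elements
Tensor view = torch.as_strided(base, new long[]{2, 3}, new long[]{3, 1});   // 2x3 view, default storage_offset
Tensor shifted = torch.as_strided(base, new long[]{2, 2}, new long[]{2, 1},
        new LongOptional(1));                                          // explicit storage_offset of 1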
-@Namespace("at") public static native @ByRef Tensor as_strided_scatter_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor src, @ByVal LongArrayRef size, @ByVal LongArrayRef stride, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional storage_offset); +@Namespace("at") public static native @ByRef Tensor as_strided_scatter_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor src, @ByVal LongArrayRef size, @ByVal LongArrayRef stride, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional storage_offset); @Namespace("at") public static native @ByRef Tensor as_strided_scatter_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor src, @ByVal LongArrayRef size, @ByVal LongArrayRef stride); -@Namespace("at") public static native @ByRef Tensor as_strided_scatter_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor src, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional storage_offset); +@Namespace("at") public static native @ByRef Tensor as_strided_scatter_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor src, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional storage_offset); @Namespace("at") public static native @ByRef Tensor as_strided_scatter_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor src, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... stride); @@ -21808,7 +21693,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::as_strided_scatter.out(Tensor self, Tensor src, SymInt[] size, SymInt[] stride, SymInt? storage_offset=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor as_strided_scatter_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor src, @ByVal SymIntArrayRef size, @ByVal SymIntArrayRef stride, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional storage_offset); +@Namespace("at") public static native @ByRef Tensor as_strided_scatter_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor src, @ByVal SymIntArrayRef size, @ByVal SymIntArrayRef stride, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional storage_offset); @Namespace("at") public static native @ByRef Tensor as_strided_scatter_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor src, @ByVal SymIntArrayRef size, @ByVal SymIntArrayRef stride); @@ -22166,18 +22051,18 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::avg_pool2d.out(Tensor self, int[2] kernel_size, int[2] stride=[], int[2] padding=0, bool ceil_mode=False, bool count_include_pad=True, int? divisor_override=None, *, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor avg_pool2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRef kernel_size, @ByVal(nullValue = "at::IntArrayRef{}") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @Cast("bool") boolean ceil_mode/*=false*/, @Cast("bool") boolean count_include_pad/*=true*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional divisor_override); +@Namespace("at") public static native @ByRef Tensor avg_pool2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRef kernel_size, @ByVal(nullValue = "at::IntArrayRef{}") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @Cast("bool") boolean ceil_mode/*=false*/, @Cast("bool") boolean count_include_pad/*=true*/, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional divisor_override); @Namespace("at") public static native @ByRef Tensor avg_pool2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRef kernel_size); -@Namespace("at") public static native @ByRef Tensor avg_pool2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] kernel_size, @ByVal(nullValue = "at::IntArrayRef{}") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @Cast("bool") boolean ceil_mode/*=false*/, @Cast("bool") boolean count_include_pad/*=true*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional divisor_override); +@Namespace("at") public static native @ByRef Tensor avg_pool2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] kernel_size, @ByVal(nullValue = "at::IntArrayRef{}") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @Cast("bool") boolean ceil_mode/*=false*/, @Cast("bool") boolean count_include_pad/*=true*/, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional divisor_override); @Namespace("at") public static native @ByRef Tensor avg_pool2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... kernel_size); // aten::avg_pool2d.out(Tensor self, int[2] kernel_size, int[2] stride=[], int[2] padding=0, bool ceil_mode=False, bool count_include_pad=True, int? divisor_override=None, *, Tensor(a!) out) -> Tensor(a!) 
@Namespace("at") public static native @ByRef Tensor avg_pool2d_outf(@Const @ByRef Tensor self, @ByVal LongArrayRef kernel_size, @ByVal LongArrayRef stride, @ByVal LongArrayRef padding, @Cast("bool") boolean ceil_mode, @Cast("bool") boolean count_include_pad, @ByVal LongOptional divisor_override, @ByRef Tensor out); @Namespace("at") public static native @ByRef Tensor avg_pool2d_outf(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] kernel_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @Cast("bool") boolean ceil_mode, @Cast("bool") boolean count_include_pad, @ByVal LongOptional divisor_override, @ByRef Tensor out); // aten::avg_pool2d(Tensor self, int[2] kernel_size, int[2] stride=[], int[2] padding=0, bool ceil_mode=False, bool count_include_pad=True, int? divisor_override=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor avg_pool2d(@Const @ByRef Tensor self, @ByVal LongArrayRef kernel_size, @ByVal(nullValue = "at::IntArrayRef{}") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @Cast("bool") boolean ceil_mode/*=false*/, @Cast("bool") boolean count_include_pad/*=true*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional divisor_override); +@Namespace("at") public static native @ByVal Tensor avg_pool2d(@Const @ByRef Tensor self, @ByVal LongArrayRef kernel_size, @ByVal(nullValue = "at::IntArrayRef{}") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @Cast("bool") boolean ceil_mode/*=false*/, @Cast("bool") boolean count_include_pad/*=true*/, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional divisor_override); @Namespace("at") public static native @ByVal Tensor avg_pool2d(@Const @ByRef Tensor self, @ByVal LongArrayRef kernel_size); -@Namespace("at") public static native @ByVal Tensor avg_pool2d(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] kernel_size, @ByVal(nullValue = "at::IntArrayRef{}") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @Cast("bool") boolean ceil_mode/*=false*/, @Cast("bool") boolean count_include_pad/*=true*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional divisor_override); +@Namespace("at") public static native @ByVal Tensor avg_pool2d(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] kernel_size, @ByVal(nullValue = "at::IntArrayRef{}") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @Cast("bool") boolean ceil_mode/*=false*/, @Cast("bool") boolean count_include_pad/*=true*/, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional divisor_override); @Namespace("at") public static native @ByVal Tensor avg_pool2d(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... 
kernel_size); @@ -22246,18 +22131,18 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::avg_pool3d.out(Tensor self, int[3] kernel_size, int[3] stride=[], int[3] padding=0, bool ceil_mode=False, bool count_include_pad=True, int? divisor_override=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor avg_pool3d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRef kernel_size, @ByVal(nullValue = "at::IntArrayRef{}") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @Cast("bool") boolean ceil_mode/*=false*/, @Cast("bool") boolean count_include_pad/*=true*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional divisor_override); +@Namespace("at") public static native @ByRef Tensor avg_pool3d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRef kernel_size, @ByVal(nullValue = "at::IntArrayRef{}") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @Cast("bool") boolean ceil_mode/*=false*/, @Cast("bool") boolean count_include_pad/*=true*/, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional divisor_override); @Namespace("at") public static native @ByRef Tensor avg_pool3d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRef kernel_size); -@Namespace("at") public static native @ByRef Tensor avg_pool3d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] kernel_size, @ByVal(nullValue = "at::IntArrayRef{}") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @Cast("bool") boolean ceil_mode/*=false*/, @Cast("bool") boolean count_include_pad/*=true*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional divisor_override); +@Namespace("at") public static native @ByRef Tensor avg_pool3d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] kernel_size, @ByVal(nullValue = "at::IntArrayRef{}") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @Cast("bool") boolean ceil_mode/*=false*/, @Cast("bool") boolean count_include_pad/*=true*/, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional divisor_override); @Namespace("at") public static native @ByRef Tensor avg_pool3d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... kernel_size); // aten::avg_pool3d.out(Tensor self, int[3] kernel_size, int[3] stride=[], int[3] padding=0, bool ceil_mode=False, bool count_include_pad=True, int? divisor_override=None, *, Tensor(a!) out) -> Tensor(a!) 
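For the pooling entry points just listed, both the convenience overload that takes only kernel_size and the fully spelled-out overload are still generated; only the divisor_override default changed textually. A hedged sketch with the same assumed imports; the NCHW shape is just an example:

Tensor img = torch.randn(1, 3, 8, 8);                      // N=1, C=3, H=W=8
Tensor pooled = torch.avg_pool2d(img, 2);                  // 2x2 kernel, defaults for everything else
Tensor custom = torch.avg_pool2d(img, new long[]{2, 2}, new long[]{2, 2}, new long[]{0, 0},
        false, true, new LongOptional(4));                 // explicit divisor_override=4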
@Namespace("at") public static native @ByRef Tensor avg_pool3d_outf(@Const @ByRef Tensor self, @ByVal LongArrayRef kernel_size, @ByVal LongArrayRef stride, @ByVal LongArrayRef padding, @Cast("bool") boolean ceil_mode, @Cast("bool") boolean count_include_pad, @ByVal LongOptional divisor_override, @ByRef Tensor out); @Namespace("at") public static native @ByRef Tensor avg_pool3d_outf(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] kernel_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @Cast("bool") boolean ceil_mode, @Cast("bool") boolean count_include_pad, @ByVal LongOptional divisor_override, @ByRef Tensor out); // aten::avg_pool3d(Tensor self, int[3] kernel_size, int[3] stride=[], int[3] padding=0, bool ceil_mode=False, bool count_include_pad=True, int? divisor_override=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor avg_pool3d(@Const @ByRef Tensor self, @ByVal LongArrayRef kernel_size, @ByVal(nullValue = "at::IntArrayRef{}") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @Cast("bool") boolean ceil_mode/*=false*/, @Cast("bool") boolean count_include_pad/*=true*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional divisor_override); +@Namespace("at") public static native @ByVal Tensor avg_pool3d(@Const @ByRef Tensor self, @ByVal LongArrayRef kernel_size, @ByVal(nullValue = "at::IntArrayRef{}") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @Cast("bool") boolean ceil_mode/*=false*/, @Cast("bool") boolean count_include_pad/*=true*/, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional divisor_override); @Namespace("at") public static native @ByVal Tensor avg_pool3d(@Const @ByRef Tensor self, @ByVal LongArrayRef kernel_size); -@Namespace("at") public static native @ByVal Tensor avg_pool3d(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] kernel_size, @ByVal(nullValue = "at::IntArrayRef{}") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @Cast("bool") boolean ceil_mode/*=false*/, @Cast("bool") boolean count_include_pad/*=true*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional divisor_override); +@Namespace("at") public static native @ByVal Tensor avg_pool3d(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] kernel_size, @ByVal(nullValue = "at::IntArrayRef{}") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @Cast("bool") boolean ceil_mode/*=false*/, @Cast("bool") boolean count_include_pad/*=true*/, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional divisor_override); @Namespace("at") public static native @ByVal Tensor avg_pool3d(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... 
kernel_size); @@ -22417,6 +22302,36 @@ public class torch extends org.bytedeco.pytorch.presets.torch { +// Parsed from ATen/ops/batch_norm_backward.h + +// #pragma once + +// @generated by torchgen/gen.py from Function.h + +// #include +// #include +// #include +// #include +// #include +// #include +// #include +// #include +// #include +// #include +// #include +// #include + + + +// #include + + +// aten::batch_norm_backward(Tensor grad_out, Tensor input, Tensor weight, Tensor? running_mean, Tensor? running_var, Tensor? save_mean, Tensor? save_var, bool update, float eps, bool[3] output_mask, Tensor reserve) -> (Tensor, Tensor, Tensor) +@Namespace("at") public static native @ByVal T_TensorTensorTensor_T batch_norm_backward(@Const @ByRef Tensor grad_out, @Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef TensorOptional running_mean, @Const @ByRef TensorOptional running_var, @Const @ByRef TensorOptional save_mean, @Const @ByRef TensorOptional save_var, @Cast("bool") boolean update, double eps, @ByVal @Cast("std::array*") BoolPointer output_mask, @Const @ByRef Tensor reserve); + + + + // Parsed from ATen/ops/batch_norm_backward_elemt.h // #pragma once @@ -22687,29 +22602,29 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::bernoulli(Tensor self, *, Generator? generator=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor bernoulli(@Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByVal Tensor bernoulli(@Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); @Namespace("at") public static native @ByVal Tensor bernoulli(@Const @ByRef Tensor self); // aten::bernoulli.out(Tensor self, *, Generator? generator=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor bernoulli_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByRef Tensor bernoulli_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); // aten::bernoulli.out(Tensor self, *, Generator? generator=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor bernoulli_outf(@Const @ByRef Tensor self, @ByVal GeneratorOptional generator, @ByRef Tensor out); // aten::bernoulli.p(Tensor self, float p, *, Generator? generator=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor bernoulli(@Const @ByRef Tensor self, double p, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByVal Tensor bernoulli(@Const @ByRef Tensor self, double p, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); @Namespace("at") public static native @ByVal Tensor bernoulli(@Const @ByRef Tensor self, double p); // aten::bernoulli.Tensor_out(Tensor self, Tensor p, *, Generator? generator=None, Tensor(a!) out) -> Tensor(a!) 
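Two things stand out in the hunks above: batch_norm_backward is a genuinely new binding (the whole ATen/ops/batch_norm_backward.h block is added), while the bernoulli overloads change only in their default-value strings. A rough sketch of the bernoulli side, with the same assumed imports; the get0()/get1()/get2() accessors mentioned for the tuple return type are the accessors these presets usually generate, not something shown in this diff:

Tensor shapeLike = torch.ones(2, 3);                 // provides shape and floating dtype for the draw
Tensor draws = torch.bernoulli(shapeLike, 0.25);     // each element is 1 with probability 0.25
// batch_norm_backward returns a (grad_input, grad_weight, grad_bias) tuple:
// T_TensorTensorTensor_T grads = torch.batch_norm_backward(...);
// Tensor gradInput = grads.get0();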
-@Namespace("at") public static native @ByRef Tensor bernoulli_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor p, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByRef Tensor bernoulli_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor p, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); // aten::bernoulli.Tensor_out(Tensor self, Tensor p, *, Generator? generator=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor bernoulli_outf(@Const @ByRef Tensor self, @Const @ByRef Tensor p, @ByVal GeneratorOptional generator, @ByRef Tensor out); // aten::bernoulli.Tensor(Tensor self, Tensor p, *, Generator? generator=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor bernoulli(@Const @ByRef Tensor self, @Const @ByRef Tensor p, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByVal Tensor bernoulli(@Const @ByRef Tensor self, @Const @ByRef Tensor p, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); @Namespace("at") public static native @ByVal Tensor bernoulli(@Const @ByRef Tensor self, @Const @ByRef Tensor p); // aten::bernoulli.float_out(Tensor self, float p=0.5, *, Generator? generator=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor bernoulli_out(@ByRef Tensor out, @Const @ByRef Tensor self, double p/*=0.5*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByRef Tensor bernoulli_out(@ByRef Tensor out, @Const @ByRef Tensor self, double p/*=0.5*/, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); // aten::bernoulli.float_out(Tensor self, float p=0.5, *, Generator? generator=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor bernoulli_outf(@Const @ByRef Tensor self, double p, @ByVal GeneratorOptional generator, @ByRef Tensor out); @@ -22741,7 +22656,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::bilinear(Tensor input1, Tensor input2, Tensor weight, Tensor? bias=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor bilinear(@Const @ByRef Tensor input1, @Const @ByRef Tensor input2, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias); +@Namespace("at") public static native @ByVal Tensor bilinear(@Const @ByRef Tensor input1, @Const @ByRef Tensor input2, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias); @Namespace("at") public static native @ByVal Tensor bilinear(@Const @ByRef Tensor input1, @Const @ByRef Tensor input2, @Const @ByRef Tensor weight); @@ -22772,11 +22687,11 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::binary_cross_entropy(Tensor self, Tensor target, Tensor? 
weight=None, int reduction=Mean) -> Tensor -@Namespace("at") public static native @ByVal Tensor binary_cross_entropy(@Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/); +@Namespace("at") public static native @ByVal Tensor binary_cross_entropy(@Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/); @Namespace("at") public static native @ByVal Tensor binary_cross_entropy(@Const @ByRef Tensor self, @Const @ByRef Tensor target); // aten::binary_cross_entropy.out(Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor binary_cross_entropy_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/); +@Namespace("at") public static native @ByRef Tensor binary_cross_entropy_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/); @Namespace("at") public static native @ByRef Tensor binary_cross_entropy_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor target); // aten::binary_cross_entropy.out(Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean, *, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor binary_cross_entropy_outf(@Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef TensorOptional weight, @Cast("int64_t") long reduction, @ByRef Tensor out); @@ -22809,11 +22724,11 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::binary_cross_entropy_backward(Tensor grad_output, Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean) -> Tensor -@Namespace("at") public static native @ByVal Tensor binary_cross_entropy_backward(@Const @ByRef Tensor grad_output, @Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/); +@Namespace("at") public static native @ByVal Tensor binary_cross_entropy_backward(@Const @ByRef Tensor grad_output, @Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/); @Namespace("at") public static native @ByVal Tensor binary_cross_entropy_backward(@Const @ByRef Tensor grad_output, @Const @ByRef Tensor self, @Const @ByRef Tensor target); // aten::binary_cross_entropy_backward.grad_input(Tensor grad_output, Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean, *, Tensor(a!) grad_input) -> Tensor(a!) 
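The binary_cross_entropy bindings keep their two-argument convenience overload, so the optional weight never has to be constructed on the Java side. A minimal sketch under the same assumptions; sigmoid squashes the logits into the (0, 1) range that BCE expects:

Tensor pred = torch.sigmoid(torch.randn(4, 3));            // probabilities in (0, 1)
Tensor target = torch.ones(4, 3);                          // all-ones targets
Tensor loss = torch.binary_cross_entropy(pred, target);    // mean reduction by default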
-@Namespace("at") public static native @ByRef Tensor binary_cross_entropy_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/); +@Namespace("at") public static native @ByRef Tensor binary_cross_entropy_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/); @Namespace("at") public static native @ByRef Tensor binary_cross_entropy_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @Const @ByRef Tensor self, @Const @ByRef Tensor target); // aten::binary_cross_entropy_backward.grad_input(Tensor grad_output, Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean, *, Tensor(a!) grad_input) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor binary_cross_entropy_backward_outf(@Const @ByRef Tensor grad_output, @Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef TensorOptional weight, @Cast("int64_t") long reduction, @ByRef Tensor grad_input); @@ -22846,11 +22761,11 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::binary_cross_entropy_with_logits(Tensor self, Tensor target, Tensor? weight=None, Tensor? pos_weight=None, int reduction=Mean) -> Tensor -@Namespace("at") public static native @ByVal Tensor binary_cross_entropy_with_logits(@Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional pos_weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/); +@Namespace("at") public static native @ByVal Tensor binary_cross_entropy_with_logits(@Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Const @ByRef(nullValue = "std::optional{}") TensorOptional pos_weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/); @Namespace("at") public static native @ByVal Tensor binary_cross_entropy_with_logits(@Const @ByRef Tensor self, @Const @ByRef Tensor target); // aten::binary_cross_entropy_with_logits.out(Tensor self, Tensor target, Tensor? weight=None, Tensor? pos_weight=None, int reduction=Mean, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor binary_cross_entropy_with_logits_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional pos_weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/); +@Namespace("at") public static native @ByRef Tensor binary_cross_entropy_with_logits_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Const @ByRef(nullValue = "std::optional{}") TensorOptional pos_weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/); @Namespace("at") public static native @ByRef Tensor binary_cross_entropy_with_logits_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor target); // aten::binary_cross_entropy_with_logits.out(Tensor self, Tensor target, Tensor? weight=None, Tensor? 
pos_weight=None, int reduction=Mean, *, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor binary_cross_entropy_with_logits_outf(@Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef TensorOptional weight, @Const @ByRef TensorOptional pos_weight, @Cast("int64_t") long reduction, @ByRef Tensor out); @@ -22883,11 +22798,11 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::bincount(Tensor self, Tensor? weights=None, int minlength=0) -> Tensor -@Namespace("at") public static native @ByVal Tensor bincount(@Const @ByRef Tensor self, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weights, @Cast("int64_t") long minlength/*=0*/); +@Namespace("at") public static native @ByVal Tensor bincount(@Const @ByRef Tensor self, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weights, @Cast("int64_t") long minlength/*=0*/); @Namespace("at") public static native @ByVal Tensor bincount(@Const @ByRef Tensor self); // aten::bincount.out(Tensor self, Tensor? weights=None, int minlength=0, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor bincount_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weights, @Cast("int64_t") long minlength/*=0*/); +@Namespace("at") public static native @ByRef Tensor bincount_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weights, @Cast("int64_t") long minlength/*=0*/); @Namespace("at") public static native @ByRef Tensor bincount_out(@ByRef Tensor out, @Const @ByRef Tensor self); // aten::bincount.out(Tensor self, Tensor? weights=None, int minlength=0, *, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor bincount_outf(@Const @ByRef Tensor self, @Const @ByRef TensorOptional weights, @Cast("int64_t") long minlength, @ByRef Tensor out); @@ -22920,11 +22835,11 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::binomial(Tensor count, Tensor prob, Generator? generator=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor binomial(@Const @ByRef Tensor count, @Const @ByRef Tensor prob, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByVal Tensor binomial(@Const @ByRef Tensor count, @Const @ByRef Tensor prob, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); @Namespace("at") public static native @ByVal Tensor binomial(@Const @ByRef Tensor count, @Const @ByRef Tensor prob); // aten::binomial.out(Tensor count, Tensor prob, Generator? generator=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor binomial_out(@ByRef Tensor out, @Const @ByRef Tensor count, @Const @ByRef Tensor prob, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByRef Tensor binomial_out(@ByRef Tensor out, @Const @ByRef Tensor count, @Const @ByRef Tensor prob, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); @Namespace("at") public static native @ByRef Tensor binomial_out(@ByRef Tensor out, @Const @ByRef Tensor count, @Const @ByRef Tensor prob); // aten::binomial.out(Tensor count, Tensor prob, Generator? generator=None, *, Tensor(a!) out) -> Tensor(a!) 
@Namespace("at") public static native @ByRef Tensor binomial_outf(@Const @ByRef Tensor count, @Const @ByRef Tensor prob, @ByVal GeneratorOptional generator, @ByRef Tensor out); @@ -23482,8 +23397,8 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include -// aten::can_cast(ScalarType from, ScalarType to) -> bool -@Namespace("at") public static native @Cast("bool") boolean can_cast(ScalarType from, ScalarType to); +// aten::can_cast(ScalarType from_, ScalarType to) -> bool +@Namespace("at") public static native @Cast("bool") boolean can_cast(ScalarType from_, ScalarType to); @@ -23597,13 +23512,13 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::cauchy.out(Tensor self, float median=0, float sigma=1, *, Generator? generator=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor cauchy_out(@ByRef Tensor out, @Const @ByRef Tensor self, double median/*=0*/, double sigma/*=1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByRef Tensor cauchy_out(@ByRef Tensor out, @Const @ByRef Tensor self, double median/*=0*/, double sigma/*=1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); @Namespace("at") public static native @ByRef Tensor cauchy_out(@ByRef Tensor out, @Const @ByRef Tensor self); // aten::cauchy.out(Tensor self, float median=0, float sigma=1, *, Generator? generator=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor cauchy_outf(@Const @ByRef Tensor self, double median, double sigma, @ByVal GeneratorOptional generator, @ByRef Tensor out); // aten::cauchy(Tensor self, float median=0, float sigma=1, *, Generator? generator=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor cauchy(@Const @ByRef Tensor self, double median/*=0*/, double sigma/*=1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByVal Tensor cauchy(@Const @ByRef Tensor self, double median/*=0*/, double sigma/*=1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); @Namespace("at") public static native @ByVal Tensor cauchy(@Const @ByRef Tensor self); @@ -23697,7 +23612,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::cdist(Tensor x1, Tensor x2, float p=2, int? compute_mode=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor cdist(@Const @ByRef Tensor x1, @Const @ByRef Tensor x2, double p/*=2*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional compute_mode); +@Namespace("at") public static native @ByVal Tensor cdist(@Const @ByRef Tensor x1, @Const @ByRef Tensor x2, double p/*=2*/, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional compute_mode); @Namespace("at") public static native @ByVal Tensor cdist(@Const @ByRef Tensor x1, @Const @ByRef Tensor x2); @@ -24096,29 +24011,29 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::clamp(Tensor self, Scalar? min=None, Scalar? 
max=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor clamp(@Const @ByRef Tensor self, @Const @ByRef ScalarOptional min, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional max); +@Namespace("at") public static native @ByVal Tensor clamp(@Const @ByRef Tensor self, @Const @ByRef ScalarOptional min, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional max); @Namespace("at") public static native @ByVal Tensor clamp(@Const @ByRef Tensor self, @Const @ByRef ScalarOptional min); // aten::clamp.Tensor(Tensor self, Tensor? min=None, Tensor? max=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor clamp(@Const @ByRef Tensor self, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional min, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional max); +@Namespace("at") public static native @ByVal Tensor clamp(@Const @ByRef Tensor self, @Const @ByRef(nullValue = "std::optional{}") TensorOptional min, @Const @ByRef(nullValue = "std::optional{}") TensorOptional max); @Namespace("at") public static native @ByVal Tensor clamp(@Const @ByRef Tensor self); // aten::clamp_(Tensor(a!) self, Scalar? min=None, Scalar? max=None) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor clamp_(@ByRef Tensor self, @Const @ByRef ScalarOptional min, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional max); +@Namespace("at") public static native @ByRef Tensor clamp_(@ByRef Tensor self, @Const @ByRef ScalarOptional min, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional max); @Namespace("at") public static native @ByRef Tensor clamp_(@ByRef Tensor self, @Const @ByRef ScalarOptional min); // aten::clamp_.Tensor(Tensor(a!) self, Tensor? min=None, Tensor? max=None) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor clamp_(@ByRef Tensor self, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional min, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional max); +@Namespace("at") public static native @ByRef Tensor clamp_(@ByRef Tensor self, @Const @ByRef(nullValue = "std::optional{}") TensorOptional min, @Const @ByRef(nullValue = "std::optional{}") TensorOptional max); @Namespace("at") public static native @ByRef Tensor clamp_(@ByRef Tensor self); // aten::clamp.out(Tensor self, Scalar? min=None, Scalar? max=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor clamp_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef ScalarOptional min, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional max); +@Namespace("at") public static native @ByRef Tensor clamp_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef ScalarOptional min, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional max); @Namespace("at") public static native @ByRef Tensor clamp_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef ScalarOptional min); // aten::clamp.out(Tensor self, Scalar? min=None, Scalar? max=None, *, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor clamp_outf(@Const @ByRef Tensor self, @Const @ByRef ScalarOptional min, @Const @ByRef ScalarOptional max, @ByRef Tensor out); // aten::clamp.Tensor_out(Tensor self, Tensor? min=None, Tensor? max=None, *, Tensor(a!) out) -> Tensor(a!) 
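For clamp (and the clip aliases that follow), the Scalar?-typed bounds still map to ScalarOptional parameters; again only the annotation's default-value expression changes. A short sketch, assuming ScalarOptional and Scalar expose the value-taking constructors these presets normally generate:

Tensor x = torch.randn(5);
Tensor clamped = torch.clamp(x, new ScalarOptional(new Scalar(-1.0)),
        new ScalarOptional(new Scalar(1.0)));                           // clamp to [-1, 1]
Tensor floored = torch.clamp(x, new ScalarOptional(new Scalar(0.0)));   // min only, max left at its default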
-@Namespace("at") public static native @ByRef Tensor clamp_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional min, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional max); +@Namespace("at") public static native @ByRef Tensor clamp_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef(nullValue = "std::optional{}") TensorOptional min, @Const @ByRef(nullValue = "std::optional{}") TensorOptional max); @Namespace("at") public static native @ByRef Tensor clamp_out(@ByRef Tensor out, @Const @ByRef Tensor self); // aten::clamp.Tensor_out(Tensor self, Tensor? min=None, Tensor? max=None, *, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor clamp_outf(@Const @ByRef Tensor self, @Const @ByRef TensorOptional min, @Const @ByRef TensorOptional max, @ByRef Tensor out); @@ -24249,29 +24164,29 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::clip(Tensor self, Scalar? min=None, Scalar? max=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor clip(@Const @ByRef Tensor self, @Const @ByRef ScalarOptional min, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional max); +@Namespace("at") public static native @ByVal Tensor clip(@Const @ByRef Tensor self, @Const @ByRef ScalarOptional min, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional max); @Namespace("at") public static native @ByVal Tensor clip(@Const @ByRef Tensor self, @Const @ByRef ScalarOptional min); // aten::clip.Tensor(Tensor self, Tensor? min=None, Tensor? max=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor clip(@Const @ByRef Tensor self, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional min, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional max); +@Namespace("at") public static native @ByVal Tensor clip(@Const @ByRef Tensor self, @Const @ByRef(nullValue = "std::optional{}") TensorOptional min, @Const @ByRef(nullValue = "std::optional{}") TensorOptional max); @Namespace("at") public static native @ByVal Tensor clip(@Const @ByRef Tensor self); // aten::clip_(Tensor(a!) self, Scalar? min=None, Scalar? max=None) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor clip_(@ByRef Tensor self, @Const @ByRef ScalarOptional min, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional max); +@Namespace("at") public static native @ByRef Tensor clip_(@ByRef Tensor self, @Const @ByRef ScalarOptional min, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional max); @Namespace("at") public static native @ByRef Tensor clip_(@ByRef Tensor self, @Const @ByRef ScalarOptional min); // aten::clip_.Tensor(Tensor(a!) self, Tensor? min=None, Tensor? max=None) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor clip_(@ByRef Tensor self, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional min, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional max); +@Namespace("at") public static native @ByRef Tensor clip_(@ByRef Tensor self, @Const @ByRef(nullValue = "std::optional{}") TensorOptional min, @Const @ByRef(nullValue = "std::optional{}") TensorOptional max); @Namespace("at") public static native @ByRef Tensor clip_(@ByRef Tensor self); // aten::clip.out(Tensor self, Scalar? min=None, Scalar? max=None, *, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor clip_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef ScalarOptional min, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional max); +@Namespace("at") public static native @ByRef Tensor clip_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef ScalarOptional min, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional max); @Namespace("at") public static native @ByRef Tensor clip_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef ScalarOptional min); // aten::clip.out(Tensor self, Scalar? min=None, Scalar? max=None, *, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor clip_outf(@Const @ByRef Tensor self, @Const @ByRef ScalarOptional min, @Const @ByRef ScalarOptional max, @ByRef Tensor out); // aten::clip.Tensor_out(Tensor self, Tensor? min=None, Tensor? max=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor clip_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional min, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional max); +@Namespace("at") public static native @ByRef Tensor clip_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef(nullValue = "std::optional{}") TensorOptional min, @Const @ByRef(nullValue = "std::optional{}") TensorOptional max); @Namespace("at") public static native @ByRef Tensor clip_out(@ByRef Tensor out, @Const @ByRef Tensor self); // aten::clip.Tensor_out(Tensor self, Tensor? min=None, Tensor? max=None, *, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor clip_outf(@Const @ByRef Tensor self, @Const @ByRef TensorOptional min, @Const @ByRef TensorOptional max, @ByRef Tensor out); @@ -24304,11 +24219,11 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::clone(Tensor self, *, MemoryFormat? memory_format=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor clone(@Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByVal Tensor clone(@Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByVal Tensor clone(@Const @ByRef Tensor self); // aten::clone.out(Tensor self, *, MemoryFormat? memory_format=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor clone_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByRef Tensor clone_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByRef Tensor clone_out(@ByRef Tensor out, @Const @ByRef Tensor self); // aten::clone.out(Tensor self, *, MemoryFormat? memory_format=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor clone_outf(@Const @ByRef Tensor self, @ByVal MemoryFormatOptional memory_format, @ByRef Tensor out); @@ -24855,13 +24770,13 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::conv1d(Tensor input, Tensor weight, Tensor? 
bias=None, SymInt[1] stride=1, SymInt[1] padding=0, SymInt[1] dilation=1, SymInt groups=1) -> Tensor -@Namespace("at") public static native @ByVal Tensor conv1d(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation, @Cast("int64_t") long groups/*=1*/); +@Namespace("at") public static native @ByVal Tensor conv1d(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation, @Cast("int64_t") long groups/*=1*/); @Namespace("at") public static native @ByVal Tensor conv1d(@Const @ByRef Tensor input, @Const @ByRef Tensor weight); -@Namespace("at") public static native @ByVal Tensor conv1d(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dilation, @Cast("int64_t") long groups/*=1*/); +@Namespace("at") public static native @ByVal Tensor conv1d(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dilation, @Cast("int64_t") long groups/*=1*/); // aten::conv1d(Tensor input, Tensor weight, Tensor? 
bias=None, SymInt[1] stride=1, SymInt[1] padding=0, SymInt[1] dilation=1, SymInt groups=1) -> Tensor -@Namespace("at") public static native @ByVal Tensor conv1d_symint(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef dilation, @ByVal(nullValue = "c10::SymInt(1)") SymInt groups); +@Namespace("at") public static native @ByVal Tensor conv1d_symint(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef dilation, @ByVal(nullValue = "c10::SymInt(1)") SymInt groups); @Namespace("at") public static native @ByVal Tensor conv1d_symint(@Const @ByRef Tensor input, @Const @ByRef Tensor weight); @@ -24911,13 +24826,13 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::conv2d(Tensor input, Tensor weight, Tensor? bias=None, SymInt[2] stride=1, SymInt[2] padding=0, SymInt[2] dilation=1, SymInt groups=1) -> Tensor -@Namespace("at") public static native @ByVal Tensor conv2d(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation, @Cast("int64_t") long groups/*=1*/); +@Namespace("at") public static native @ByVal Tensor conv2d(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation, @Cast("int64_t") long groups/*=1*/); @Namespace("at") public static native @ByVal Tensor conv2d(@Const @ByRef Tensor input, @Const @ByRef Tensor weight); -@Namespace("at") public static native @ByVal Tensor conv2d(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dilation, @Cast("int64_t") long groups/*=1*/); +@Namespace("at") public static native @ByVal Tensor conv2d(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) 
@StdVector("int64_t") long[] dilation, @Cast("int64_t") long groups/*=1*/); // aten::conv2d(Tensor input, Tensor weight, Tensor? bias=None, SymInt[2] stride=1, SymInt[2] padding=0, SymInt[2] dilation=1, SymInt groups=1) -> Tensor -@Namespace("at") public static native @ByVal Tensor conv2d_symint(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef dilation, @ByVal(nullValue = "c10::SymInt(1)") SymInt groups); +@Namespace("at") public static native @ByVal Tensor conv2d_symint(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef dilation, @ByVal(nullValue = "c10::SymInt(1)") SymInt groups); @Namespace("at") public static native @ByVal Tensor conv2d_symint(@Const @ByRef Tensor input, @Const @ByRef Tensor weight); @@ -24967,13 +24882,13 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::conv3d(Tensor input, Tensor weight, Tensor? bias=None, SymInt[3] stride=1, SymInt[3] padding=0, SymInt[3] dilation=1, SymInt groups=1) -> Tensor -@Namespace("at") public static native @ByVal Tensor conv3d(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation, @Cast("int64_t") long groups/*=1*/); +@Namespace("at") public static native @ByVal Tensor conv3d(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation, @Cast("int64_t") long groups/*=1*/); @Namespace("at") public static native @ByVal Tensor conv3d(@Const @ByRef Tensor input, @Const @ByRef Tensor weight); -@Namespace("at") public static native @ByVal Tensor conv3d(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dilation, @Cast("int64_t") long groups/*=1*/); +@Namespace("at") public static native @ByVal Tensor conv3d(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) 
@StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dilation, @Cast("int64_t") long groups/*=1*/); // aten::conv3d(Tensor input, Tensor weight, Tensor? bias=None, SymInt[3] stride=1, SymInt[3] padding=0, SymInt[3] dilation=1, SymInt groups=1) -> Tensor -@Namespace("at") public static native @ByVal Tensor conv3d_symint(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef dilation, @ByVal(nullValue = "c10::SymInt(1)") SymInt groups); +@Namespace("at") public static native @ByVal Tensor conv3d_symint(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef dilation, @ByVal(nullValue = "c10::SymInt(1)") SymInt groups); @Namespace("at") public static native @ByVal Tensor conv3d_symint(@Const @ByRef Tensor input, @Const @ByRef Tensor weight); @@ -25144,13 +25059,13 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::conv_transpose1d(Tensor input, Tensor weight, Tensor? bias=None, SymInt[1] stride=1, SymInt[1] padding=0, SymInt[1] output_padding=0, SymInt groups=1, SymInt[1] dilation=1) -> Tensor -@Namespace("at") public static native @ByVal Tensor conv_transpose1d(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef output_padding, @Cast("int64_t") long groups/*=1*/, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation); +@Namespace("at") public static native @ByVal Tensor conv_transpose1d(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef output_padding, @Cast("int64_t") long groups/*=1*/, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation); @Namespace("at") public static native @ByVal Tensor conv_transpose1d(@Const @ByRef Tensor input, @Const @ByRef Tensor weight); -@Namespace("at") public static native @ByVal Tensor conv_transpose1d(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_padding, @Cast("int64_t") long groups/*=1*/, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", 
"c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... dilation); +@Namespace("at") public static native @ByVal Tensor conv_transpose1d(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_padding, @Cast("int64_t") long groups/*=1*/, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... dilation); // aten::conv_transpose1d(Tensor input, Tensor weight, Tensor? bias=None, SymInt[1] stride=1, SymInt[1] padding=0, SymInt[1] output_padding=0, SymInt groups=1, SymInt[1] dilation=1) -> Tensor -@Namespace("at") public static native @ByVal Tensor conv_transpose1d_symint(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef output_padding, @ByVal(nullValue = "c10::SymInt(1)") SymInt groups, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef dilation); +@Namespace("at") public static native @ByVal Tensor conv_transpose1d_symint(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef output_padding, @ByVal(nullValue = "c10::SymInt(1)") SymInt groups, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef dilation); @Namespace("at") public static native @ByVal Tensor conv_transpose1d_symint(@Const @ByRef Tensor input, @Const @ByRef Tensor weight); @@ -25182,13 +25097,13 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::conv_transpose2d.input(Tensor input, Tensor weight, Tensor? 
bias=None, SymInt[2] stride=1, SymInt[2] padding=0, SymInt[2] output_padding=0, SymInt groups=1, SymInt[2] dilation=1) -> Tensor -@Namespace("at") public static native @ByVal Tensor conv_transpose2d(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef output_padding, @Cast("int64_t") long groups/*=1*/, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation); +@Namespace("at") public static native @ByVal Tensor conv_transpose2d(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef output_padding, @Cast("int64_t") long groups/*=1*/, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation); @Namespace("at") public static native @ByVal Tensor conv_transpose2d(@Const @ByRef Tensor input, @Const @ByRef Tensor weight); -@Namespace("at") public static native @ByVal Tensor conv_transpose2d(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_padding, @Cast("int64_t") long groups/*=1*/, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... dilation); +@Namespace("at") public static native @ByVal Tensor conv_transpose2d(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_padding, @Cast("int64_t") long groups/*=1*/, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... dilation); // aten::conv_transpose2d.input(Tensor input, Tensor weight, Tensor? 
bias=None, SymInt[2] stride=1, SymInt[2] padding=0, SymInt[2] output_padding=0, SymInt groups=1, SymInt[2] dilation=1) -> Tensor -@Namespace("at") public static native @ByVal Tensor conv_transpose2d_symint(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef output_padding, @ByVal(nullValue = "c10::SymInt(1)") SymInt groups, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef dilation); +@Namespace("at") public static native @ByVal Tensor conv_transpose2d_symint(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef output_padding, @ByVal(nullValue = "c10::SymInt(1)") SymInt groups, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef dilation); @Namespace("at") public static native @ByVal Tensor conv_transpose2d_symint(@Const @ByRef Tensor input, @Const @ByRef Tensor weight); @@ -25220,13 +25135,13 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::conv_transpose3d.input(Tensor input, Tensor weight, Tensor? bias=None, SymInt[3] stride=1, SymInt[3] padding=0, SymInt[3] output_padding=0, SymInt groups=1, SymInt[3] dilation=1) -> Tensor -@Namespace("at") public static native @ByVal Tensor conv_transpose3d(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef output_padding, @Cast("int64_t") long groups/*=1*/, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation); +@Namespace("at") public static native @ByVal Tensor conv_transpose3d(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef output_padding, @Cast("int64_t") long groups/*=1*/, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation); @Namespace("at") public static native @ByVal Tensor conv_transpose3d(@Const @ByRef Tensor input, @Const @ByRef Tensor weight); -@Namespace("at") public static native @ByVal Tensor conv_transpose3d(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_padding, @Cast("int64_t") long groups/*=1*/, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", 
"std::vector&"}) @StdVector("int64_t") long... dilation); +@Namespace("at") public static native @ByVal Tensor conv_transpose3d(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_padding, @Cast("int64_t") long groups/*=1*/, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... dilation); // aten::conv_transpose3d.input(Tensor input, Tensor weight, Tensor? bias=None, SymInt[3] stride=1, SymInt[3] padding=0, SymInt[3] output_padding=0, SymInt groups=1, SymInt[3] dilation=1) -> Tensor -@Namespace("at") public static native @ByVal Tensor conv_transpose3d_symint(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef output_padding, @ByVal(nullValue = "c10::SymInt(1)") SymInt groups, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef dilation); +@Namespace("at") public static native @ByVal Tensor conv_transpose3d_symint(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef output_padding, @ByVal(nullValue = "c10::SymInt(1)") SymInt groups, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef dilation); @Namespace("at") public static native @ByVal Tensor conv_transpose3d_symint(@Const @ByRef Tensor input, @Const @ByRef Tensor weight); @@ -25773,7 +25688,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @Namespace("at") public static native @ByVal Tensor count_nonzero(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... dim); // aten::count_nonzero(Tensor self, int? dim=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor count_nonzero(@Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim); +@Namespace("at") public static native @ByVal Tensor count_nonzero(@Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim); @Namespace("at") public static native @ByVal Tensor count_nonzero(@Const @ByRef Tensor self); // aten::count_nonzero.dim_IntList_out(Tensor self, int[] dim, *, Tensor(a!) out) -> Tensor(a!) @@ -25784,7 +25699,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @Namespace("at") public static native @ByRef Tensor count_nonzero_outf(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByRef Tensor out); // aten::count_nonzero.out(Tensor self, int? 
dim=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor count_nonzero_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim); +@Namespace("at") public static native @ByRef Tensor count_nonzero_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim); @Namespace("at") public static native @ByRef Tensor count_nonzero_out(@ByRef Tensor out, @Const @ByRef Tensor self); // aten::count_nonzero.out(Tensor self, int? dim=None, *, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor count_nonzero_outf(@Const @ByRef Tensor self, @ByVal LongOptional dim, @ByRef Tensor out); @@ -25817,7 +25732,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::cov(Tensor self, *, int correction=1, Tensor? fweights=None, Tensor? aweights=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor cov(@Const @ByRef Tensor self, @Cast("int64_t") long correction/*=1*/, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional fweights, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional aweights); +@Namespace("at") public static native @ByVal Tensor cov(@Const @ByRef Tensor self, @Cast("int64_t") long correction/*=1*/, @Const @ByRef(nullValue = "std::optional{}") TensorOptional fweights, @Const @ByRef(nullValue = "std::optional{}") TensorOptional aweights); @Namespace("at") public static native @ByVal Tensor cov(@Const @ByRef Tensor self); @@ -25848,13 +25763,13 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::cross.out(Tensor self, Tensor other, int? dim=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor cross_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor other, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim); +@Namespace("at") public static native @ByRef Tensor cross_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor other, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim); @Namespace("at") public static native @ByRef Tensor cross_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor other); // aten::cross.out(Tensor self, Tensor other, int? dim=None, *, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor cross_outf(@Const @ByRef Tensor self, @Const @ByRef Tensor other, @ByVal LongOptional dim, @ByRef Tensor out); // aten::cross(Tensor self, Tensor other, int? dim=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor cross(@Const @ByRef Tensor self, @Const @ByRef Tensor other, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim); +@Namespace("at") public static native @ByVal Tensor cross(@Const @ByRef Tensor self, @Const @ByRef Tensor other, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim); @Namespace("at") public static native @ByVal Tensor cross(@Const @ByRef Tensor self, @Const @ByRef Tensor other); @@ -25885,12 +25800,12 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::cross_entropy_loss(Tensor self, Tensor target, Tensor? 
weight=None, int reduction=Mean, SymInt ignore_index=-100, float label_smoothing=0.0) -> Tensor -@Namespace("at") public static native @ByVal Tensor cross_entropy_loss(@Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/, @Cast("int64_t") long ignore_index/*=-100*/, double label_smoothing/*=0.0*/); +@Namespace("at") public static native @ByVal Tensor cross_entropy_loss(@Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/, @Cast("int64_t") long ignore_index/*=-100*/, double label_smoothing/*=0.0*/); @Namespace("at") public static native @ByVal Tensor cross_entropy_loss(@Const @ByRef Tensor self, @Const @ByRef Tensor target); // aten::cross_entropy_loss(Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean, SymInt ignore_index=-100, float label_smoothing=0.0) -> Tensor -@Namespace("at") public static native @ByVal Tensor cross_entropy_loss_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/, @ByVal(nullValue = "c10::SymInt(-100)") SymInt ignore_index, double label_smoothing/*=0.0*/); +@Namespace("at") public static native @ByVal Tensor cross_entropy_loss_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/, @ByVal(nullValue = "c10::SymInt(-100)") SymInt ignore_index, double label_smoothing/*=0.0*/); @Namespace("at") public static native @ByVal Tensor cross_entropy_loss_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor target); @@ -26594,21 +26509,21 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::cumprod(Tensor self, int dim, *, ScalarType? dtype=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor cumprod(@Const @ByRef Tensor self, @Cast("int64_t") long dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor cumprod(@Const @ByRef Tensor self, @Cast("int64_t") long dim, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByVal Tensor cumprod(@Const @ByRef Tensor self, @Cast("int64_t") long dim); // aten::cumprod.out(Tensor self, int dim, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor cumprod_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Cast("int64_t") long dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByRef Tensor cumprod_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Cast("int64_t") long dim, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByRef Tensor cumprod_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Cast("int64_t") long dim); // aten::cumprod.out(Tensor self, int dim, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!) 
@Namespace("at") public static native @ByRef Tensor cumprod_outf(@Const @ByRef Tensor self, @Cast("int64_t") long dim, @ByVal ScalarTypeOptional dtype, @ByRef Tensor out); // aten::cumprod.dimname(Tensor self, Dimname dim, *, ScalarType? dtype=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor cumprod(@Const @ByRef Tensor self, @ByVal Dimname dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor cumprod(@Const @ByRef Tensor self, @ByVal Dimname dim, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByVal Tensor cumprod(@Const @ByRef Tensor self, @ByVal Dimname dim); // aten::cumprod.dimname_out(Tensor self, Dimname dim, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor cumprod_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal Dimname dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByRef Tensor cumprod_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal Dimname dim, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByRef Tensor cumprod_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal Dimname dim); // aten::cumprod.dimname_out(Tensor self, Dimname dim, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor cumprod_outf(@Const @ByRef Tensor self, @ByVal Dimname dim, @ByVal ScalarTypeOptional dtype, @ByRef Tensor out); @@ -26671,21 +26586,21 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::cumsum(Tensor self, int dim, *, ScalarType? dtype=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor cumsum(@Const @ByRef Tensor self, @Cast("int64_t") long dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor cumsum(@Const @ByRef Tensor self, @Cast("int64_t") long dim, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByVal Tensor cumsum(@Const @ByRef Tensor self, @Cast("int64_t") long dim); // aten::cumsum.out(Tensor self, int dim, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor cumsum_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Cast("int64_t") long dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByRef Tensor cumsum_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Cast("int64_t") long dim, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByRef Tensor cumsum_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Cast("int64_t") long dim); // aten::cumsum.out(Tensor self, int dim, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor cumsum_outf(@Const @ByRef Tensor self, @Cast("int64_t") long dim, @ByVal ScalarTypeOptional dtype, @ByRef Tensor out); // aten::cumsum.dimname(Tensor self, Dimname dim, *, ScalarType? 
dtype=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor cumsum(@Const @ByRef Tensor self, @ByVal Dimname dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor cumsum(@Const @ByRef Tensor self, @ByVal Dimname dim, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByVal Tensor cumsum(@Const @ByRef Tensor self, @ByVal Dimname dim); // aten::cumsum.dimname_out(Tensor self, Dimname dim, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor cumsum_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal Dimname dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByRef Tensor cumsum_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal Dimname dim, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByRef Tensor cumsum_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal Dimname dim); // aten::cumsum.dimname_out(Tensor self, Dimname dim, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor cumsum_outf(@Const @ByRef Tensor self, @ByVal Dimname dim, @ByVal ScalarTypeOptional dtype, @ByRef Tensor out); @@ -27259,11 +27174,11 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::diff(Tensor self, int n=1, int dim=-1, Tensor? prepend=None, Tensor? append=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor diff(@Const @ByRef Tensor self, @Cast("int64_t") long n/*=1*/, @Cast("int64_t") long dim/*=-1*/, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional prepend, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional append); +@Namespace("at") public static native @ByVal Tensor diff(@Const @ByRef Tensor self, @Cast("int64_t") long n/*=1*/, @Cast("int64_t") long dim/*=-1*/, @Const @ByRef(nullValue = "std::optional{}") TensorOptional prepend, @Const @ByRef(nullValue = "std::optional{}") TensorOptional append); @Namespace("at") public static native @ByVal Tensor diff(@Const @ByRef Tensor self); // aten::diff.out(Tensor self, int n=1, int dim=-1, Tensor? prepend=None, Tensor? append=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor diff_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Cast("int64_t") long n/*=1*/, @Cast("int64_t") long dim/*=-1*/, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional prepend, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional append); +@Namespace("at") public static native @ByRef Tensor diff_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Cast("int64_t") long n/*=1*/, @Cast("int64_t") long dim/*=-1*/, @Const @ByRef(nullValue = "std::optional{}") TensorOptional prepend, @Const @ByRef(nullValue = "std::optional{}") TensorOptional append); @Namespace("at") public static native @ByRef Tensor diff_out(@ByRef Tensor out, @Const @ByRef Tensor self); // aten::diff.out(Tensor self, int n=1, int dim=-1, Tensor? prepend=None, Tensor? append=None, *, Tensor(a!) out) -> Tensor(a!) 
@Namespace("at") public static native @ByRef Tensor diff_outf(@Const @ByRef Tensor self, @Cast("int64_t") long n, @Cast("int64_t") long dim, @Const @ByRef TensorOptional prepend, @Const @ByRef TensorOptional append, @ByRef Tensor out); @@ -27616,13 +27531,13 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::einsum(str equation, Tensor[] tensors, *, int[]? path=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor einsum(@StringView BytePointer equation, @ByVal TensorArrayRef tensors, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional path); +@Namespace("at") public static native @ByVal Tensor einsum(@StringView BytePointer equation, @ByVal TensorArrayRef tensors, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional path); @Namespace("at") public static native @ByVal Tensor einsum(@StringView BytePointer equation, @ByVal TensorArrayRef tensors); -@Namespace("at") public static native @ByVal Tensor einsum(@StringView String equation, @ByVal TensorVector tensors, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... path); +@Namespace("at") public static native @ByVal Tensor einsum(@StringView String equation, @ByVal TensorVector tensors, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... path); @Namespace("at") public static native @ByVal Tensor einsum(@StringView String equation, @ByVal TensorVector tensors); -@Namespace("at") public static native @ByVal Tensor einsum(@StringView BytePointer equation, @ByVal TensorVector tensors, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional path); +@Namespace("at") public static native @ByVal Tensor einsum(@StringView BytePointer equation, @ByVal TensorVector tensors, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional path); @Namespace("at") public static native @ByVal Tensor einsum(@StringView BytePointer equation, @ByVal TensorVector tensors); -@Namespace("at") public static native @ByVal Tensor einsum(@StringView String equation, @ByVal TensorArrayRef tensors, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... path); +@Namespace("at") public static native @ByVal Tensor einsum(@StringView String equation, @ByVal TensorArrayRef tensors, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... path); @Namespace("at") public static native @ByVal Tensor einsum(@StringView String equation, @ByVal TensorArrayRef tensors); @@ -27819,7 +27734,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::embedding_bag(Tensor weight, Tensor indices, Tensor offsets, bool scale_grad_by_freq=False, int mode=0, bool sparse=False, Tensor? 
per_sample_weights=None, bool include_last_offset=False) -> (Tensor, Tensor, Tensor, Tensor) -@Namespace("at") public static native @ByVal T_TensorTensorTensorTensor_T embedding_bag(@Const @ByRef Tensor weight, @Const @ByRef Tensor indices, @Const @ByRef Tensor offsets, @Cast("bool") boolean scale_grad_by_freq/*=false*/, @Cast("int64_t") long mode/*=0*/, @Cast("bool") boolean sparse/*=false*/, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional per_sample_weights, @Cast("bool") boolean include_last_offset/*=false*/); +@Namespace("at") public static native @ByVal T_TensorTensorTensorTensor_T embedding_bag(@Const @ByRef Tensor weight, @Const @ByRef Tensor indices, @Const @ByRef Tensor offsets, @Cast("bool") boolean scale_grad_by_freq/*=false*/, @Cast("int64_t") long mode/*=0*/, @Cast("bool") boolean sparse/*=false*/, @Const @ByRef(nullValue = "std::optional{}") TensorOptional per_sample_weights, @Cast("bool") boolean include_last_offset/*=false*/); @Namespace("at") public static native @ByVal T_TensorTensorTensorTensor_T embedding_bag(@Const @ByRef Tensor weight, @Const @ByRef Tensor indices, @Const @ByRef Tensor offsets); // aten::embedding_bag.padding_idx(Tensor weight, Tensor indices, Tensor offsets, bool scale_grad_by_freq, int mode, bool sparse, Tensor? per_sample_weights, bool include_last_offset, int? padding_idx) -> (Tensor, Tensor, Tensor, Tensor) @@ -27972,18 +27887,18 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::empty.names(int[] size, *, Dimname[]? names, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, MemoryFormat? memory_format=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor empty(@ByVal LongArrayRef size, @ByVal DimnameListOptional names, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByVal Tensor empty(@ByVal LongArrayRef size, @ByVal DimnameListOptional names, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByVal Tensor empty(@ByVal LongArrayRef size, @ByVal DimnameListOptional names); -@Namespace("at") public static native @ByVal Tensor empty(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal DimnameListOptional names, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByVal Tensor empty(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal DimnameListOptional names, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByVal Tensor empty(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal DimnameListOptional names); // aten::empty.names(int[] size, *, Dimname[]? names, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, MemoryFormat? 
memory_format=None) -> Tensor @Namespace("at") public static native @ByVal Tensor empty(@ByVal LongArrayRef size, @ByVal DimnameListOptional names, @ByVal ScalarTypeOptional dtype, @ByVal LayoutOptional layout, @ByVal DeviceOptional device, @ByVal BoolOptional pin_memory, @ByVal MemoryFormatOptional memory_format); @Namespace("at") public static native @ByVal Tensor empty(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal DimnameListOptional names, @ByVal ScalarTypeOptional dtype, @ByVal LayoutOptional layout, @ByVal DeviceOptional device, @ByVal BoolOptional pin_memory, @ByVal MemoryFormatOptional memory_format); // aten::empty.memory_format(SymInt[] size, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, MemoryFormat? memory_format=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor empty(@ByVal LongArrayRef size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByVal Tensor empty(@ByVal LongArrayRef size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByVal Tensor empty(@ByVal LongArrayRef size); -@Namespace("at") public static native @ByVal Tensor empty(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByVal Tensor empty(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByVal Tensor empty(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... size); @@ -27993,7 +27908,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::empty.memory_format(SymInt[] size, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, MemoryFormat? memory_format=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor empty_symint(@ByVal SymIntArrayRef size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByVal Tensor empty_symint(@ByVal SymIntArrayRef size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByVal Tensor empty_symint(@ByVal SymIntArrayRef size); @@ -28002,9 +27917,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::empty.out(SymInt[] size, *, MemoryFormat? memory_format=None, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor empty_out(@ByRef Tensor out, @ByVal LongArrayRef size, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByRef Tensor empty_out(@ByRef Tensor out, @ByVal LongArrayRef size, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByRef Tensor empty_out(@ByRef Tensor out, @ByVal LongArrayRef size); -@Namespace("at") public static native @ByRef Tensor empty_out(@ByRef Tensor out, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByRef Tensor empty_out(@ByRef Tensor out, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByRef Tensor empty_out(@ByRef Tensor out, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... size); @@ -28014,7 +27929,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::empty.out(SymInt[] size, *, MemoryFormat? memory_format=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor empty_symint_out(@ByRef Tensor out, @ByVal SymIntArrayRef size, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByRef Tensor empty_symint_out(@ByRef Tensor out, @ByVal SymIntArrayRef size, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByRef Tensor empty_symint_out(@ByRef Tensor out, @ByVal SymIntArrayRef size); @@ -28023,9 +27938,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::empty.names_out(int[] size, *, Dimname[]? names, MemoryFormat? memory_format=None, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor empty_out(@ByRef Tensor out, @ByVal LongArrayRef size, @ByVal DimnameListOptional names, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByRef Tensor empty_out(@ByRef Tensor out, @ByVal LongArrayRef size, @ByVal DimnameListOptional names, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByRef Tensor empty_out(@ByRef Tensor out, @ByVal LongArrayRef size, @ByVal DimnameListOptional names); -@Namespace("at") public static native @ByRef Tensor empty_out(@ByRef Tensor out, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal DimnameListOptional names, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByRef Tensor empty_out(@ByRef Tensor out, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal DimnameListOptional names, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByRef Tensor empty_out(@ByRef Tensor out, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal DimnameListOptional names); // aten::empty.names_out(int[] size, *, Dimname[]? names, MemoryFormat? memory_format=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor empty_outf(@ByVal LongArrayRef size, @ByVal DimnameListOptional names, @ByVal MemoryFormatOptional memory_format, @ByRef Tensor out); @@ -28059,13 +27974,13 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::empty_like(Tensor self, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, MemoryFormat? memory_format=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor empty_like(@Const @ByRef Tensor self, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByVal Tensor empty_like(@Const @ByRef Tensor self, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByVal Tensor empty_like(@Const @ByRef Tensor self); // aten::empty_like(Tensor self, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, MemoryFormat? memory_format=None) -> Tensor @Namespace("at") public static native @ByVal Tensor empty_like(@Const @ByRef Tensor self, @ByVal ScalarTypeOptional dtype, @ByVal LayoutOptional layout, @ByVal DeviceOptional device, @ByVal BoolOptional pin_memory, @ByVal MemoryFormatOptional memory_format); // aten::empty_like.out(Tensor self, *, MemoryFormat? memory_format=None, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor empty_like_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByRef Tensor empty_like_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByRef Tensor empty_like_out(@ByRef Tensor out, @Const @ByRef Tensor self); // aten::empty_like.out(Tensor self, *, MemoryFormat? memory_format=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor empty_like_outf(@Const @ByRef Tensor self, @ByVal MemoryFormatOptional memory_format, @ByRef Tensor out); @@ -28169,18 +28084,18 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::empty_quantized(int[] size, Tensor qtensor, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, MemoryFormat? memory_format=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor empty_quantized(@ByVal LongArrayRef size, @Const @ByRef Tensor qtensor, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByVal Tensor empty_quantized(@ByVal LongArrayRef size, @Const @ByRef Tensor qtensor, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByVal Tensor empty_quantized(@ByVal LongArrayRef size, @Const @ByRef Tensor qtensor); -@Namespace("at") public static native @ByVal Tensor empty_quantized(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @Const @ByRef Tensor qtensor, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByVal Tensor empty_quantized(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @Const @ByRef Tensor qtensor, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByVal Tensor empty_quantized(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @Const @ByRef Tensor qtensor); // aten::empty_quantized(int[] size, Tensor qtensor, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, MemoryFormat? memory_format=None) -> Tensor @Namespace("at") public static native @ByVal Tensor empty_quantized(@ByVal LongArrayRef size, @Const @ByRef Tensor qtensor, @ByVal ScalarTypeOptional dtype, @ByVal LayoutOptional layout, @ByVal DeviceOptional device, @ByVal BoolOptional pin_memory, @ByVal MemoryFormatOptional memory_format); @Namespace("at") public static native @ByVal Tensor empty_quantized(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @Const @ByRef Tensor qtensor, @ByVal ScalarTypeOptional dtype, @ByVal LayoutOptional layout, @ByVal DeviceOptional device, @ByVal BoolOptional pin_memory, @ByVal MemoryFormatOptional memory_format); // aten::empty_quantized.out(int[] size, Tensor qtensor, *, MemoryFormat? 
memory_format=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor empty_quantized_out(@ByRef Tensor out, @ByVal LongArrayRef size, @Const @ByRef Tensor qtensor, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByRef Tensor empty_quantized_out(@ByRef Tensor out, @ByVal LongArrayRef size, @Const @ByRef Tensor qtensor, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByRef Tensor empty_quantized_out(@ByRef Tensor out, @ByVal LongArrayRef size, @Const @ByRef Tensor qtensor); -@Namespace("at") public static native @ByRef Tensor empty_quantized_out(@ByRef Tensor out, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @Const @ByRef Tensor qtensor, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByRef Tensor empty_quantized_out(@ByRef Tensor out, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @Const @ByRef Tensor qtensor, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByRef Tensor empty_quantized_out(@ByRef Tensor out, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @Const @ByRef Tensor qtensor); // aten::empty_quantized.out(int[] size, Tensor qtensor, *, MemoryFormat? memory_format=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor empty_quantized_outf(@ByVal LongArrayRef size, @Const @ByRef Tensor qtensor, @ByVal MemoryFormatOptional memory_format, @ByRef Tensor out); @@ -28694,13 +28609,13 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::exponential.out(Tensor self, float lambd=1, *, Generator? generator=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor exponential_out(@ByRef Tensor out, @Const @ByRef Tensor self, double lambd/*=1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByRef Tensor exponential_out(@ByRef Tensor out, @Const @ByRef Tensor self, double lambd/*=1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); @Namespace("at") public static native @ByRef Tensor exponential_out(@ByRef Tensor out, @Const @ByRef Tensor self); // aten::exponential.out(Tensor self, float lambd=1, *, Generator? generator=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor exponential_outf(@Const @ByRef Tensor self, double lambd, @ByVal GeneratorOptional generator, @ByRef Tensor out); // aten::exponential(Tensor self, float lambd=1, *, Generator? generator=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor exponential(@Const @ByRef Tensor self, double lambd/*=1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByVal Tensor exponential(@Const @ByRef Tensor self, double lambd/*=1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); @Namespace("at") public static native @ByVal Tensor exponential(@Const @ByRef Tensor self); @@ -29298,17 +29213,17 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::fft_fft(Tensor self, SymInt? 
n=None, int dim=-1, str? norm=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor fft_fft(@Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_fft(@Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByVal Tensor fft_fft(@Const @ByRef Tensor self); // aten::fft_fft(Tensor self, SymInt? n=None, int dim=-1, str? norm=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor fft_fft_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_fft_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByVal Tensor fft_fft_symint(@Const @ByRef Tensor self); // aten::fft_fft.out(Tensor self, SymInt? n=None, int dim=-1, str? norm=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor fft_fft_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_fft_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByRef Tensor fft_fft_out(@ByRef Tensor out, @Const @ByRef Tensor self); @@ -29317,7 +29232,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::fft_fft.out(Tensor self, SymInt? n=None, int dim=-1, str? norm=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor fft_fft_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_fft_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByRef Tensor fft_fft_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self); @@ -29353,25 +29268,25 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::fft_fft2(Tensor self, SymInt[1]? s=None, int[1] dim=[-2,-1], str? 
norm=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor fft_fft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_fft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByVal Tensor fft_fft2(@Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal Tensor fft_fft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); -@Namespace("at") public static native @ByVal Tensor fft_fft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); -@Namespace("at") public static native @ByVal Tensor fft_fft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_fft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_fft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_fft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_fft2(Tensor self, SymInt[1]? s=None, int[1] dim=[-2,-1], str? 
norm=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor fft_fft2_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_fft2_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByVal Tensor fft_fft2_symint(@Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal Tensor fft_fft2_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_fft2_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_fft2.out(Tensor self, SymInt[1]? s=None, int[1] dim=[-2,-1], str? norm=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor fft_fft2_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_fft2_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByRef Tensor fft_fft2_out(@ByRef Tensor out, @Const @ByRef Tensor self); -@Namespace("at") public static native @ByRef Tensor fft_fft2_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); -@Namespace("at") public static native @ByRef Tensor fft_fft2_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); -@Namespace("at") public static native @ByRef Tensor fft_fft2_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", 
"std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_fft2_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_fft2_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_fft2_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_fft2.out(Tensor self, SymInt[1]? s=None, int[1] dim=[-2,-1], str? norm=None, *, Tensor(a!) out) -> Tensor(a!) @@ -29382,9 +29297,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::fft_fft2.out(Tensor self, SymInt[1]? s=None, int[1] dim=[-2,-1], str? norm=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor fft_fft2_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_fft2_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByRef Tensor fft_fft2_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self); -@Namespace("at") public static native @ByRef Tensor fft_fft2_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_fft2_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_fft2.out(Tensor self, SymInt[1]? s=None, int[1] dim=[-2,-1], str? norm=None, *, Tensor(a!) 
out) -> Tensor(a!) @@ -29459,21 +29374,21 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::fft_fftn(Tensor self, SymInt[1]? s=None, int[1]? dim=None, str? norm=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor fft_fftn(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_fftn(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByVal Tensor fft_fftn(@Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal Tensor fft_fftn(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_fftn(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_fftn(Tensor self, SymInt[1]? s=None, int[1]? dim=None, str? 
norm=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor fft_fftn_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_fftn_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByVal Tensor fft_fftn_symint(@Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal Tensor fft_fftn_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_fftn_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_fftn.out(Tensor self, SymInt[1]? s=None, int[1]? dim=None, str? norm=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor fft_fftn_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_fftn_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByRef Tensor fft_fftn_out(@ByRef Tensor out, @Const @ByRef Tensor self); -@Namespace("at") public static native @ByRef Tensor fft_fftn_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_fftn_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_fftn.out(Tensor self, SymInt[1]? 
s=None, int[1]? dim=None, str? norm=None, *, Tensor(a!) out) -> Tensor(a!) @@ -29482,9 +29397,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::fft_fftn.out(Tensor self, SymInt[1]? s=None, int[1]? dim=None, str? norm=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor fft_fftn_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_fftn_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByRef Tensor fft_fftn_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self); -@Namespace("at") public static native @ByRef Tensor fft_fftn_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_fftn_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_fftn.out(Tensor self, SymInt[1]? s=None, int[1]? dim=None, str? norm=None, *, Tensor(a!) out) -> Tensor(a!) @@ -29520,9 +29435,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::fft_fftshift(Tensor self, int[1]? dim=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor fft_fftshift(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim); +@Namespace("at") public static native @ByVal Tensor fft_fftshift(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim); @Namespace("at") public static native @ByVal Tensor fft_fftshift(@Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal Tensor fft_fftshift(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... dim); +@Namespace("at") public static native @ByVal Tensor fft_fftshift(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... dim); @@ -29552,17 +29467,17 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::fft_hfft(Tensor self, SymInt? n=None, int dim=-1, str? 
norm=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor fft_hfft(@Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_hfft(@Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByVal Tensor fft_hfft(@Const @ByRef Tensor self); // aten::fft_hfft(Tensor self, SymInt? n=None, int dim=-1, str? norm=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor fft_hfft_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_hfft_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByVal Tensor fft_hfft_symint(@Const @ByRef Tensor self); // aten::fft_hfft.out(Tensor self, SymInt? n=None, int dim=-1, str? norm=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor fft_hfft_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_hfft_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByRef Tensor fft_hfft_out(@ByRef Tensor out, @Const @ByRef Tensor self); @@ -29571,7 +29486,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::fft_hfft.out(Tensor self, SymInt? n=None, int dim=-1, str? norm=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor fft_hfft_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_hfft_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByRef Tensor fft_hfft_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self); @@ -29607,25 +29522,25 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::fft_hfft2(Tensor self, SymInt[1]? s=None, int[1] dim=[-2,-1], str? 
norm=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor fft_hfft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_hfft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByVal Tensor fft_hfft2(@Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal Tensor fft_hfft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); -@Namespace("at") public static native @ByVal Tensor fft_hfft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); -@Namespace("at") public static native @ByVal Tensor fft_hfft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_hfft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_hfft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_hfft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_hfft2(Tensor self, SymInt[1]? s=None, int[1] dim=[-2,-1], str? 
norm=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor fft_hfft2_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_hfft2_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByVal Tensor fft_hfft2_symint(@Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal Tensor fft_hfft2_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_hfft2_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_hfft2.out(Tensor self, SymInt[1]? s=None, int[1] dim=[-2,-1], str? norm=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @Const @ByRef Tensor fft_hfft2_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @Const @ByRef Tensor fft_hfft2_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @Const @ByRef Tensor fft_hfft2_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self); -@Namespace("at") public static native @Const @ByRef Tensor fft_hfft2_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); -@Namespace("at") public static native @Const @ByRef Tensor fft_hfft2_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); -@Namespace("at") public static native @Const @ByRef Tensor fft_hfft2_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, 
@ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @Const @ByRef Tensor fft_hfft2_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @Const @ByRef Tensor fft_hfft2_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @Const @ByRef Tensor fft_hfft2_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_hfft2.out(Tensor self, SymInt[1]? s=None, int[1] dim=[-2,-1], str? norm=None, *, Tensor(a!) out) -> Tensor(a!) @@ -29636,9 +29551,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::fft_hfft2.out(Tensor self, SymInt[1]? s=None, int[1] dim=[-2,-1], str? norm=None, *, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @Const @ByRef Tensor fft_hfft2_symint_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @Const @ByRef Tensor fft_hfft2_symint_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @Const @ByRef Tensor fft_hfft2_symint_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self); -@Namespace("at") public static native @Const @ByRef Tensor fft_hfft2_symint_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @Const @ByRef Tensor fft_hfft2_symint_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_hfft2.out(Tensor self, SymInt[1]? s=None, int[1] dim=[-2,-1], str? norm=None, *, Tensor(a!) out) -> Tensor(a!) @@ -29674,21 +29589,21 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::fft_hfftn(Tensor self, SymInt[1]? s=None, int[1]? dim=None, str? 
norm=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor fft_hfftn(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_hfftn(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByVal Tensor fft_hfftn(@Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal Tensor fft_hfftn(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_hfftn(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_hfftn(Tensor self, SymInt[1]? s=None, int[1]? dim=None, str? norm=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor fft_hfftn_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_hfftn_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByVal Tensor fft_hfftn_symint(@Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal Tensor fft_hfftn_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_hfftn_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_hfftn.out(Tensor self, SymInt[1]? s=None, int[1]? dim=None, str? norm=None, *, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @Const @ByRef Tensor fft_hfftn_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @Const @ByRef Tensor fft_hfftn_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @Const @ByRef Tensor fft_hfftn_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self); -@Namespace("at") public static native @Const @ByRef Tensor fft_hfftn_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @Const @ByRef Tensor fft_hfftn_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_hfftn.out(Tensor self, SymInt[1]? s=None, int[1]? dim=None, str? norm=None, *, Tensor(a!) out) -> Tensor(a!) @@ -29697,9 +29612,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::fft_hfftn.out(Tensor self, SymInt[1]? s=None, int[1]? dim=None, str? norm=None, *, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @Const @ByRef Tensor fft_hfftn_symint_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @Const @ByRef Tensor fft_hfftn_symint_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @Const @ByRef Tensor fft_hfftn_symint_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self); -@Namespace("at") public static native @Const @ByRef Tensor fft_hfftn_symint_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @Const @ByRef Tensor fft_hfftn_symint_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_hfftn.out(Tensor self, SymInt[1]? s=None, int[1]? dim=None, str? norm=None, *, Tensor(a!) out) -> Tensor(a!) @@ -29735,17 +29650,17 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::fft_ifft(Tensor self, SymInt? n=None, int dim=-1, str? norm=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor fft_ifft(@Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_ifft(@Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByVal Tensor fft_ifft(@Const @ByRef Tensor self); // aten::fft_ifft(Tensor self, SymInt? n=None, int dim=-1, str? norm=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor fft_ifft_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_ifft_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByVal Tensor fft_ifft_symint(@Const @ByRef Tensor self); // aten::fft_ifft.out(Tensor self, SymInt? n=None, int dim=-1, str? norm=None, *, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor fft_ifft_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_ifft_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByRef Tensor fft_ifft_out(@ByRef Tensor out, @Const @ByRef Tensor self); @@ -29754,7 +29669,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::fft_ifft.out(Tensor self, SymInt? n=None, int dim=-1, str? norm=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor fft_ifft_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_ifft_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByRef Tensor fft_ifft_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self); @@ -29790,25 +29705,25 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::fft_ifft2(Tensor self, SymInt[1]? s=None, int[1] dim=[-2,-1], str? norm=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor fft_ifft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_ifft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByVal Tensor fft_ifft2(@Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal Tensor fft_ifft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); -@Namespace("at") public static native @ByVal Tensor fft_ifft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); -@Namespace("at") public static native @ByVal Tensor fft_ifft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector 
long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_ifft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_ifft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_ifft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_ifft2(Tensor self, SymInt[1]? s=None, int[1] dim=[-2,-1], str? norm=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor fft_ifft2_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_ifft2_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByVal Tensor fft_ifft2_symint(@Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal Tensor fft_ifft2_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_ifft2_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_ifft2.out(Tensor self, SymInt[1]? s=None, int[1] dim=[-2,-1], str? norm=None, *, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor fft_ifft2_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_ifft2_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByRef Tensor fft_ifft2_out(@ByRef Tensor out, @Const @ByRef Tensor self); -@Namespace("at") public static native @ByRef Tensor fft_ifft2_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); -@Namespace("at") public static native @ByRef Tensor fft_ifft2_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); -@Namespace("at") public static native @ByRef Tensor fft_ifft2_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_ifft2_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_ifft2_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_ifft2_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_ifft2.out(Tensor self, SymInt[1]? s=None, int[1] dim=[-2,-1], str? norm=None, *, Tensor(a!) out) -> Tensor(a!) 
@@ -29819,9 +29734,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch {
// aten::fft_ifft2.out(Tensor self, SymInt[1]? s=None, int[1] dim=[-2,-1], str? norm=None, *, Tensor(a!) out) -> Tensor(a!)
-@Namespace("at") public static native @ByRef Tensor fft_ifft2_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+@Namespace("at") public static native @ByRef Tensor fft_ifft2_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm);
@Namespace("at") public static native @ByRef Tensor fft_ifft2_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self);
-@Namespace("at") public static native @ByRef Tensor fft_ifft2_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+@Namespace("at") public static native @ByRef Tensor fft_ifft2_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm);
// aten::fft_ifft2.out(Tensor self, SymInt[1]? s=None, int[1] dim=[-2,-1], str? norm=None, *, Tensor(a!) out) -> Tensor(a!)
@@ -29857,21 +29772,21 @@ public class torch extends org.bytedeco.pytorch.presets.torch {
// aten::fft_ifftn(Tensor self, SymInt[1]? s=None, int[1]? dim=None, str? norm=None) -> Tensor
-@Namespace("at") public static native @ByVal Tensor fft_ifftn(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+@Namespace("at") public static native @ByVal Tensor fft_ifftn(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm);
@Namespace("at") public static native @ByVal Tensor fft_ifftn(@Const @ByRef Tensor self);
-@Namespace("at") public static native @ByVal Tensor fft_ifftn(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+@Namespace("at") public static native @ByVal Tensor fft_ifftn(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm);
// aten::fft_ifftn(Tensor self, SymInt[1]? s=None, int[1]? dim=None, str? norm=None) -> Tensor
-@Namespace("at") public static native @ByVal Tensor fft_ifftn_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+@Namespace("at") public static native @ByVal Tensor fft_ifftn_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm);
@Namespace("at") public static native @ByVal Tensor fft_ifftn_symint(@Const @ByRef Tensor self);
-@Namespace("at") public static native @ByVal Tensor fft_ifftn_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+@Namespace("at") public static native @ByVal Tensor fft_ifftn_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm);
// aten::fft_ifftn.out(Tensor self, SymInt[1]? s=None, int[1]? dim=None, str? norm=None, *, Tensor(a!) out) -> Tensor(a!)
-@Namespace("at") public static native @ByRef Tensor fft_ifftn_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_ifftn_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByRef Tensor fft_ifftn_out(@ByRef Tensor out, @Const @ByRef Tensor self); -@Namespace("at") public static native @ByRef Tensor fft_ifftn_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_ifftn_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_ifftn.out(Tensor self, SymInt[1]? s=None, int[1]? dim=None, str? norm=None, *, Tensor(a!) out) -> Tensor(a!) @@ -29880,9 +29795,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::fft_ifftn.out(Tensor self, SymInt[1]? s=None, int[1]? dim=None, str? norm=None, *, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor fft_ifftn_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_ifftn_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByRef Tensor fft_ifftn_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self); -@Namespace("at") public static native @ByRef Tensor fft_ifftn_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_ifftn_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_ifftn.out(Tensor self, SymInt[1]? s=None, int[1]? dim=None, str? norm=None, *, Tensor(a!) out) -> Tensor(a!) @@ -29918,9 +29833,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::fft_ifftshift(Tensor self, int[1]? dim=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor fft_ifftshift(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim); +@Namespace("at") public static native @ByVal Tensor fft_ifftshift(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim); @Namespace("at") public static native @ByVal Tensor fft_ifftshift(@Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal Tensor fft_ifftshift(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... dim); +@Namespace("at") public static native @ByVal Tensor fft_ifftshift(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... dim); @@ -29950,17 +29865,17 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::fft_ihfft(Tensor self, SymInt? n=None, int dim=-1, str? 
norm=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor fft_ihfft(@Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_ihfft(@Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByVal Tensor fft_ihfft(@Const @ByRef Tensor self); // aten::fft_ihfft(Tensor self, SymInt? n=None, int dim=-1, str? norm=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor fft_ihfft_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_ihfft_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByVal Tensor fft_ihfft_symint(@Const @ByRef Tensor self); // aten::fft_ihfft.out(Tensor self, SymInt? n=None, int dim=-1, str? norm=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor fft_ihfft_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_ihfft_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByRef Tensor fft_ihfft_out(@ByRef Tensor out, @Const @ByRef Tensor self); @@ -29969,7 +29884,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::fft_ihfft.out(Tensor self, SymInt? n=None, int dim=-1, str? norm=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor fft_ihfft_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_ihfft_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByRef Tensor fft_ihfft_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self); @@ -30005,25 +29920,25 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::fft_ihfft2(Tensor self, SymInt[1]? s=None, int[1] dim=[-2,-1], str? 
norm=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor fft_ihfft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_ihfft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByVal Tensor fft_ihfft2(@Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal Tensor fft_ihfft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); -@Namespace("at") public static native @ByVal Tensor fft_ihfft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); -@Namespace("at") public static native @ByVal Tensor fft_ihfft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_ihfft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_ihfft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_ihfft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_ihfft2(Tensor self, SymInt[1]? s=None, int[1] dim=[-2,-1], str? 
norm=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor fft_ihfft2_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_ihfft2_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByVal Tensor fft_ihfft2_symint(@Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal Tensor fft_ihfft2_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_ihfft2_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_ihfft2.out(Tensor self, SymInt[1]? s=None, int[1] dim=[-2,-1], str? norm=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @Const @ByRef Tensor fft_ihfft2_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @Const @ByRef Tensor fft_ihfft2_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @Const @ByRef Tensor fft_ihfft2_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self); -@Namespace("at") public static native @Const @ByRef Tensor fft_ihfft2_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); -@Namespace("at") public static native @Const @ByRef Tensor fft_ihfft2_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); -@Namespace("at") public static native @Const @ByRef Tensor fft_ihfft2_out(@Const @ByRef Tensor out, @Const @ByRef 
Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @Const @ByRef Tensor fft_ihfft2_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @Const @ByRef Tensor fft_ihfft2_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @Const @ByRef Tensor fft_ihfft2_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_ihfft2.out(Tensor self, SymInt[1]? s=None, int[1] dim=[-2,-1], str? norm=None, *, Tensor(a!) out) -> Tensor(a!) @@ -30034,9 +29949,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::fft_ihfft2.out(Tensor self, SymInt[1]? s=None, int[1] dim=[-2,-1], str? norm=None, *, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @Const @ByRef Tensor fft_ihfft2_symint_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @Const @ByRef Tensor fft_ihfft2_symint_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @Const @ByRef Tensor fft_ihfft2_symint_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self); -@Namespace("at") public static native @Const @ByRef Tensor fft_ihfft2_symint_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @Const @ByRef Tensor fft_ihfft2_symint_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_ihfft2.out(Tensor self, SymInt[1]? s=None, int[1] dim=[-2,-1], str? norm=None, *, Tensor(a!) out) -> Tensor(a!) @@ -30072,21 +29987,21 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::fft_ihfftn(Tensor self, SymInt[1]? s=None, int[1]? dim=None, str? 
norm=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor fft_ihfftn(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_ihfftn(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByVal Tensor fft_ihfftn(@Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal Tensor fft_ihfftn(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_ihfftn(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_ihfftn(Tensor self, SymInt[1]? s=None, int[1]? dim=None, str? norm=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor fft_ihfftn_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_ihfftn_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByVal Tensor fft_ihfftn_symint(@Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal Tensor fft_ihfftn_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_ihfftn_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_ihfftn.out(Tensor self, SymInt[1]? s=None, int[1]? dim=None, str? norm=None, *, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @Const @ByRef Tensor fft_ihfftn_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @Const @ByRef Tensor fft_ihfftn_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @Const @ByRef Tensor fft_ihfftn_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self); -@Namespace("at") public static native @Const @ByRef Tensor fft_ihfftn_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @Const @ByRef Tensor fft_ihfftn_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_ihfftn.out(Tensor self, SymInt[1]? s=None, int[1]? dim=None, str? norm=None, *, Tensor(a!) out) -> Tensor(a!) @@ -30095,9 +30010,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::fft_ihfftn.out(Tensor self, SymInt[1]? s=None, int[1]? dim=None, str? norm=None, *, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @Const @ByRef Tensor fft_ihfftn_symint_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @Const @ByRef Tensor fft_ihfftn_symint_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @Const @ByRef Tensor fft_ihfftn_symint_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self); -@Namespace("at") public static native @Const @ByRef Tensor fft_ihfftn_symint_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @Const @ByRef Tensor fft_ihfftn_symint_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_ihfftn.out(Tensor self, SymInt[1]? s=None, int[1]? dim=None, str? norm=None, *, Tensor(a!) out) -> Tensor(a!) @@ -30133,17 +30048,17 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::fft_irfft(Tensor self, SymInt? n=None, int dim=-1, str? norm=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor fft_irfft(@Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_irfft(@Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByVal Tensor fft_irfft(@Const @ByRef Tensor self); // aten::fft_irfft(Tensor self, SymInt? n=None, int dim=-1, str? norm=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor fft_irfft_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_irfft_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByVal Tensor fft_irfft_symint(@Const @ByRef Tensor self); // aten::fft_irfft.out(Tensor self, SymInt? n=None, int dim=-1, str? norm=None, *, Tensor(a!) 
out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor fft_irfft_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_irfft_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByRef Tensor fft_irfft_out(@ByRef Tensor out, @Const @ByRef Tensor self); @@ -30152,7 +30067,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::fft_irfft.out(Tensor self, SymInt? n=None, int dim=-1, str? norm=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor fft_irfft_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_irfft_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByRef Tensor fft_irfft_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self); @@ -30188,25 +30103,25 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::fft_irfft2(Tensor self, SymInt[1]? s=None, int[1] dim=[-2,-1], str? 
norm=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor fft_irfft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_irfft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByVal Tensor fft_irfft2(@Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal Tensor fft_irfft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); -@Namespace("at") public static native @ByVal Tensor fft_irfft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); -@Namespace("at") public static native @ByVal Tensor fft_irfft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_irfft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_irfft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_irfft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_irfft2(Tensor self, SymInt[1]? s=None, int[1] dim=[-2,-1], str? 
norm=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor fft_irfft2_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_irfft2_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByVal Tensor fft_irfft2_symint(@Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal Tensor fft_irfft2_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_irfft2_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_irfft2.out(Tensor self, SymInt[1]? s=None, int[1] dim=[-2,-1], str? norm=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor fft_irfft2_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_irfft2_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByRef Tensor fft_irfft2_out(@ByRef Tensor out, @Const @ByRef Tensor self); -@Namespace("at") public static native @ByRef Tensor fft_irfft2_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); -@Namespace("at") public static native @ByRef Tensor fft_irfft2_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); -@Namespace("at") public static native @ByRef Tensor fft_irfft2_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") 
@Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_irfft2_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_irfft2_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_irfft2_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_irfft2.out(Tensor self, SymInt[1]? s=None, int[1] dim=[-2,-1], str? norm=None, *, Tensor(a!) out) -> Tensor(a!) @@ -30217,9 +30132,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::fft_irfft2.out(Tensor self, SymInt[1]? s=None, int[1] dim=[-2,-1], str? norm=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor fft_irfft2_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_irfft2_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByRef Tensor fft_irfft2_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self); -@Namespace("at") public static native @ByRef Tensor fft_irfft2_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_irfft2_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_irfft2.out(Tensor self, SymInt[1]? 
s=None, int[1] dim=[-2,-1], str? norm=None, *, Tensor(a!) out) -> Tensor(a!) @@ -30255,21 +30170,21 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::fft_irfftn(Tensor self, SymInt[1]? s=None, int[1]? dim=None, str? norm=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor fft_irfftn(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_irfftn(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByVal Tensor fft_irfftn(@Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal Tensor fft_irfftn(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_irfftn(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_irfftn(Tensor self, SymInt[1]? s=None, int[1]? dim=None, str? 
norm=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor fft_irfftn_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_irfftn_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByVal Tensor fft_irfftn_symint(@Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal Tensor fft_irfftn_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_irfftn_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_irfftn.out(Tensor self, SymInt[1]? s=None, int[1]? dim=None, str? norm=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor fft_irfftn_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_irfftn_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByRef Tensor fft_irfftn_out(@ByRef Tensor out, @Const @ByRef Tensor self); -@Namespace("at") public static native @ByRef Tensor fft_irfftn_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_irfftn_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_irfftn.out(Tensor self, 
SymInt[1]? s=None, int[1]? dim=None, str? norm=None, *, Tensor(a!) out) -> Tensor(a!) @@ -30278,9 +30193,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::fft_irfftn.out(Tensor self, SymInt[1]? s=None, int[1]? dim=None, str? norm=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor fft_irfftn_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_irfftn_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByRef Tensor fft_irfftn_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self); -@Namespace("at") public static native @ByRef Tensor fft_irfftn_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_irfftn_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_irfftn.out(Tensor self, SymInt[1]? s=None, int[1]? dim=None, str? norm=None, *, Tensor(a!) out) -> Tensor(a!) @@ -30316,17 +30231,17 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::fft_rfft(Tensor self, SymInt? n=None, int dim=-1, str? norm=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor fft_rfft(@Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_rfft(@Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByVal Tensor fft_rfft(@Const @ByRef Tensor self); // aten::fft_rfft(Tensor self, SymInt? n=None, int dim=-1, str? 
norm=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor fft_rfft_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_rfft_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByVal Tensor fft_rfft_symint(@Const @ByRef Tensor self); // aten::fft_rfft.out(Tensor self, SymInt? n=None, int dim=-1, str? norm=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor fft_rfft_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_rfft_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByRef Tensor fft_rfft_out(@ByRef Tensor out, @Const @ByRef Tensor self); @@ -30335,7 +30250,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::fft_rfft.out(Tensor self, SymInt? n=None, int dim=-1, str? norm=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor fft_rfft_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_rfft_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional n, @Cast("int64_t") long dim/*=-1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByRef Tensor fft_rfft_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self); @@ -30371,25 +30286,25 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::fft_rfft2(Tensor self, SymInt[1]? s=None, int[1] dim=[-2,-1], str? 
norm=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor fft_rfft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_rfft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByVal Tensor fft_rfft2(@Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal Tensor fft_rfft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); -@Namespace("at") public static native @ByVal Tensor fft_rfft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); -@Namespace("at") public static native @ByVal Tensor fft_rfft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_rfft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_rfft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_rfft2(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_rfft2(Tensor self, SymInt[1]? s=None, int[1] dim=[-2,-1], str? 
norm=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor fft_rfft2_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_rfft2_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByVal Tensor fft_rfft2_symint(@Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal Tensor fft_rfft2_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_rfft2_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_rfft2.out(Tensor self, SymInt[1]? s=None, int[1] dim=[-2,-1], str? norm=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor fft_rfft2_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_rfft2_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByRef Tensor fft_rfft2_out(@ByRef Tensor out, @Const @ByRef Tensor self); -@Namespace("at") public static native @ByRef Tensor fft_rfft2_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); -@Namespace("at") public static native @ByRef Tensor fft_rfft2_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); -@Namespace("at") public static native @ByRef Tensor fft_rfft2_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", 
"c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_rfft2_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_rfft2_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_rfft2_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_rfft2.out(Tensor self, SymInt[1]? s=None, int[1] dim=[-2,-1], str? norm=None, *, Tensor(a!) out) -> Tensor(a!) @@ -30400,9 +30315,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::fft_rfft2.out(Tensor self, SymInt[1]? s=None, int[1] dim=[-2,-1], str? norm=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor fft_rfft2_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_rfft2_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByRef Tensor fft_rfft2_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self); -@Namespace("at") public static native @ByRef Tensor fft_rfft2_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_rfft2_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_rfft2.out(Tensor self, SymInt[1]? s=None, int[1] dim=[-2,-1], str? 
norm=None, *, Tensor(a!) out) -> Tensor(a!) @@ -30477,21 +30392,21 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::fft_rfftn(Tensor self, SymInt[1]? s=None, int[1]? dim=None, str? norm=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor fft_rfftn(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_rfftn(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByVal Tensor fft_rfftn(@Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal Tensor fft_rfftn(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_rfftn(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_rfftn(Tensor self, SymInt[1]? s=None, int[1]? dim=None, str? 
norm=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor fft_rfftn_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_rfftn_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByVal Tensor fft_rfftn_symint(@Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal Tensor fft_rfftn_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByVal Tensor fft_rfftn_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_rfftn.out(Tensor self, SymInt[1]? s=None, int[1]? dim=None, str? norm=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor fft_rfftn_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_rfftn_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByRef Tensor fft_rfftn_out(@ByRef Tensor out, @Const @ByRef Tensor self); -@Namespace("at") public static native @ByRef Tensor fft_rfftn_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_rfftn_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_rfftn.out(Tensor self, SymInt[1]? 
s=None, int[1]? dim=None, str? norm=None, *, Tensor(a!) out) -> Tensor(a!) @@ -30500,9 +30415,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::fft_rfftn.out(Tensor self, SymInt[1]? s=None, int[1]? dim=None, str? norm=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor fft_rfftn_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_rfftn_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); @Namespace("at") public static native @ByRef Tensor fft_rfftn_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self); -@Namespace("at") public static native @ByRef Tensor fft_rfftn_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm); +@Namespace("at") public static native @ByRef Tensor fft_rfftn_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional s, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional norm); // aten::fft_rfftn.out(Tensor self, SymInt[1]? s=None, int[1]? dim=None, str? norm=None, *, Tensor(a!) out) -> Tensor(a!) 
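The fft_rfft* hunks above only swap the C++ default-value expressions that JavaCPP substitutes when a Java caller passes null for an optional parameter (PyTorch 2.4 moves c10::optional/c10::nullopt to std::optional/std::nullopt), so Java call sites compile and behave as before. A minimal usage sketch follows; it is illustrative only, not part of the generated file, and assumes the overloads shown above plus the usual generated randn(long...) factory overload:

    import org.bytedeco.pytorch.*;
    import org.bytedeco.pytorch.global.torch;

    public class RfftSketch {
        public static void main(String[] args) {
            // Hypothetical 8x16 input; assumes the generated randn(long...) overload exists.
            Tensor x = torch.randn(8, 16);

            // All optional arguments left at their defaults (s=None, dim={-2,-1}, norm=None);
            // those defaults now expand to std::optional(::std::nullopt) on the native side.
            Tensor freq = torch.fft_rfft2(x);

            // Passing null for a parameter annotated with @ByVal(nullValue = ...) makes
            // JavaCPP substitute that same default expression, so existing callers need no changes.
            Tensor padded = torch.fft_rfft2(x, new long[]{16, 16}, new long[]{-2, -1},
                                            (StringViewOptional) null);

            System.out.println(freq.size(0) + " x " + freq.size(1));
        }
    }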
@@ -31331,7 +31246,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @ByVal LongArrayRef strides, PointerConsumer deleter, @Const @ByRef(nullValue = "c10::TensorOptions{}") TensorOptions options, - @Const @ByVal(nullValue = "c10::optional(c10::nullopt)") DeviceOptional target_device); + @Const @ByVal(nullValue = "std::optional(c10::nullopt)") DeviceOptional target_device); @Namespace("at") public static native @ByVal Tensor from_blob( Pointer data, @ByVal LongArrayRef sizes, @@ -31343,7 +31258,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] strides, PointerConsumer deleter, @Const @ByRef(nullValue = "c10::TensorOptions{}") TensorOptions options, - @Const @ByVal(nullValue = "c10::optional(c10::nullopt)") DeviceOptional target_device); + @Const @ByVal(nullValue = "std::optional(c10::nullopt)") DeviceOptional target_device); @Namespace("at") public static native @ByVal Tensor from_blob( Pointer data, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] sizes, @@ -31357,7 +31272,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @Cast("int64_t") long storage_offset, PointerConsumer deleter, @Const @ByRef(nullValue = "c10::TensorOptions{}") TensorOptions options, - @Const @ByVal(nullValue = "c10::optional(c10::nullopt)") DeviceOptional target_device); + @Const @ByVal(nullValue = "std::optional(c10::nullopt)") DeviceOptional target_device); @Namespace("at") public static native @ByVal Tensor from_blob( Pointer data, @ByVal LongArrayRef sizes, @@ -31371,7 +31286,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @Cast("int64_t") long storage_offset, PointerConsumer deleter, @Const @ByRef(nullValue = "c10::TensorOptions{}") TensorOptions options, - @Const @ByVal(nullValue = "c10::optional(c10::nullopt)") DeviceOptional target_device); + @Const @ByVal(nullValue = "std::optional(c10::nullopt)") DeviceOptional target_device); @Namespace("at") public static native @ByVal Tensor from_blob( Pointer data, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] sizes, @@ -31384,7 +31299,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @ByVal LongArrayRef sizes, PointerConsumer deleter, @Const @ByRef(nullValue = "c10::TensorOptions{}") TensorOptions options, - @Const @ByVal(nullValue = "c10::optional(c10::nullopt)") DeviceOptional target_device); + @Const @ByVal(nullValue = "std::optional(c10::nullopt)") DeviceOptional target_device); @Namespace("at") public static native @ByVal Tensor from_blob( Pointer data, @ByVal LongArrayRef sizes, @@ -31394,7 +31309,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] sizes, PointerConsumer deleter, @Const @ByRef(nullValue = "c10::TensorOptions{}") TensorOptions options, - @Const @ByVal(nullValue = "c10::optional(c10::nullopt)") DeviceOptional target_device); + @Const @ByVal(nullValue = "std::optional(c10::nullopt)") DeviceOptional target_device); @Namespace("at") public static native @ByVal Tensor from_blob( Pointer data, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] sizes, @@ -31462,18 +31377,18 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::from_file(str filename, bool? shared=None, int? size=0, *, ScalarType? dtype=None, Layout? 
layout=None, Device? device=None, bool? pin_memory=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor from_file(@StringView BytePointer filename, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional shared, @ByVal(nullValue = "c10::optional(0)") LongOptional size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options); +@Namespace("at") public static native @ByVal Tensor from_file(@StringView BytePointer filename, @ByVal(nullValue = "std::optional(::std::nullopt)") BoolOptional shared, @ByVal(nullValue = "std::optional(0)") LongOptional size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options); @Namespace("at") public static native @ByVal Tensor from_file(@StringView BytePointer filename); -@Namespace("at") public static native @ByVal Tensor from_file(@StringView String filename, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional shared, @ByVal(nullValue = "c10::optional(0)") LongOptional size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options); +@Namespace("at") public static native @ByVal Tensor from_file(@StringView String filename, @ByVal(nullValue = "std::optional(::std::nullopt)") BoolOptional shared, @ByVal(nullValue = "std::optional(0)") LongOptional size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options); @Namespace("at") public static native @ByVal Tensor from_file(@StringView String filename); // aten::from_file(str filename, bool? shared=None, int? size=0, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor @Namespace("at") public static native @ByVal Tensor from_file(@StringView BytePointer filename, @ByVal BoolOptional shared, @ByVal LongOptional size, @ByVal ScalarTypeOptional dtype, @ByVal LayoutOptional layout, @ByVal DeviceOptional device, @ByVal BoolOptional pin_memory); @Namespace("at") public static native @ByVal Tensor from_file(@StringView String filename, @ByVal BoolOptional shared, @ByVal LongOptional size, @ByVal ScalarTypeOptional dtype, @ByVal LayoutOptional layout, @ByVal DeviceOptional device, @ByVal BoolOptional pin_memory); // aten::from_file.out(str filename, bool? shared=None, int? size=0, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor from_file_out(@ByRef Tensor out, @StringView BytePointer filename, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional shared, @ByVal(nullValue = "c10::optional(0)") LongOptional size); +@Namespace("at") public static native @ByRef Tensor from_file_out(@ByRef Tensor out, @StringView BytePointer filename, @ByVal(nullValue = "std::optional(::std::nullopt)") BoolOptional shared, @ByVal(nullValue = "std::optional(0)") LongOptional size); @Namespace("at") public static native @ByRef Tensor from_file_out(@ByRef Tensor out, @StringView BytePointer filename); -@Namespace("at") public static native @ByRef Tensor from_file_out(@ByRef Tensor out, @StringView String filename, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional shared, @ByVal(nullValue = "c10::optional(0)") LongOptional size); +@Namespace("at") public static native @ByRef Tensor from_file_out(@ByRef Tensor out, @StringView String filename, @ByVal(nullValue = "std::optional(::std::nullopt)") BoolOptional shared, @ByVal(nullValue = "std::optional(0)") LongOptional size); @Namespace("at") public static native @ByRef Tensor from_file_out(@ByRef Tensor out, @StringView String filename); // aten::from_file.out(str filename, bool? shared=None, int? size=0, *, Tensor(a!) 
out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor from_file_outf(@StringView BytePointer filename, @ByVal BoolOptional shared, @ByVal LongOptional size, @ByRef Tensor out); @@ -31589,13 +31504,13 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::full_like(Tensor self, Scalar fill_value, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, MemoryFormat? memory_format=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor full_like(@Const @ByRef Tensor self, @Const @ByRef Scalar fill_value, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByVal Tensor full_like(@Const @ByRef Tensor self, @Const @ByRef Scalar fill_value, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByVal Tensor full_like(@Const @ByRef Tensor self, @Const @ByRef Scalar fill_value); // aten::full_like(Tensor self, Scalar fill_value, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, MemoryFormat? memory_format=None) -> Tensor @Namespace("at") public static native @ByVal Tensor full_like(@Const @ByRef Tensor self, @Const @ByRef Scalar fill_value, @ByVal ScalarTypeOptional dtype, @ByVal LayoutOptional layout, @ByVal DeviceOptional device, @ByVal BoolOptional pin_memory, @ByVal MemoryFormatOptional memory_format); // aten::full_like.out(Tensor self, Scalar fill_value, *, MemoryFormat? memory_format=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor full_like_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Scalar fill_value, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByRef Tensor full_like_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Scalar fill_value, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByRef Tensor full_like_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Scalar fill_value); // aten::full_like.out(Tensor self, Scalar fill_value, *, MemoryFormat? memory_format=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor full_like_outf(@Const @ByRef Tensor self, @Const @ByRef Scalar fill_value, @ByVal MemoryFormatOptional memory_format, @ByRef Tensor out); @@ -31902,13 +31817,13 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::geometric.out(Tensor self, float p, *, Generator? generator=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor geometric_out(@ByRef Tensor out, @Const @ByRef Tensor self, double p, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByRef Tensor geometric_out(@ByRef Tensor out, @Const @ByRef Tensor self, double p, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); @Namespace("at") public static native @ByRef Tensor geometric_out(@ByRef Tensor out, @Const @ByRef Tensor self, double p); // aten::geometric.out(Tensor self, float p, *, Generator? generator=None, Tensor(a!) out) -> Tensor(a!) 
@Namespace("at") public static native @ByRef Tensor geometric_outf(@Const @ByRef Tensor self, double p, @ByVal GeneratorOptional generator, @ByRef Tensor out); // aten::geometric(Tensor self, float p, *, Generator? generator=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor geometric(@Const @ByRef Tensor self, double p, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByVal Tensor geometric(@Const @ByRef Tensor self, double p, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); @Namespace("at") public static native @ByVal Tensor geometric(@Const @ByRef Tensor self, double p); @@ -32151,7 +32066,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::gradient.scalarint(Tensor self, *, Scalar? spacing=None, int? dim=None, int edge_order=1) -> Tensor[] -@Namespace("at") public static native @ByVal TensorVector gradient(@Const @ByRef Tensor self, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional spacing, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @Cast("int64_t") long edge_order/*=1*/); +@Namespace("at") public static native @ByVal TensorVector gradient(@Const @ByRef Tensor self, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional spacing, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @Cast("int64_t") long edge_order/*=1*/); @Namespace("at") public static native @ByVal TensorVector gradient(@Const @ByRef Tensor self); // aten::gradient.scalararray(Tensor self, *, Scalar spacing, int[] dim, int edge_order=1) -> Tensor[] @@ -32167,7 +32082,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @Namespace("at") public static native @ByVal TensorVector gradient(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... dim); // aten::gradient.scalarrayint(Tensor self, *, Scalar[] spacing, int? dim=None, int edge_order=1) -> Tensor[] -@Namespace("at") public static native @ByVal TensorVector gradient(@Const @ByRef Tensor self, @ByVal ScalarArrayRef spacing, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @Cast("int64_t") long edge_order/*=1*/); +@Namespace("at") public static native @ByVal TensorVector gradient(@Const @ByRef Tensor self, @ByVal ScalarArrayRef spacing, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @Cast("int64_t") long edge_order/*=1*/); @Namespace("at") public static native @ByVal TensorVector gradient(@Const @ByRef Tensor self, @ByVal ScalarArrayRef spacing); // aten::gradient.scalarrayarray(Tensor self, *, Scalar[] spacing, int[] dim, int edge_order=1) -> Tensor[] @@ -32177,9 +32092,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @Namespace("at") public static native @ByVal TensorVector gradient(@Const @ByRef Tensor self, @ByVal ScalarArrayRef spacing, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... dim); // aten::gradient.tensorarrayint(Tensor self, *, Tensor[] spacing, int? 
dim=None, int edge_order=1) -> Tensor[] -@Namespace("at") public static native @ByVal TensorVector gradient(@Const @ByRef Tensor self, @ByVal TensorArrayRef spacing, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @Cast("int64_t") long edge_order/*=1*/); +@Namespace("at") public static native @ByVal TensorVector gradient(@Const @ByRef Tensor self, @ByVal TensorArrayRef spacing, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @Cast("int64_t") long edge_order/*=1*/); @Namespace("at") public static native @ByVal TensorVector gradient(@Const @ByRef Tensor self, @ByVal TensorArrayRef spacing); -@Namespace("at") public static native @ByVal TensorVector gradient(@Const @ByRef Tensor self, @ByVal TensorVector spacing, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @Cast("int64_t") long edge_order/*=1*/); +@Namespace("at") public static native @ByVal TensorVector gradient(@Const @ByRef Tensor self, @ByVal TensorVector spacing, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @Cast("int64_t") long edge_order/*=1*/); @Namespace("at") public static native @ByVal TensorVector gradient(@Const @ByRef Tensor self, @ByVal TensorVector spacing); // aten::gradient.tensorarray(Tensor self, *, Tensor[] spacing, int[] dim, int edge_order=1) -> Tensor[] @@ -32472,7 +32387,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::group_norm(Tensor input, int num_groups, Tensor? weight=None, Tensor? bias=None, float eps=1e-05, bool cudnn_enabled=True) -> Tensor -@Namespace("at") public static native @ByVal Tensor group_norm(@Const @ByRef Tensor input, @Cast("int64_t") long num_groups, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, double eps/*=1e-05*/, @Cast("bool") boolean cudnn_enabled/*=true*/); +@Namespace("at") public static native @ByVal Tensor group_norm(@Const @ByRef Tensor input, @Cast("int64_t") long num_groups, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, double eps/*=1e-05*/, @Cast("bool") boolean cudnn_enabled/*=true*/); @Namespace("at") public static native @ByVal Tensor group_norm(@Const @ByRef Tensor input, @Cast("int64_t") long num_groups); @@ -32538,7 +32453,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::gru_cell(Tensor input, Tensor hx, Tensor w_ih, Tensor w_hh, Tensor? b_ih=None, Tensor? b_hh=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor gru_cell(@Const @ByRef Tensor input, @Const @ByRef Tensor hx, @Const @ByRef Tensor w_ih, @Const @ByRef Tensor w_hh, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional b_ih, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional b_hh); +@Namespace("at") public static native @ByVal Tensor gru_cell(@Const @ByRef Tensor input, @Const @ByRef Tensor hx, @Const @ByRef Tensor w_ih, @Const @ByRef Tensor w_hh, @Const @ByRef(nullValue = "std::optional{}") TensorOptional b_ih, @Const @ByRef(nullValue = "std::optional{}") TensorOptional b_hh); @Namespace("at") public static native @ByVal Tensor gru_cell(@Const @ByRef Tensor input, @Const @ByRef Tensor hx, @Const @ByRef Tensor w_ih, @Const @ByRef Tensor w_hh); @@ -33129,27 +33044,27 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::histogram.bins_tensor_out(Tensor self, Tensor bins, *, Tensor? weight=None, bool density=False, Tensor(a!) hist, Tensor(b!) 
bin_edges) -> (Tensor(a!) hist, Tensor(b!) bin_edges) -@Namespace("at") public static native @ByVal T_TensorTensor_T histogram_out(@ByRef Tensor hist, @ByRef Tensor bin_edges, @Const @ByRef Tensor self, @Const @ByRef Tensor bins, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/); +@Namespace("at") public static native @ByVal T_TensorTensor_T histogram_out(@ByRef Tensor hist, @ByRef Tensor bin_edges, @Const @ByRef Tensor self, @Const @ByRef Tensor bins, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/); @Namespace("at") public static native @ByVal T_TensorTensor_T histogram_out(@ByRef Tensor hist, @ByRef Tensor bin_edges, @Const @ByRef Tensor self, @Const @ByRef Tensor bins); // aten::histogram.bins_tensor_out(Tensor self, Tensor bins, *, Tensor? weight=None, bool density=False, Tensor(a!) hist, Tensor(b!) bin_edges) -> (Tensor(a!) hist, Tensor(b!) bin_edges) @Namespace("at") public static native @ByVal T_TensorTensor_T histogram_outf(@Const @ByRef Tensor self, @Const @ByRef Tensor bins, @Const @ByRef TensorOptional weight, @Cast("bool") boolean density, @ByRef Tensor hist, @ByRef Tensor bin_edges); // aten::histogram.bins_tensor(Tensor self, Tensor bins, *, Tensor? weight=None, bool density=False) -> (Tensor hist, Tensor bin_edges) -@Namespace("at") public static native @ByVal T_TensorTensor_T histogram(@Const @ByRef Tensor self, @Const @ByRef Tensor bins, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/); +@Namespace("at") public static native @ByVal T_TensorTensor_T histogram(@Const @ByRef Tensor self, @Const @ByRef Tensor bins, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/); @Namespace("at") public static native @ByVal T_TensorTensor_T histogram(@Const @ByRef Tensor self, @Const @ByRef Tensor bins); // aten::histogram.bin_ct_out(Tensor self, int bins=100, *, float[]? range=None, Tensor? weight=None, bool density=False, Tensor(a!) hist, Tensor(b!) bin_edges) -> (Tensor(a!) hist, Tensor(b!) 
bin_edges) -@Namespace("at") public static native @ByVal T_TensorTensor_T histogram_out(@ByRef Tensor hist, @ByRef Tensor bin_edges, @Const @ByRef Tensor self, @Cast("int64_t") long bins/*=100*/, @ByVal(nullValue = "c10::optional >(c10::nullopt)") DoubleArrayRefOptional range, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/); +@Namespace("at") public static native @ByVal T_TensorTensor_T histogram_out(@ByRef Tensor hist, @ByRef Tensor bin_edges, @Const @ByRef Tensor self, @Cast("int64_t") long bins/*=100*/, @ByVal(nullValue = "std::optional >(::std::nullopt)") DoubleArrayRefOptional range, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/); @Namespace("at") public static native @ByVal T_TensorTensor_T histogram_out(@ByRef Tensor hist, @ByRef Tensor bin_edges, @Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal T_TensorTensor_T histogram_out(@ByRef Tensor hist, @ByRef Tensor bin_edges, @Const @ByRef Tensor self, @Cast("int64_t") long bins/*=100*/, @ByVal(nullValue = "c10::optional >(c10::nullopt)") @Cast({"double*", "c10::ArrayRef", "std::vector&"}) @StdVector double[] range, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/); +@Namespace("at") public static native @ByVal T_TensorTensor_T histogram_out(@ByRef Tensor hist, @ByRef Tensor bin_edges, @Const @ByRef Tensor self, @Cast("int64_t") long bins/*=100*/, @ByVal(nullValue = "std::optional >(::std::nullopt)") @Cast({"double*", "c10::ArrayRef", "std::vector&"}) @StdVector double[] range, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/); // aten::histogram.bin_ct_out(Tensor self, int bins=100, *, float[]? range=None, Tensor? weight=None, bool density=False, Tensor(a!) hist, Tensor(b!) bin_edges) -> (Tensor(a!) hist, Tensor(b!) bin_edges) @Namespace("at") public static native @ByVal T_TensorTensor_T histogram_outf(@Const @ByRef Tensor self, @Cast("int64_t") long bins, @ByVal DoubleArrayRefOptional range, @Const @ByRef TensorOptional weight, @Cast("bool") boolean density, @ByRef Tensor hist, @ByRef Tensor bin_edges); @Namespace("at") public static native @ByVal T_TensorTensor_T histogram_outf(@Const @ByRef Tensor self, @Cast("int64_t") long bins, @ByVal @Cast({"double*", "c10::ArrayRef", "std::vector&"}) @StdVector double[] range, @Const @ByRef TensorOptional weight, @Cast("bool") boolean density, @ByRef Tensor hist, @ByRef Tensor bin_edges); // aten::histogram.bin_ct(Tensor self, int bins=100, *, float[]? range=None, Tensor? 
weight=None, bool density=False) -> (Tensor hist, Tensor bin_edges) -@Namespace("at") public static native @ByVal T_TensorTensor_T histogram(@Const @ByRef Tensor self, @Cast("int64_t") long bins/*=100*/, @ByVal(nullValue = "c10::optional >(c10::nullopt)") DoubleArrayRefOptional range, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/); +@Namespace("at") public static native @ByVal T_TensorTensor_T histogram(@Const @ByRef Tensor self, @Cast("int64_t") long bins/*=100*/, @ByVal(nullValue = "std::optional >(::std::nullopt)") DoubleArrayRefOptional range, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/); @Namespace("at") public static native @ByVal T_TensorTensor_T histogram(@Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal T_TensorTensor_T histogram(@Const @ByRef Tensor self, @Cast("int64_t") long bins/*=100*/, @ByVal(nullValue = "c10::optional >(c10::nullopt)") @Cast({"double*", "c10::ArrayRef", "std::vector&"}) @StdVector double[] range, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/); +@Namespace("at") public static native @ByVal T_TensorTensor_T histogram(@Const @ByRef Tensor self, @Cast("int64_t") long bins/*=100*/, @ByVal(nullValue = "std::optional >(::std::nullopt)") @Cast({"double*", "c10::ArrayRef", "std::vector&"}) @StdVector double[] range, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/); @@ -33179,25 +33094,25 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::histogramdd(Tensor self, int[] bins, float[]? range=None, Tensor? weight=None, bool density=False) -> (Tensor hist, Tensor[] bin_edges) -@Namespace("at") public static native @ByVal T_TensorTensorVector_T histogramdd(@Const @ByRef Tensor self, @ByVal LongArrayRef bins, @ByVal(nullValue = "c10::optional >(c10::nullopt)") DoubleArrayRefOptional range, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/); +@Namespace("at") public static native @ByVal T_TensorTensorVector_T histogramdd(@Const @ByRef Tensor self, @ByVal LongArrayRef bins, @ByVal(nullValue = "std::optional >(::std::nullopt)") DoubleArrayRefOptional range, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/); @Namespace("at") public static native @ByVal T_TensorTensorVector_T histogramdd(@Const @ByRef Tensor self, @ByVal LongArrayRef bins); -@Namespace("at") public static native @ByVal T_TensorTensorVector_T histogramdd(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] bins, @ByVal(nullValue = "c10::optional >(c10::nullopt)") @Cast({"double*", "c10::ArrayRef", "std::vector&"}) @StdVector double[] range, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/); +@Namespace("at") public static native @ByVal T_TensorTensorVector_T histogramdd(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] bins, @ByVal(nullValue = "std::optional >(::std::nullopt)") @Cast({"double*", "c10::ArrayRef", "std::vector&"}) @StdVector double[] range, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/); @Namespace("at") public static native @ByVal 
T_TensorTensorVector_T histogramdd(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... bins); -@Namespace("at") public static native @ByVal T_TensorTensorVector_T histogramdd(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] bins, @ByVal(nullValue = "c10::optional >(c10::nullopt)") DoubleArrayRefOptional range, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/); -@Namespace("at") public static native @ByVal T_TensorTensorVector_T histogramdd(@Const @ByRef Tensor self, @ByVal LongArrayRef bins, @ByVal(nullValue = "c10::optional >(c10::nullopt)") @Cast({"double*", "c10::ArrayRef", "std::vector&"}) @StdVector double[] range, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/); +@Namespace("at") public static native @ByVal T_TensorTensorVector_T histogramdd(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] bins, @ByVal(nullValue = "std::optional >(::std::nullopt)") DoubleArrayRefOptional range, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/); +@Namespace("at") public static native @ByVal T_TensorTensorVector_T histogramdd(@Const @ByRef Tensor self, @ByVal LongArrayRef bins, @ByVal(nullValue = "std::optional >(::std::nullopt)") @Cast({"double*", "c10::ArrayRef", "std::vector&"}) @StdVector double[] range, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/); // aten::histogramdd.int_bins(Tensor self, int bins, float[]? range=None, Tensor? weight=None, bool density=False) -> (Tensor hist, Tensor[] bin_edges) -@Namespace("at") public static native @ByVal T_TensorTensorVector_T histogramdd(@Const @ByRef Tensor self, @Cast("int64_t") long bins, @ByVal(nullValue = "c10::optional >(c10::nullopt)") DoubleArrayRefOptional range, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/); +@Namespace("at") public static native @ByVal T_TensorTensorVector_T histogramdd(@Const @ByRef Tensor self, @Cast("int64_t") long bins, @ByVal(nullValue = "std::optional >(::std::nullopt)") DoubleArrayRefOptional range, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/); @Namespace("at") public static native @ByVal T_TensorTensorVector_T histogramdd(@Const @ByRef Tensor self, @Cast("int64_t") long bins); -@Namespace("at") public static native @ByVal T_TensorTensorVector_T histogramdd(@Const @ByRef Tensor self, @Cast("int64_t") long bins, @ByVal(nullValue = "c10::optional >(c10::nullopt)") @Cast({"double*", "c10::ArrayRef", "std::vector&"}) @StdVector double[] range, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/); +@Namespace("at") public static native @ByVal T_TensorTensorVector_T histogramdd(@Const @ByRef Tensor self, @Cast("int64_t") long bins, @ByVal(nullValue = "std::optional >(::std::nullopt)") @Cast({"double*", "c10::ArrayRef", "std::vector&"}) @StdVector double[] range, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/); // aten::histogramdd.TensorList_bins(Tensor self, Tensor[] bins, float[]? range=None, Tensor? 
weight=None, bool density=False) -> (Tensor hist, Tensor[] bin_edges) -@Namespace("at") public static native @ByVal T_TensorTensorVector_T histogramdd(@Const @ByRef Tensor self, @ByVal TensorArrayRef bins, @ByVal(nullValue = "c10::optional >(c10::nullopt)") DoubleArrayRefOptional range, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/); +@Namespace("at") public static native @ByVal T_TensorTensorVector_T histogramdd(@Const @ByRef Tensor self, @ByVal TensorArrayRef bins, @ByVal(nullValue = "std::optional >(::std::nullopt)") DoubleArrayRefOptional range, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/); @Namespace("at") public static native @ByVal T_TensorTensorVector_T histogramdd(@Const @ByRef Tensor self, @ByVal TensorArrayRef bins); -@Namespace("at") public static native @ByVal T_TensorTensorVector_T histogramdd(@Const @ByRef Tensor self, @ByVal TensorVector bins, @ByVal(nullValue = "c10::optional >(c10::nullopt)") @Cast({"double*", "c10::ArrayRef", "std::vector&"}) @StdVector double[] range, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/); +@Namespace("at") public static native @ByVal T_TensorTensorVector_T histogramdd(@Const @ByRef Tensor self, @ByVal TensorVector bins, @ByVal(nullValue = "std::optional >(::std::nullopt)") @Cast({"double*", "c10::ArrayRef", "std::vector&"}) @StdVector double[] range, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/); @Namespace("at") public static native @ByVal T_TensorTensorVector_T histogramdd(@Const @ByRef Tensor self, @ByVal TensorVector bins); -@Namespace("at") public static native @ByVal T_TensorTensorVector_T histogramdd(@Const @ByRef Tensor self, @ByVal TensorVector bins, @ByVal(nullValue = "c10::optional >(c10::nullopt)") DoubleArrayRefOptional range, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/); -@Namespace("at") public static native @ByVal T_TensorTensorVector_T histogramdd(@Const @ByRef Tensor self, @ByVal TensorArrayRef bins, @ByVal(nullValue = "c10::optional >(c10::nullopt)") @Cast({"double*", "c10::ArrayRef", "std::vector&"}) @StdVector double[] range, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/); +@Namespace("at") public static native @ByVal T_TensorTensorVector_T histogramdd(@Const @ByRef Tensor self, @ByVal TensorVector bins, @ByVal(nullValue = "std::optional >(::std::nullopt)") DoubleArrayRefOptional range, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/); +@Namespace("at") public static native @ByVal T_TensorTensorVector_T histogramdd(@Const @ByRef Tensor self, @ByVal TensorArrayRef bins, @ByVal(nullValue = "std::optional >(::std::nullopt)") @Cast({"double*", "c10::ArrayRef", "std::vector&"}) @StdVector double[] range, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("bool") boolean density/*=false*/); @@ -34870,7 +34785,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::istft(Tensor self, int n_fft, int? hop_length=None, int? win_length=None, Tensor? window=None, bool center=True, bool normalized=False, bool? onesided=None, int? 
length=None, bool return_complex=False) -> Tensor -@Namespace("at") public static native @ByVal Tensor istft(@Const @ByRef Tensor self, @Cast("int64_t") long n_fft, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional hop_length, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional win_length, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional window, @Cast("bool") boolean center/*=true*/, @Cast("bool") boolean normalized/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional onesided, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional length, @Cast("bool") boolean return_complex/*=false*/); +@Namespace("at") public static native @ByVal Tensor istft(@Const @ByRef Tensor self, @Cast("int64_t") long n_fft, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional hop_length, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional win_length, @Const @ByRef(nullValue = "std::optional{}") TensorOptional window, @Cast("bool") boolean center/*=true*/, @Cast("bool") boolean normalized/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") BoolOptional onesided, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional length, @Cast("bool") boolean return_complex/*=false*/); @Namespace("at") public static native @ByVal Tensor istft(@Const @ByRef Tensor self, @Cast("int64_t") long n_fft); @@ -35133,14 +35048,14 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::layer_norm(Tensor input, SymInt[] normalized_shape, Tensor? weight=None, Tensor? bias=None, float eps=1e-05, bool cudnn_enable=True) -> Tensor -@Namespace("at") public static native @ByVal Tensor layer_norm(@Const @ByRef Tensor input, @ByVal LongArrayRef normalized_shape, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, double eps/*=1e-05*/, @Cast("bool") boolean cudnn_enable/*=true*/); +@Namespace("at") public static native @ByVal Tensor layer_norm(@Const @ByRef Tensor input, @ByVal LongArrayRef normalized_shape, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, double eps/*=1e-05*/, @Cast("bool") boolean cudnn_enable/*=true*/); @Namespace("at") public static native @ByVal Tensor layer_norm(@Const @ByRef Tensor input, @ByVal LongArrayRef normalized_shape); -@Namespace("at") public static native @ByVal Tensor layer_norm(@Const @ByRef Tensor input, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] normalized_shape, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, double eps/*=1e-05*/, @Cast("bool") boolean cudnn_enable/*=true*/); +@Namespace("at") public static native @ByVal Tensor layer_norm(@Const @ByRef Tensor input, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] normalized_shape, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, double eps/*=1e-05*/, @Cast("bool") boolean cudnn_enable/*=true*/); @Namespace("at") public static native @ByVal Tensor layer_norm(@Const @ByRef Tensor input, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... normalized_shape); // aten::layer_norm(Tensor input, SymInt[] normalized_shape, Tensor? weight=None, Tensor? 
bias=None, float eps=1e-05, bool cudnn_enable=True) -> Tensor -@Namespace("at") public static native @ByVal Tensor layer_norm_symint(@Const @ByRef Tensor input, @ByVal SymIntArrayRef normalized_shape, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, double eps/*=1e-05*/, @Cast("bool") boolean cudnn_enable/*=true*/); +@Namespace("at") public static native @ByVal Tensor layer_norm_symint(@Const @ByRef Tensor input, @ByVal SymIntArrayRef normalized_shape, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, double eps/*=1e-05*/, @Cast("bool") boolean cudnn_enable/*=true*/); @Namespace("at") public static native @ByVal Tensor layer_norm_symint(@Const @ByRef Tensor input, @ByVal SymIntArrayRef normalized_shape); @@ -35705,11 +35620,11 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::linalg_cond(Tensor self, Scalar? p=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor linalg_cond(@Const @ByRef Tensor self, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional p); +@Namespace("at") public static native @ByVal Tensor linalg_cond(@Const @ByRef Tensor self, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional p); @Namespace("at") public static native @ByVal Tensor linalg_cond(@Const @ByRef Tensor self); // aten::linalg_cond.out(Tensor self, Scalar? p=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor linalg_cond_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional p); +@Namespace("at") public static native @ByRef Tensor linalg_cond_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional p); @Namespace("at") public static native @ByRef Tensor linalg_cond_out(@ByRef Tensor out, @Const @ByRef Tensor self); // aten::linalg_cond.out(Tensor self, Scalar? p=None, *, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor linalg_cond_outf(@Const @ByRef Tensor self, @Const @ByRef ScalarOptional p, @ByRef Tensor out); @@ -36224,11 +36139,11 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::linalg_lstsq(Tensor self, Tensor b, float? rcond=None, *, str? driver=None) -> (Tensor solution, Tensor residuals, Tensor rank, Tensor singular_values) -@Namespace("at") public static native @ByVal T_TensorTensorTensorTensor_T linalg_lstsq(@Const @ByRef Tensor self, @Const @ByRef Tensor b, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional rcond, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional driver); +@Namespace("at") public static native @ByVal T_TensorTensorTensorTensor_T linalg_lstsq(@Const @ByRef Tensor self, @Const @ByRef Tensor b, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional rcond, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional driver); @Namespace("at") public static native @ByVal T_TensorTensorTensorTensor_T linalg_lstsq(@Const @ByRef Tensor self, @Const @ByRef Tensor b); // aten::linalg_lstsq.out(Tensor self, Tensor b, float? rcond=None, *, str? driver=None, Tensor(a!) solution, Tensor(b!) residuals, Tensor(c!) rank, Tensor(d!) singular_values) -> (Tensor(a!) solution, Tensor(b!) residuals, Tensor(c!) rank, Tensor(d!) 
singular_values) -@Namespace("at") public static native @ByVal T_TensorTensorTensorTensor_T linalg_lstsq_out(@ByRef Tensor solution, @ByRef Tensor residuals, @ByRef Tensor rank, @ByRef Tensor singular_values, @Const @ByRef Tensor self, @Const @ByRef Tensor b, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional rcond, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional driver); +@Namespace("at") public static native @ByVal T_TensorTensorTensorTensor_T linalg_lstsq_out(@ByRef Tensor solution, @ByRef Tensor residuals, @ByRef Tensor rank, @ByRef Tensor singular_values, @Const @ByRef Tensor self, @Const @ByRef Tensor b, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional rcond, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional driver); @Namespace("at") public static native @ByVal T_TensorTensorTensorTensor_T linalg_lstsq_out(@ByRef Tensor solution, @ByRef Tensor residuals, @ByRef Tensor rank, @ByRef Tensor singular_values, @Const @ByRef Tensor self, @Const @ByRef Tensor b); // aten::linalg_lstsq.out(Tensor self, Tensor b, float? rcond=None, *, str? driver=None, Tensor(a!) solution, Tensor(b!) residuals, Tensor(c!) rank, Tensor(d!) singular_values) -> (Tensor(a!) solution, Tensor(b!) residuals, Tensor(c!) rank, Tensor(d!) singular_values) @Namespace("at") public static native @ByVal T_TensorTensorTensorTensor_T linalg_lstsq_outf(@Const @ByRef Tensor self, @Const @ByRef Tensor b, @ByVal DoubleOptional rcond, @ByVal StringViewOptional driver, @ByRef Tensor solution, @ByRef Tensor residuals, @ByRef Tensor rank, @ByRef Tensor singular_values); @@ -36479,31 +36394,31 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::linalg_matrix_norm(Tensor self, Scalar ord, int[] dim=[-2,-1], bool keepdim=False, *, ScalarType? dtype=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor linalg_matrix_norm(@Const @ByRef Tensor self, @Const @ByRef Scalar ord, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor linalg_matrix_norm(@Const @ByRef Tensor self, @Const @ByRef Scalar ord, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByVal Tensor linalg_matrix_norm(@Const @ByRef Tensor self, @Const @ByRef Scalar ord); -@Namespace("at") public static native @ByVal Tensor linalg_matrix_norm(@Const @ByRef Tensor self, @Const @ByRef Scalar ord, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor linalg_matrix_norm(@Const @ByRef Tensor self, @Const @ByRef Scalar ord, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); // aten::linalg_matrix_norm.out(Tensor self, Scalar ord, int[] dim=[-2,-1], bool keepdim=False, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor linalg_matrix_norm_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Scalar ord, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByRef Tensor linalg_matrix_norm_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Scalar ord, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByRef Tensor linalg_matrix_norm_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Scalar ord); -@Namespace("at") public static native @ByRef Tensor linalg_matrix_norm_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Scalar ord, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByRef Tensor linalg_matrix_norm_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Scalar ord, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); // aten::linalg_matrix_norm.out(Tensor self, Scalar ord, int[] dim=[-2,-1], bool keepdim=False, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor linalg_matrix_norm_outf(@Const @ByRef Tensor self, @Const @ByRef Scalar ord, @ByVal LongArrayRef dim, @Cast("bool") boolean keepdim, @ByVal ScalarTypeOptional dtype, @ByRef Tensor out); @Namespace("at") public static native @ByRef Tensor linalg_matrix_norm_outf(@Const @ByRef Tensor self, @Const @ByRef Scalar ord, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @Cast("bool") boolean keepdim, @ByVal ScalarTypeOptional dtype, @ByRef Tensor out); // aten::linalg_matrix_norm.str_ord(Tensor self, str ord='fro', int[] dim=[-2,-1], bool keepdim=False, *, ScalarType? 
dtype=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor linalg_matrix_norm(@Const @ByRef Tensor self, @StringView BytePointer ord/*="fro"*/, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor linalg_matrix_norm(@Const @ByRef Tensor self, @StringView BytePointer ord/*="fro"*/, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByVal Tensor linalg_matrix_norm(@Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal Tensor linalg_matrix_norm(@Const @ByRef Tensor self, @StringView String ord/*="fro"*/, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); -@Namespace("at") public static native @ByVal Tensor linalg_matrix_norm(@Const @ByRef Tensor self, @StringView BytePointer ord/*="fro"*/, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); -@Namespace("at") public static native @ByVal Tensor linalg_matrix_norm(@Const @ByRef Tensor self, @StringView String ord/*="fro"*/, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor linalg_matrix_norm(@Const @ByRef Tensor self, @StringView String ord/*="fro"*/, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor linalg_matrix_norm(@Const @ByRef Tensor self, @StringView BytePointer ord/*="fro"*/, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor linalg_matrix_norm(@Const @ByRef Tensor self, @StringView String ord/*="fro"*/, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); // aten::linalg_matrix_norm.str_ord_out(Tensor self, str ord='fro', int[] dim=[-2,-1], bool keepdim=False, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor linalg_matrix_norm_out(@ByRef Tensor out, @Const @ByRef Tensor self, @StringView BytePointer ord/*="fro"*/, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByRef Tensor linalg_matrix_norm_out(@ByRef Tensor out, @Const @ByRef Tensor self, @StringView BytePointer ord/*="fro"*/, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByRef Tensor linalg_matrix_norm_out(@ByRef Tensor out, @Const @ByRef Tensor self); -@Namespace("at") public static native @ByRef Tensor linalg_matrix_norm_out(@ByRef Tensor out, @Const @ByRef Tensor self, @StringView String ord/*="fro"*/, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); -@Namespace("at") public static native @ByRef Tensor linalg_matrix_norm_out(@ByRef Tensor out, @Const @ByRef Tensor self, @StringView BytePointer ord/*="fro"*/, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); -@Namespace("at") public static native @ByRef Tensor linalg_matrix_norm_out(@ByRef Tensor out, @Const @ByRef Tensor self, @StringView String ord/*="fro"*/, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByRef Tensor linalg_matrix_norm_out(@ByRef Tensor out, @Const @ByRef Tensor self, @StringView String ord/*="fro"*/, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByRef Tensor linalg_matrix_norm_out(@ByRef Tensor out, @Const @ByRef Tensor self, @StringView BytePointer ord/*="fro"*/, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByRef Tensor linalg_matrix_norm_out(@ByRef Tensor out, @Const @ByRef Tensor self, @StringView String ord/*="fro"*/, @ByVal(nullValue = "at::IntArrayRef({-2,-1})") LongArrayRef dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); // aten::linalg_matrix_norm.str_ord_out(Tensor self, str ord='fro', int[] dim=[-2,-1], bool keepdim=False, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!) 
@Namespace("at") public static native @ByRef Tensor linalg_matrix_norm_outf(@Const @ByRef Tensor self, @StringView BytePointer ord, @ByVal LongArrayRef dim, @Cast("bool") boolean keepdim, @ByVal ScalarTypeOptional dtype, @ByRef Tensor out); @Namespace("at") public static native @ByRef Tensor linalg_matrix_norm_outf(@Const @ByRef Tensor self, @StringView String ord, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim, @Cast("bool") boolean keepdim, @ByVal ScalarTypeOptional dtype, @ByRef Tensor out); @@ -36573,11 +36488,11 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::linalg_matrix_rank.atol_rtol_tensor(Tensor input, *, Tensor? atol=None, Tensor? rtol=None, bool hermitian=False) -> Tensor -@Namespace("at") public static native @ByVal Tensor linalg_matrix_rank(@Const @ByRef Tensor input, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional atol, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional rtol, @Cast("bool") boolean hermitian/*=false*/); +@Namespace("at") public static native @ByVal Tensor linalg_matrix_rank(@Const @ByRef Tensor input, @Const @ByRef(nullValue = "std::optional{}") TensorOptional atol, @Const @ByRef(nullValue = "std::optional{}") TensorOptional rtol, @Cast("bool") boolean hermitian/*=false*/); @Namespace("at") public static native @ByVal Tensor linalg_matrix_rank(@Const @ByRef Tensor input); // aten::linalg_matrix_rank.atol_rtol_tensor_out(Tensor input, *, Tensor? atol=None, Tensor? rtol=None, bool hermitian=False, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor linalg_matrix_rank_out(@ByRef Tensor out, @Const @ByRef Tensor input, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional atol, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional rtol, @Cast("bool") boolean hermitian/*=false*/); +@Namespace("at") public static native @ByRef Tensor linalg_matrix_rank_out(@ByRef Tensor out, @Const @ByRef Tensor input, @Const @ByRef(nullValue = "std::optional{}") TensorOptional atol, @Const @ByRef(nullValue = "std::optional{}") TensorOptional rtol, @Cast("bool") boolean hermitian/*=false*/); @Namespace("at") public static native @ByRef Tensor linalg_matrix_rank_out(@ByRef Tensor out, @Const @ByRef Tensor input); // aten::linalg_matrix_rank.atol_rtol_tensor_out(Tensor input, *, Tensor? atol=None, Tensor? rtol=None, bool hermitian=False, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor linalg_matrix_rank_outf(@Const @ByRef Tensor input, @Const @ByRef TensorOptional atol, @Const @ByRef TensorOptional rtol, @Cast("bool") boolean hermitian, @ByRef Tensor out); @@ -36678,28 +36593,28 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::linalg_norm(Tensor self, Scalar? ord=None, int[1]? dim=None, bool keepdim=False, *, ScalarType? 
dtype=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor linalg_norm(@Const @ByRef Tensor self, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional ord, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor linalg_norm(@Const @ByRef Tensor self, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional ord, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByVal Tensor linalg_norm(@Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal Tensor linalg_norm(@Const @ByRef Tensor self, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional ord, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor linalg_norm(@Const @ByRef Tensor self, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional ord, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); // aten::linalg_norm.ord_str(Tensor self, str ord, int[1]? dim=None, bool keepdim=False, *, ScalarType? dtype=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor linalg_norm(@Const @ByRef Tensor self, @StringView BytePointer ord, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor linalg_norm(@Const @ByRef Tensor self, @StringView BytePointer ord, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByVal Tensor linalg_norm(@Const @ByRef Tensor self, @StringView BytePointer ord); -@Namespace("at") public static native @ByVal Tensor linalg_norm(@Const @ByRef Tensor self, @StringView String ord, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor linalg_norm(@Const @ByRef Tensor self, @StringView String ord, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByVal Tensor linalg_norm(@Const @ByRef Tensor self, @StringView String ord); // aten::linalg_norm.out(Tensor self, Scalar? ord=None, int[1]? 
dim=None, bool keepdim=False, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor linalg_norm_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional ord, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByRef Tensor linalg_norm_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional ord, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByRef Tensor linalg_norm_out(@ByRef Tensor out, @Const @ByRef Tensor self); -@Namespace("at") public static native @ByRef Tensor linalg_norm_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional ord, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByRef Tensor linalg_norm_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional ord, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); // aten::linalg_norm.out(Tensor self, Scalar? ord=None, int[1]? dim=None, bool keepdim=False, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor linalg_norm_outf(@Const @ByRef Tensor self, @Const @ByRef ScalarOptional ord, @ByVal LongArrayRefOptional dim, @Cast("bool") boolean keepdim, @ByVal ScalarTypeOptional dtype, @ByRef Tensor out); @Namespace("at") public static native @ByRef Tensor linalg_norm_outf(@Const @ByRef Tensor self, @Const @ByRef ScalarOptional ord, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim, @ByVal ScalarTypeOptional dtype, @ByRef Tensor out); // aten::linalg_norm.ord_str_out(Tensor self, str ord, int[1]? dim=None, bool keepdim=False, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor linalg_norm_out(@ByRef Tensor out, @Const @ByRef Tensor self, @StringView BytePointer ord, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByRef Tensor linalg_norm_out(@ByRef Tensor out, @Const @ByRef Tensor self, @StringView BytePointer ord, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByRef Tensor linalg_norm_out(@ByRef Tensor out, @Const @ByRef Tensor self, @StringView BytePointer ord); -@Namespace("at") public static native @ByRef Tensor linalg_norm_out(@ByRef Tensor out, @Const @ByRef Tensor self, @StringView String ord, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByRef Tensor linalg_norm_out(@ByRef Tensor out, @Const @ByRef Tensor self, @StringView String ord, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByRef Tensor linalg_norm_out(@ByRef Tensor out, @Const @ByRef Tensor self, @StringView String ord); // aten::linalg_norm.ord_str_out(Tensor self, str ord, int[1]? dim=None, bool keepdim=False, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor linalg_norm_outf(@Const @ByRef Tensor self, @StringView BytePointer ord, @ByVal LongArrayRefOptional dim, @Cast("bool") boolean keepdim, @ByVal ScalarTypeOptional dtype, @ByRef Tensor out); @@ -36733,11 +36648,11 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::linalg_pinv.atol_rtol_tensor(Tensor self, *, Tensor? atol=None, Tensor? rtol=None, bool hermitian=False) -> Tensor -@Namespace("at") public static native @ByVal Tensor linalg_pinv(@Const @ByRef Tensor self, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional atol, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional rtol, @Cast("bool") boolean hermitian/*=false*/); +@Namespace("at") public static native @ByVal Tensor linalg_pinv(@Const @ByRef Tensor self, @Const @ByRef(nullValue = "std::optional{}") TensorOptional atol, @Const @ByRef(nullValue = "std::optional{}") TensorOptional rtol, @Cast("bool") boolean hermitian/*=false*/); @Namespace("at") public static native @ByVal Tensor linalg_pinv(@Const @ByRef Tensor self); // aten::linalg_pinv.atol_rtol_tensor_out(Tensor self, *, Tensor? atol=None, Tensor? rtol=None, bool hermitian=False, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor linalg_pinv_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional atol, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional rtol, @Cast("bool") boolean hermitian/*=false*/); +@Namespace("at") public static native @ByRef Tensor linalg_pinv_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef(nullValue = "std::optional{}") TensorOptional atol, @Const @ByRef(nullValue = "std::optional{}") TensorOptional rtol, @Cast("bool") boolean hermitian/*=false*/); @Namespace("at") public static native @ByRef Tensor linalg_pinv_out(@ByRef Tensor out, @Const @ByRef Tensor self); // aten::linalg_pinv.atol_rtol_tensor_out(Tensor self, *, Tensor? atol=None, Tensor? rtol=None, bool hermitian=False, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor linalg_pinv_outf(@Const @ByRef Tensor self, @Const @ByRef TensorOptional atol, @Const @ByRef TensorOptional rtol, @Cast("bool") boolean hermitian, @ByRef Tensor out); @@ -36986,11 +36901,11 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::linalg_svd(Tensor A, bool full_matrices=True, *, str? driver=None) -> (Tensor U, Tensor S, Tensor Vh) -@Namespace("at") public static native @ByVal T_TensorTensorTensor_T linalg_svd(@Const @ByRef Tensor A, @Cast("bool") boolean full_matrices/*=true*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional driver); +@Namespace("at") public static native @ByVal T_TensorTensorTensor_T linalg_svd(@Const @ByRef Tensor A, @Cast("bool") boolean full_matrices/*=true*/, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional driver); @Namespace("at") public static native @ByVal T_TensorTensorTensor_T linalg_svd(@Const @ByRef Tensor A); // aten::linalg_svd.U(Tensor A, bool full_matrices=True, *, str? driver=None, Tensor(a!) U, Tensor(b!) S, Tensor(c!) Vh) -> (Tensor(a!) U, Tensor(b!) S, Tensor(c!) Vh) -@Namespace("at") public static native @ByVal T_TensorTensorTensor_T linalg_svd_out(@ByRef Tensor U, @ByRef Tensor S, @ByRef Tensor Vh, @Const @ByRef Tensor A, @Cast("bool") boolean full_matrices/*=true*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional driver); +@Namespace("at") public static native @ByVal T_TensorTensorTensor_T linalg_svd_out(@ByRef Tensor U, @ByRef Tensor S, @ByRef Tensor Vh, @Const @ByRef Tensor A, @Cast("bool") boolean full_matrices/*=true*/, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional driver); @Namespace("at") public static native @ByVal T_TensorTensorTensor_T linalg_svd_out(@ByRef Tensor U, @ByRef Tensor S, @ByRef Tensor Vh, @Const @ByRef Tensor A); // aten::linalg_svd.U(Tensor A, bool full_matrices=True, *, str? driver=None, Tensor(a!) U, Tensor(b!) S, Tensor(c!) Vh) -> (Tensor(a!) U, Tensor(b!) S, Tensor(c!) Vh) @Namespace("at") public static native @ByVal T_TensorTensorTensor_T linalg_svd_outf(@Const @ByRef Tensor A, @Cast("bool") boolean full_matrices, @ByVal StringViewOptional driver, @ByRef Tensor U, @ByRef Tensor S, @ByRef Tensor Vh); @@ -37023,11 +36938,11 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::linalg_svdvals(Tensor A, *, str? 
driver=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor linalg_svdvals(@Const @ByRef Tensor A, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional driver); +@Namespace("at") public static native @ByVal Tensor linalg_svdvals(@Const @ByRef Tensor A, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional driver); @Namespace("at") public static native @ByVal Tensor linalg_svdvals(@Const @ByRef Tensor A); // aten::linalg_svdvals.out(Tensor A, *, str? driver=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor linalg_svdvals_out(@ByRef Tensor out, @Const @ByRef Tensor A, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional driver); +@Namespace("at") public static native @ByRef Tensor linalg_svdvals_out(@ByRef Tensor out, @Const @ByRef Tensor A, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional driver); @Namespace("at") public static native @ByRef Tensor linalg_svdvals_out(@ByRef Tensor out, @Const @ByRef Tensor A); // aten::linalg_svdvals.out(Tensor A, *, str? driver=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor linalg_svdvals_outf(@Const @ByRef Tensor A, @ByVal StringViewOptional driver, @ByRef Tensor out); @@ -37097,14 +37012,14 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::linalg_tensorsolve(Tensor self, Tensor other, int[]? dims=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor linalg_tensorsolve(@Const @ByRef Tensor self, @Const @ByRef Tensor other, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dims); +@Namespace("at") public static native @ByVal Tensor linalg_tensorsolve(@Const @ByRef Tensor self, @Const @ByRef Tensor other, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dims); @Namespace("at") public static native @ByVal Tensor linalg_tensorsolve(@Const @ByRef Tensor self, @Const @ByRef Tensor other); -@Namespace("at") public static native @ByVal Tensor linalg_tensorsolve(@Const @ByRef Tensor self, @Const @ByRef Tensor other, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... dims); +@Namespace("at") public static native @ByVal Tensor linalg_tensorsolve(@Const @ByRef Tensor self, @Const @ByRef Tensor other, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... dims); // aten::linalg_tensorsolve.out(Tensor self, Tensor other, int[]? dims=None, *, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor linalg_tensorsolve_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor other, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dims); +@Namespace("at") public static native @ByRef Tensor linalg_tensorsolve_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor other, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dims); @Namespace("at") public static native @ByRef Tensor linalg_tensorsolve_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor other); -@Namespace("at") public static native @ByRef Tensor linalg_tensorsolve_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor other, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... dims); +@Namespace("at") public static native @ByRef Tensor linalg_tensorsolve_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor other, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... dims); // aten::linalg_tensorsolve.out(Tensor self, Tensor other, int[]? dims=None, *, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor linalg_tensorsolve_outf(@Const @ByRef Tensor self, @Const @ByRef Tensor other, @ByVal LongArrayRefOptional dims, @ByRef Tensor out); @Namespace("at") public static native @ByRef Tensor linalg_tensorsolve_outf(@Const @ByRef Tensor self, @Const @ByRef Tensor other, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dims, @ByRef Tensor out); @@ -37137,12 +37052,12 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::linalg_vander(Tensor x, *, SymInt? N=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor linalg_vander(@Const @ByRef Tensor x, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional N); +@Namespace("at") public static native @ByVal Tensor linalg_vander(@Const @ByRef Tensor x, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional N); @Namespace("at") public static native @ByVal Tensor linalg_vander(@Const @ByRef Tensor x); // aten::linalg_vander(Tensor x, *, SymInt? N=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor linalg_vander_symint(@Const @ByRef Tensor x, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional N); +@Namespace("at") public static native @ByVal Tensor linalg_vander_symint(@Const @ByRef Tensor x, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional N); @Namespace("at") public static native @ByVal Tensor linalg_vander_symint(@Const @ByRef Tensor x); @@ -37211,14 +37126,14 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::linalg_vector_norm(Tensor self, Scalar ord=2, int[1]? dim=None, bool keepdim=False, *, ScalarType? 
dtype=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor linalg_vector_norm(@Const @ByRef Tensor self, @Const @ByRef(nullValue = "at::Scalar(2)") Scalar ord, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor linalg_vector_norm(@Const @ByRef Tensor self, @Const @ByRef(nullValue = "at::Scalar(2)") Scalar ord, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByVal Tensor linalg_vector_norm(@Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal Tensor linalg_vector_norm(@Const @ByRef Tensor self, @Const @ByRef(nullValue = "at::Scalar(2)") Scalar ord, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor linalg_vector_norm(@Const @ByRef Tensor self, @Const @ByRef(nullValue = "at::Scalar(2)") Scalar ord, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); // aten::linalg_vector_norm.out(Tensor self, Scalar ord=2, int[1]? dim=None, bool keepdim=False, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor linalg_vector_norm_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef(nullValue = "at::Scalar(2)") Scalar ord, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByRef Tensor linalg_vector_norm_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef(nullValue = "at::Scalar(2)") Scalar ord, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByRef Tensor linalg_vector_norm_out(@ByRef Tensor out, @Const @ByRef Tensor self); -@Namespace("at") public static native @ByRef Tensor linalg_vector_norm_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef(nullValue = "at::Scalar(2)") Scalar ord, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByRef Tensor linalg_vector_norm_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef(nullValue = "at::Scalar(2)") Scalar ord, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); // aten::linalg_vector_norm.out(Tensor self, Scalar ord=2, int[1]? dim=None, bool keepdim=False, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor linalg_vector_norm_outf(@Const @ByRef Tensor self, @Const @ByRef Scalar ord, @ByVal LongArrayRefOptional dim, @Cast("bool") boolean keepdim, @ByVal ScalarTypeOptional dtype, @ByRef Tensor out); @Namespace("at") public static native @ByRef Tensor linalg_vector_norm_outf(@Const @ByRef Tensor self, @Const @ByRef Scalar ord, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim, @ByVal ScalarTypeOptional dtype, @ByRef Tensor out); @@ -37251,11 +37166,11 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::linear(Tensor input, Tensor weight, Tensor? bias=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor linear(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias); +@Namespace("at") public static native @ByVal Tensor linear(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias); @Namespace("at") public static native @ByVal Tensor linear(@Const @ByRef Tensor input, @Const @ByRef Tensor weight); // aten::linear.out(Tensor input, Tensor weight, Tensor? bias=None, *, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor linear_out(@ByRef Tensor out, @Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias); +@Namespace("at") public static native @ByRef Tensor linear_out(@ByRef Tensor out, @Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias); @Namespace("at") public static native @ByRef Tensor linear_out(@ByRef Tensor out, @Const @ByRef Tensor input, @Const @ByRef Tensor weight); // aten::linear.out(Tensor input, Tensor weight, Tensor? bias=None, *, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor linear_outf(@Const @ByRef Tensor input, @Const @ByRef Tensor weight, @Const @ByRef TensorOptional bias, @ByRef Tensor out); @@ -37546,13 +37461,13 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::log_normal.out(Tensor self, float mean=1, float std=2, *, Generator? generator=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor log_normal_out(@ByRef Tensor out, @Const @ByRef Tensor self, double mean/*=1*/, double std/*=2*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByRef Tensor log_normal_out(@ByRef Tensor out, @Const @ByRef Tensor self, double mean/*=1*/, double std/*=2*/, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); @Namespace("at") public static native @ByRef Tensor log_normal_out(@ByRef Tensor out, @Const @ByRef Tensor self); // aten::log_normal.out(Tensor self, float mean=1, float std=2, *, Generator? generator=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor log_normal_outf(@Const @ByRef Tensor self, double mean, double std, @ByVal GeneratorOptional generator, @ByRef Tensor out); // aten::log_normal(Tensor self, float mean=1, float std=2, *, Generator? generator=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor log_normal(@Const @ByRef Tensor self, double mean/*=1*/, double std/*=2*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByVal Tensor log_normal(@Const @ByRef Tensor self, double mean/*=1*/, double std/*=2*/, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); @Namespace("at") public static native @ByVal Tensor log_normal(@Const @ByRef Tensor self); @@ -37688,17 +37603,17 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::log_softmax.int(Tensor self, int dim, ScalarType? dtype=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor log_softmax(@Const @ByRef Tensor self, @Cast("int64_t") long dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor log_softmax(@Const @ByRef Tensor self, @Cast("int64_t") long dim, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByVal Tensor log_softmax(@Const @ByRef Tensor self, @Cast("int64_t") long dim); // aten::log_softmax.int_out(Tensor self, int dim, ScalarType? dtype=None, *, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor log_softmax_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Cast("int64_t") long dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByRef Tensor log_softmax_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Cast("int64_t") long dim, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByRef Tensor log_softmax_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Cast("int64_t") long dim); // aten::log_softmax.int_out(Tensor self, int dim, ScalarType? dtype=None, *, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor log_softmax_outf(@Const @ByRef Tensor self, @Cast("int64_t") long dim, @ByVal ScalarTypeOptional dtype, @ByRef Tensor out); // aten::log_softmax.Dimname(Tensor self, Dimname dim, *, ScalarType? dtype=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor log_softmax(@Const @ByRef Tensor self, @ByVal Dimname dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor log_softmax(@Const @ByRef Tensor self, @ByVal Dimname dim, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByVal Tensor log_softmax(@Const @ByRef Tensor self, @ByVal Dimname dim); @@ -38012,15 +37927,15 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::logit(Tensor self, float? eps=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor logit(@Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional eps); +@Namespace("at") public static native @ByVal Tensor logit(@Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional eps); @Namespace("at") public static native @ByVal Tensor logit(@Const @ByRef Tensor self); // aten::logit_(Tensor(a!) self, float? eps=None) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor logit_(@ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional eps); +@Namespace("at") public static native @ByRef Tensor logit_(@ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional eps); @Namespace("at") public static native @ByRef Tensor logit_(@ByRef Tensor self); // aten::logit.out(Tensor self, float? eps=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor logit_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional eps); +@Namespace("at") public static native @ByRef Tensor logit_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional eps); @Namespace("at") public static native @ByRef Tensor logit_out(@ByRef Tensor out, @Const @ByRef Tensor self); // aten::logit.out(Tensor self, float? eps=None, *, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor logit_outf(@Const @ByRef Tensor self, @ByVal DoubleOptional eps, @ByRef Tensor out); @@ -38053,13 +37968,13 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::logit_backward.grad_input(Tensor grad_output, Tensor self, float? eps=None, *, Tensor(a!) grad_input) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor logit_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional eps); +@Namespace("at") public static native @ByRef Tensor logit_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional eps); @Namespace("at") public static native @ByRef Tensor logit_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @Const @ByRef Tensor self); // aten::logit_backward.grad_input(Tensor grad_output, Tensor self, float? eps=None, *, Tensor(a!) grad_input) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor logit_backward_outf(@Const @ByRef Tensor grad_output, @Const @ByRef Tensor self, @ByVal DoubleOptional eps, @ByRef Tensor grad_input); // aten::logit_backward(Tensor grad_output, Tensor self, float? eps=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor logit_backward(@Const @ByRef Tensor grad_output, @Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional eps); +@Namespace("at") public static native @ByVal Tensor logit_backward(@Const @ByRef Tensor grad_output, @Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional eps); @Namespace("at") public static native @ByVal Tensor logit_backward(@Const @ByRef Tensor grad_output, @Const @ByRef Tensor self); @@ -38300,9 +38215,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::lstm_cell(Tensor input, Tensor[] hx, Tensor w_ih, Tensor w_hh, Tensor? b_ih=None, Tensor? b_hh=None) -> (Tensor, Tensor) -@Namespace("at") public static native @ByVal T_TensorTensor_T lstm_cell(@Const @ByRef Tensor input, @ByVal TensorArrayRef hx, @Const @ByRef Tensor w_ih, @Const @ByRef Tensor w_hh, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional b_ih, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional b_hh); +@Namespace("at") public static native @ByVal T_TensorTensor_T lstm_cell(@Const @ByRef Tensor input, @ByVal TensorArrayRef hx, @Const @ByRef Tensor w_ih, @Const @ByRef Tensor w_hh, @Const @ByRef(nullValue = "std::optional{}") TensorOptional b_ih, @Const @ByRef(nullValue = "std::optional{}") TensorOptional b_hh); @Namespace("at") public static native @ByVal T_TensorTensor_T lstm_cell(@Const @ByRef Tensor input, @ByVal TensorArrayRef hx, @Const @ByRef Tensor w_ih, @Const @ByRef Tensor w_hh); -@Namespace("at") public static native @ByVal T_TensorTensor_T lstm_cell(@Const @ByRef Tensor input, @ByVal TensorVector hx, @Const @ByRef Tensor w_ih, @Const @ByRef Tensor w_hh, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional b_ih, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional b_hh); +@Namespace("at") public static native @ByVal T_TensorTensor_T lstm_cell(@Const @ByRef Tensor input, @ByVal TensorVector hx, @Const @ByRef Tensor w_ih, @Const @ByRef Tensor w_hh, @Const @ByRef(nullValue = "std::optional{}") TensorOptional b_ih, @Const @ByRef(nullValue = "std::optional{}") TensorOptional b_hh); @Namespace("at") public static native @ByVal T_TensorTensor_T lstm_cell(@Const @ByRef Tensor input, @ByVal TensorVector hx, @Const @ByRef Tensor w_ih, @Const @ByRef Tensor w_hh); @@ -39488,34 +39403,34 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::mean(Tensor self, *, ScalarType? 
dtype=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor mean(@Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor mean(@Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByVal Tensor mean(@Const @ByRef Tensor self); // aten::mean.dim(Tensor self, int[1]? dim, bool keepdim=False, *, ScalarType? dtype=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor mean(@Const @ByRef Tensor self, @ByVal LongArrayRefOptional dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor mean(@Const @ByRef Tensor self, @ByVal LongArrayRefOptional dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByVal Tensor mean(@Const @ByRef Tensor self, @ByVal LongArrayRefOptional dim); -@Namespace("at") public static native @ByVal Tensor mean(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor mean(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByVal Tensor mean(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... dim); // aten::mean.out(Tensor self, int[1]? dim, bool keepdim=False, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor mean_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRefOptional dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByRef Tensor mean_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRefOptional dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByRef Tensor mean_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRefOptional dim); -@Namespace("at") public static native @ByRef Tensor mean_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByRef Tensor mean_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByRef Tensor mean_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... dim); // aten::mean.out(Tensor self, int[1]? dim, bool keepdim=False, *, ScalarType? 
dtype=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor mean_outf(@Const @ByRef Tensor self, @ByVal LongArrayRefOptional dim, @Cast("bool") boolean keepdim, @ByVal ScalarTypeOptional dtype, @ByRef Tensor out); @Namespace("at") public static native @ByRef Tensor mean_outf(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim, @ByVal ScalarTypeOptional dtype, @ByRef Tensor out); // aten::mean.names_dim(Tensor self, Dimname[1] dim, bool keepdim=False, *, ScalarType? dtype=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor mean(@Const @ByRef Tensor self, @ByVal DimnameArrayRef dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor mean(@Const @ByRef Tensor self, @ByVal DimnameArrayRef dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByVal Tensor mean(@Const @ByRef Tensor self, @ByVal DimnameArrayRef dim); -@Namespace("at") public static native @ByVal Tensor mean(@Const @ByRef Tensor self, @ByVal DimnameVector dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor mean(@Const @ByRef Tensor self, @ByVal DimnameVector dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByVal Tensor mean(@Const @ByRef Tensor self, @ByVal DimnameVector dim); // aten::mean.names_out(Tensor self, Dimname[1] dim, bool keepdim=False, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor mean_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal DimnameArrayRef dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByRef Tensor mean_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal DimnameArrayRef dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByRef Tensor mean_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal DimnameArrayRef dim); -@Namespace("at") public static native @ByRef Tensor mean_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal DimnameVector dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByRef Tensor mean_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal DimnameVector dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByRef Tensor mean_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal DimnameVector dim); // aten::mean.names_out(Tensor self, Dimname[1] dim, bool keepdim=False, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!) 
@Namespace("at") public static native @ByRef Tensor mean_outf(@Const @ByRef Tensor self, @ByVal DimnameArrayRef dim, @Cast("bool") boolean keepdim, @ByVal ScalarTypeOptional dtype, @ByRef Tensor out); @@ -40314,11 +40229,11 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::mkldnn_linear(Tensor self, Tensor weight, Tensor? bias=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor mkldnn_linear(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias); +@Namespace("at") public static native @ByVal Tensor mkldnn_linear(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias); @Namespace("at") public static native @ByVal Tensor mkldnn_linear(@Const @ByRef Tensor self, @Const @ByRef Tensor weight); // aten::mkldnn_linear.out(Tensor self, Tensor weight, Tensor? bias=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor mkldnn_linear_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias); +@Namespace("at") public static native @ByRef Tensor mkldnn_linear_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias); @Namespace("at") public static native @ByRef Tensor mkldnn_linear_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight); // aten::mkldnn_linear.out(Tensor self, Tensor weight, Tensor? bias=None, *, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor mkldnn_linear_outf(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @Const @ByRef TensorOptional bias, @ByRef Tensor out); @@ -40627,24 +40542,24 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::mkldnn_reorder_conv2d_weight(Tensor self, SymInt[2] padding=0, SymInt[2] stride=1, SymInt[2] dilation=1, SymInt groups=1, SymInt[]? 
input_size=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor mkldnn_reorder_conv2d_weight(@Const @ByRef Tensor self, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation, @Cast("int64_t") long groups/*=1*/, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional input_size); +@Namespace("at") public static native @ByVal Tensor mkldnn_reorder_conv2d_weight(@Const @ByRef Tensor self, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation, @Cast("int64_t") long groups/*=1*/, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional input_size); @Namespace("at") public static native @ByVal Tensor mkldnn_reorder_conv2d_weight(@Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal Tensor mkldnn_reorder_conv2d_weight(@Const @ByRef Tensor self, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dilation, @Cast("int64_t") long groups/*=1*/, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... input_size); -@Namespace("at") public static native @ByVal Tensor mkldnn_reorder_conv2d_weight(@Const @ByRef Tensor self, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dilation, @Cast("int64_t") long groups/*=1*/, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional input_size); -@Namespace("at") public static native @ByVal Tensor mkldnn_reorder_conv2d_weight(@Const @ByRef Tensor self, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation, @Cast("int64_t") long groups/*=1*/, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... input_size); +@Namespace("at") public static native @ByVal Tensor mkldnn_reorder_conv2d_weight(@Const @ByRef Tensor self, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dilation, @Cast("int64_t") long groups/*=1*/, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... 
input_size); +@Namespace("at") public static native @ByVal Tensor mkldnn_reorder_conv2d_weight(@Const @ByRef Tensor self, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dilation, @Cast("int64_t") long groups/*=1*/, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional input_size); +@Namespace("at") public static native @ByVal Tensor mkldnn_reorder_conv2d_weight(@Const @ByRef Tensor self, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation, @Cast("int64_t") long groups/*=1*/, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... input_size); // aten::mkldnn_reorder_conv2d_weight(Tensor self, SymInt[2] padding=0, SymInt[2] stride=1, SymInt[2] dilation=1, SymInt groups=1, SymInt[]? input_size=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor mkldnn_reorder_conv2d_weight_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef dilation, @ByVal(nullValue = "c10::SymInt(1)") SymInt groups, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional input_size); +@Namespace("at") public static native @ByVal Tensor mkldnn_reorder_conv2d_weight_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef dilation, @ByVal(nullValue = "c10::SymInt(1)") SymInt groups, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional input_size); @Namespace("at") public static native @ByVal Tensor mkldnn_reorder_conv2d_weight_symint(@Const @ByRef Tensor self); // aten::mkldnn_reorder_conv2d_weight.out(Tensor self, SymInt[2] padding=0, SymInt[2] stride=1, SymInt[2] dilation=1, SymInt groups=1, SymInt[]? input_size=None, *, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor mkldnn_reorder_conv2d_weight_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation, @Cast("int64_t") long groups/*=1*/, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional input_size); +@Namespace("at") public static native @ByRef Tensor mkldnn_reorder_conv2d_weight_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation, @Cast("int64_t") long groups/*=1*/, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional input_size); @Namespace("at") public static native @ByRef Tensor mkldnn_reorder_conv2d_weight_out(@ByRef Tensor out, @Const @ByRef Tensor self); -@Namespace("at") public static native @ByRef Tensor mkldnn_reorder_conv2d_weight_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dilation, @Cast("int64_t") long groups/*=1*/, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... input_size); -@Namespace("at") public static native @ByRef Tensor mkldnn_reorder_conv2d_weight_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dilation, @Cast("int64_t") long groups/*=1*/, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional input_size); -@Namespace("at") public static native @ByRef Tensor mkldnn_reorder_conv2d_weight_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation, @Cast("int64_t") long groups/*=1*/, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... 
input_size); +@Namespace("at") public static native @ByRef Tensor mkldnn_reorder_conv2d_weight_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dilation, @Cast("int64_t") long groups/*=1*/, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... input_size); +@Namespace("at") public static native @ByRef Tensor mkldnn_reorder_conv2d_weight_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dilation, @Cast("int64_t") long groups/*=1*/, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional input_size); +@Namespace("at") public static native @ByRef Tensor mkldnn_reorder_conv2d_weight_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation, @Cast("int64_t") long groups/*=1*/, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... input_size); // aten::mkldnn_reorder_conv2d_weight.out(Tensor self, SymInt[2] padding=0, SymInt[2] stride=1, SymInt[2] dilation=1, SymInt groups=1, SymInt[]? input_size=None, *, Tensor(a!) out) -> Tensor(a!) @@ -40655,7 +40570,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::mkldnn_reorder_conv2d_weight.out(Tensor self, SymInt[2] padding=0, SymInt[2] stride=1, SymInt[2] dilation=1, SymInt groups=1, SymInt[]? input_size=None, *, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor mkldnn_reorder_conv2d_weight_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef dilation, @ByVal(nullValue = "c10::SymInt(1)") SymInt groups, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional input_size); +@Namespace("at") public static native @ByRef Tensor mkldnn_reorder_conv2d_weight_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef dilation, @ByVal(nullValue = "c10::SymInt(1)") SymInt groups, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional input_size); @Namespace("at") public static native @ByRef Tensor mkldnn_reorder_conv2d_weight_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self); @@ -40690,35 +40605,41 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include -// aten::mkldnn_reorder_conv3d_weight(Tensor self, SymInt[3] padding=0, SymInt[3] stride=1, SymInt[3] dilation=1, SymInt groups=1) -> Tensor -@Namespace("at") public static native @ByVal Tensor mkldnn_reorder_conv3d_weight(@Const @ByRef Tensor self, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation, @Cast("int64_t") long groups/*=1*/); +// aten::mkldnn_reorder_conv3d_weight(Tensor self, SymInt[3] padding=0, SymInt[3] stride=1, SymInt[3] dilation=1, SymInt groups=1, SymInt[]? 
input_size=None) -> Tensor +@Namespace("at") public static native @ByVal Tensor mkldnn_reorder_conv3d_weight(@Const @ByRef Tensor self, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation, @Cast("int64_t") long groups/*=1*/, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional input_size); @Namespace("at") public static native @ByVal Tensor mkldnn_reorder_conv3d_weight(@Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal Tensor mkldnn_reorder_conv3d_weight(@Const @ByRef Tensor self, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dilation, @Cast("int64_t") long groups/*=1*/); +@Namespace("at") public static native @ByVal Tensor mkldnn_reorder_conv3d_weight(@Const @ByRef Tensor self, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dilation, @Cast("int64_t") long groups/*=1*/, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... input_size); +@Namespace("at") public static native @ByVal Tensor mkldnn_reorder_conv3d_weight(@Const @ByRef Tensor self, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dilation, @Cast("int64_t") long groups/*=1*/, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional input_size); +@Namespace("at") public static native @ByVal Tensor mkldnn_reorder_conv3d_weight(@Const @ByRef Tensor self, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation, @Cast("int64_t") long groups/*=1*/, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... 
input_size); -// aten::mkldnn_reorder_conv3d_weight(Tensor self, SymInt[3] padding=0, SymInt[3] stride=1, SymInt[3] dilation=1, SymInt groups=1) -> Tensor -@Namespace("at") public static native @ByVal Tensor mkldnn_reorder_conv3d_weight_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef dilation, @ByVal(nullValue = "c10::SymInt(1)") SymInt groups); +// aten::mkldnn_reorder_conv3d_weight(Tensor self, SymInt[3] padding=0, SymInt[3] stride=1, SymInt[3] dilation=1, SymInt groups=1, SymInt[]? input_size=None) -> Tensor +@Namespace("at") public static native @ByVal Tensor mkldnn_reorder_conv3d_weight_symint(@Const @ByRef Tensor self, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef dilation, @ByVal(nullValue = "c10::SymInt(1)") SymInt groups, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional input_size); @Namespace("at") public static native @ByVal Tensor mkldnn_reorder_conv3d_weight_symint(@Const @ByRef Tensor self); -// aten::mkldnn_reorder_conv3d_weight.out(Tensor self, SymInt[3] padding=0, SymInt[3] stride=1, SymInt[3] dilation=1, SymInt groups=1, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor mkldnn_reorder_conv3d_weight_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation, @Cast("int64_t") long groups/*=1*/); +// aten::mkldnn_reorder_conv3d_weight.out(Tensor self, SymInt[3] padding=0, SymInt[3] stride=1, SymInt[3] dilation=1, SymInt groups=1, SymInt[]? input_size=None, *, Tensor(a!) out) -> Tensor(a!) 
+@Namespace("at") public static native @ByRef Tensor mkldnn_reorder_conv3d_weight_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation, @Cast("int64_t") long groups/*=1*/, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional input_size); @Namespace("at") public static native @ByRef Tensor mkldnn_reorder_conv3d_weight_out(@ByRef Tensor out, @Const @ByRef Tensor self); -@Namespace("at") public static native @ByRef Tensor mkldnn_reorder_conv3d_weight_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dilation, @Cast("int64_t") long groups/*=1*/); +@Namespace("at") public static native @ByRef Tensor mkldnn_reorder_conv3d_weight_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dilation, @Cast("int64_t") long groups/*=1*/, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... input_size); +@Namespace("at") public static native @ByRef Tensor mkldnn_reorder_conv3d_weight_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dilation, @Cast("int64_t") long groups/*=1*/, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional input_size); +@Namespace("at") public static native @ByRef Tensor mkldnn_reorder_conv3d_weight_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation, @Cast("int64_t") long groups/*=1*/, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... input_size); -// aten::mkldnn_reorder_conv3d_weight.out(Tensor self, SymInt[3] padding=0, SymInt[3] stride=1, SymInt[3] dilation=1, SymInt groups=1, *, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor mkldnn_reorder_conv3d_weight_outf(@Const @ByRef Tensor self, @ByVal LongArrayRef padding, @ByVal LongArrayRef stride, @ByVal LongArrayRef dilation, @Cast("int64_t") long groups, @ByRef Tensor out); -@Namespace("at") public static native @ByRef Tensor mkldnn_reorder_conv3d_weight_outf(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dilation, @Cast("int64_t") long groups, @ByRef Tensor out); +// aten::mkldnn_reorder_conv3d_weight.out(Tensor self, SymInt[3] padding=0, SymInt[3] stride=1, SymInt[3] dilation=1, SymInt groups=1, SymInt[]? input_size=None, *, Tensor(a!) out) -> Tensor(a!) +@Namespace("at") public static native @ByRef Tensor mkldnn_reorder_conv3d_weight_outf(@Const @ByRef Tensor self, @ByVal LongArrayRef padding, @ByVal LongArrayRef stride, @ByVal LongArrayRef dilation, @Cast("int64_t") long groups, @ByVal LongArrayRefOptional input_size, @ByRef Tensor out); +@Namespace("at") public static native @ByRef Tensor mkldnn_reorder_conv3d_weight_outf(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dilation, @Cast("int64_t") long groups, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] input_size, @ByRef Tensor out); +@Namespace("at") public static native @ByRef Tensor mkldnn_reorder_conv3d_weight_outf(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dilation, @Cast("int64_t") long groups, @ByVal LongArrayRefOptional input_size, @ByRef Tensor out); +@Namespace("at") public static native @ByRef Tensor mkldnn_reorder_conv3d_weight_outf(@Const @ByRef Tensor self, @ByVal LongArrayRef padding, @ByVal LongArrayRef stride, @ByVal LongArrayRef dilation, @Cast("int64_t") long groups, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] input_size, @ByRef Tensor out); -// aten::mkldnn_reorder_conv3d_weight.out(Tensor self, SymInt[3] padding=0, SymInt[3] stride=1, SymInt[3] dilation=1, SymInt groups=1, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor mkldnn_reorder_conv3d_weight_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef dilation, @ByVal(nullValue = "c10::SymInt(1)") SymInt groups); +// aten::mkldnn_reorder_conv3d_weight.out(Tensor self, SymInt[3] padding=0, SymInt[3] stride=1, SymInt[3] dilation=1, SymInt groups=1, SymInt[]? input_size=None, *, Tensor(a!) out) -> Tensor(a!) 
+@Namespace("at") public static native @ByRef Tensor mkldnn_reorder_conv3d_weight_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef dilation, @ByVal(nullValue = "c10::SymInt(1)") SymInt groups, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional input_size); @Namespace("at") public static native @ByRef Tensor mkldnn_reorder_conv3d_weight_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self); -// aten::mkldnn_reorder_conv3d_weight.out(Tensor self, SymInt[3] padding=0, SymInt[3] stride=1, SymInt[3] dilation=1, SymInt groups=1, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor mkldnn_reorder_conv3d_weight_symint_outf(@Const @ByRef Tensor self, @ByVal SymIntArrayRef padding, @ByVal SymIntArrayRef stride, @ByVal SymIntArrayRef dilation, @ByVal SymInt groups, @ByRef Tensor out); +// aten::mkldnn_reorder_conv3d_weight.out(Tensor self, SymInt[3] padding=0, SymInt[3] stride=1, SymInt[3] dilation=1, SymInt groups=1, SymInt[]? input_size=None, *, Tensor(a!) out) -> Tensor(a!) +@Namespace("at") public static native @ByRef Tensor mkldnn_reorder_conv3d_weight_symint_outf(@Const @ByRef Tensor self, @ByVal SymIntArrayRef padding, @ByVal SymIntArrayRef stride, @ByVal SymIntArrayRef dilation, @ByVal SymInt groups, @ByVal SymIntArrayRefOptional input_size, @ByRef Tensor out); @@ -41233,13 +41154,13 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::multi_margin_loss.out(Tensor self, Tensor target, Scalar p=1, Scalar margin=1, Tensor? weight=None, int reduction=Mean, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor multi_margin_loss_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "at::Scalar(1)") Scalar p, @Const @ByRef(nullValue = "at::Scalar(1)") Scalar margin, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/); +@Namespace("at") public static native @ByRef Tensor multi_margin_loss_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "at::Scalar(1)") Scalar p, @Const @ByRef(nullValue = "at::Scalar(1)") Scalar margin, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/); @Namespace("at") public static native @ByRef Tensor multi_margin_loss_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor target); // aten::multi_margin_loss.out(Tensor self, Tensor target, Scalar p=1, Scalar margin=1, Tensor? weight=None, int reduction=Mean, *, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor multi_margin_loss_outf(@Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef Scalar p, @Const @ByRef Scalar margin, @Const @ByRef TensorOptional weight, @Cast("int64_t") long reduction, @ByRef Tensor out); // aten::multi_margin_loss(Tensor self, Tensor target, Scalar p=1, Scalar margin=1, Tensor? 
weight=None, int reduction=Mean) -> Tensor -@Namespace("at") public static native @ByVal Tensor multi_margin_loss(@Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "at::Scalar(1)") Scalar p, @Const @ByRef(nullValue = "at::Scalar(1)") Scalar margin, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/); +@Namespace("at") public static native @ByVal Tensor multi_margin_loss(@Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "at::Scalar(1)") Scalar p, @Const @ByRef(nullValue = "at::Scalar(1)") Scalar margin, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/); @Namespace("at") public static native @ByVal Tensor multi_margin_loss(@Const @ByRef Tensor self, @Const @ByRef Tensor target); @@ -41270,13 +41191,13 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::multi_margin_loss_backward.grad_input(Tensor grad_output, Tensor self, Tensor target, Scalar p, Scalar margin, Tensor? weight=None, int reduction=Mean, *, Tensor(a!) grad_input) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor multi_margin_loss_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef Scalar p, @Const @ByRef Scalar margin, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/); +@Namespace("at") public static native @ByRef Tensor multi_margin_loss_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef Scalar p, @Const @ByRef Scalar margin, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/); @Namespace("at") public static native @ByRef Tensor multi_margin_loss_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef Scalar p, @Const @ByRef Scalar margin); // aten::multi_margin_loss_backward.grad_input(Tensor grad_output, Tensor self, Tensor target, Scalar p, Scalar margin, Tensor? weight=None, int reduction=Mean, *, Tensor(a!) grad_input) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor multi_margin_loss_backward_outf(@Const @ByRef Tensor grad_output, @Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef Scalar p, @Const @ByRef Scalar margin, @Const @ByRef TensorOptional weight, @Cast("int64_t") long reduction, @ByRef Tensor grad_input); // aten::multi_margin_loss_backward(Tensor grad_output, Tensor self, Tensor target, Scalar p, Scalar margin, Tensor? 
weight=None, int reduction=Mean) -> Tensor -@Namespace("at") public static native @ByVal Tensor multi_margin_loss_backward(@Const @ByRef Tensor grad_output, @Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef Scalar p, @Const @ByRef Scalar margin, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/); +@Namespace("at") public static native @ByVal Tensor multi_margin_loss_backward(@Const @ByRef Tensor grad_output, @Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef Scalar p, @Const @ByRef Scalar margin, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/); @Namespace("at") public static native @ByVal Tensor multi_margin_loss_backward(@Const @ByRef Tensor grad_output, @Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef Scalar p, @Const @ByRef Scalar margin); @@ -41414,13 +41335,13 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::multinomial.out(Tensor self, int num_samples, bool replacement=False, *, Generator? generator=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor multinomial_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Cast("int64_t") long num_samples, @Cast("bool") boolean replacement/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByRef Tensor multinomial_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Cast("int64_t") long num_samples, @Cast("bool") boolean replacement/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); @Namespace("at") public static native @ByRef Tensor multinomial_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Cast("int64_t") long num_samples); // aten::multinomial.out(Tensor self, int num_samples, bool replacement=False, *, Generator? generator=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor multinomial_outf(@Const @ByRef Tensor self, @Cast("int64_t") long num_samples, @Cast("bool") boolean replacement, @ByVal GeneratorOptional generator, @ByRef Tensor out); // aten::multinomial(Tensor self, int num_samples, bool replacement=False, *, Generator? generator=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor multinomial(@Const @ByRef Tensor self, @Cast("int64_t") long num_samples, @Cast("bool") boolean replacement/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByVal Tensor multinomial(@Const @ByRef Tensor self, @Cast("int64_t") long num_samples, @Cast("bool") boolean replacement/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); @Namespace("at") public static native @ByVal Tensor multinomial(@Const @ByRef Tensor self, @Cast("int64_t") long num_samples); @@ -41559,15 +41480,15 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::nan_to_num(Tensor self, float? nan=None, float? posinf=None, float? 
neginf=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor nan_to_num(@Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional nan, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional posinf, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional neginf); +@Namespace("at") public static native @ByVal Tensor nan_to_num(@Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional nan, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional posinf, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional neginf); @Namespace("at") public static native @ByVal Tensor nan_to_num(@Const @ByRef Tensor self); // aten::nan_to_num_(Tensor(a!) self, float? nan=None, float? posinf=None, float? neginf=None) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor nan_to_num_(@ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional nan, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional posinf, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional neginf); +@Namespace("at") public static native @ByRef Tensor nan_to_num_(@ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional nan, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional posinf, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional neginf); @Namespace("at") public static native @ByRef Tensor nan_to_num_(@ByRef Tensor self); // aten::nan_to_num.out(Tensor self, float? nan=None, float? posinf=None, float? neginf=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor nan_to_num_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional nan, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional posinf, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional neginf); +@Namespace("at") public static native @ByRef Tensor nan_to_num_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional nan, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional posinf, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional neginf); @Namespace("at") public static native @ByRef Tensor nan_to_num_out(@ByRef Tensor out, @Const @ByRef Tensor self); // aten::nan_to_num.out(Tensor self, float? nan=None, float? posinf=None, float? neginf=None, *, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor nan_to_num_outf(@Const @ByRef Tensor self, @ByVal DoubleOptional nan, @ByVal DoubleOptional posinf, @ByVal DoubleOptional neginf, @ByRef Tensor out); @@ -41600,14 +41521,14 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::nanmean(Tensor self, int[1]? dim=None, bool keepdim=False, *, ScalarType? 
dtype=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor nanmean(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor nanmean(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByVal Tensor nanmean(@Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal Tensor nanmean(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor nanmean(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); // aten::nanmean.out(Tensor self, int[1]? dim=None, bool keepdim=False, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor nanmean_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByRef Tensor nanmean_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByRef Tensor nanmean_out(@ByRef Tensor out, @Const @ByRef Tensor self); -@Namespace("at") public static native @ByRef Tensor nanmean_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByRef Tensor nanmean_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); // aten::nanmean.out(Tensor self, int[1]? dim=None, bool keepdim=False, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!) 
@Namespace("at") public static native @ByRef Tensor nanmean_outf(@Const @ByRef Tensor self, @ByVal LongArrayRefOptional dim, @Cast("bool") boolean keepdim, @ByVal ScalarTypeOptional dtype, @ByRef Tensor out); @Namespace("at") public static native @ByRef Tensor nanmean_outf(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim, @ByVal ScalarTypeOptional dtype, @ByRef Tensor out); @@ -41695,27 +41616,27 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::nanquantile(Tensor self, Tensor q, int? dim=None, bool keepdim=False, *, str interpolation='linear') -> Tensor -@Namespace("at") public static native @ByVal Tensor nanquantile(@Const @ByRef Tensor self, @Const @ByRef Tensor q, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView BytePointer interpolation/*="linear"*/); +@Namespace("at") public static native @ByVal Tensor nanquantile(@Const @ByRef Tensor self, @Const @ByRef Tensor q, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView BytePointer interpolation/*="linear"*/); @Namespace("at") public static native @ByVal Tensor nanquantile(@Const @ByRef Tensor self, @Const @ByRef Tensor q); -@Namespace("at") public static native @ByVal Tensor nanquantile(@Const @ByRef Tensor self, @Const @ByRef Tensor q, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView String interpolation/*="linear"*/); +@Namespace("at") public static native @ByVal Tensor nanquantile(@Const @ByRef Tensor self, @Const @ByRef Tensor q, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView String interpolation/*="linear"*/); // aten::nanquantile.out(Tensor self, Tensor q, int? dim=None, bool keepdim=False, *, str interpolation='linear', Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor nanquantile_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor q, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView BytePointer interpolation/*="linear"*/); +@Namespace("at") public static native @ByRef Tensor nanquantile_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor q, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView BytePointer interpolation/*="linear"*/); @Namespace("at") public static native @ByRef Tensor nanquantile_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor q); -@Namespace("at") public static native @ByRef Tensor nanquantile_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor q, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView String interpolation/*="linear"*/); +@Namespace("at") public static native @ByRef Tensor nanquantile_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor q, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView String interpolation/*="linear"*/); // aten::nanquantile.out(Tensor self, Tensor q, int? dim=None, bool keepdim=False, *, str interpolation='linear', Tensor(a!) out) -> Tensor(a!) 
@Namespace("at") public static native @ByRef Tensor nanquantile_outf(@Const @ByRef Tensor self, @Const @ByRef Tensor q, @ByVal LongOptional dim, @Cast("bool") boolean keepdim, @StringView BytePointer interpolation, @ByRef Tensor out); @Namespace("at") public static native @ByRef Tensor nanquantile_outf(@Const @ByRef Tensor self, @Const @ByRef Tensor q, @ByVal LongOptional dim, @Cast("bool") boolean keepdim, @StringView String interpolation, @ByRef Tensor out); // aten::nanquantile.scalar(Tensor self, float q, int? dim=None, bool keepdim=False, *, str interpolation='linear') -> Tensor -@Namespace("at") public static native @ByVal Tensor nanquantile(@Const @ByRef Tensor self, double q, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView BytePointer interpolation/*="linear"*/); +@Namespace("at") public static native @ByVal Tensor nanquantile(@Const @ByRef Tensor self, double q, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView BytePointer interpolation/*="linear"*/); @Namespace("at") public static native @ByVal Tensor nanquantile(@Const @ByRef Tensor self, double q); -@Namespace("at") public static native @ByVal Tensor nanquantile(@Const @ByRef Tensor self, double q, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView String interpolation/*="linear"*/); +@Namespace("at") public static native @ByVal Tensor nanquantile(@Const @ByRef Tensor self, double q, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView String interpolation/*="linear"*/); // aten::nanquantile.scalar_out(Tensor self, float q, int? dim=None, bool keepdim=False, *, str interpolation='linear', Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor nanquantile_out(@ByRef Tensor out, @Const @ByRef Tensor self, double q, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView BytePointer interpolation/*="linear"*/); +@Namespace("at") public static native @ByRef Tensor nanquantile_out(@ByRef Tensor out, @Const @ByRef Tensor self, double q, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView BytePointer interpolation/*="linear"*/); @Namespace("at") public static native @ByRef Tensor nanquantile_out(@ByRef Tensor out, @Const @ByRef Tensor self, double q); -@Namespace("at") public static native @ByRef Tensor nanquantile_out(@ByRef Tensor out, @Const @ByRef Tensor self, double q, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView String interpolation/*="linear"*/); +@Namespace("at") public static native @ByRef Tensor nanquantile_out(@ByRef Tensor out, @Const @ByRef Tensor self, double q, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView String interpolation/*="linear"*/); // aten::nanquantile.scalar_out(Tensor self, float q, int? dim=None, bool keepdim=False, *, str interpolation='linear', Tensor(a!) out) -> Tensor(a!) 
@Namespace("at") public static native @ByRef Tensor nanquantile_outf(@Const @ByRef Tensor self, double q, @ByVal LongOptional dim, @Cast("bool") boolean keepdim, @StringView BytePointer interpolation, @ByRef Tensor out); @Namespace("at") public static native @ByRef Tensor nanquantile_outf(@Const @ByRef Tensor self, double q, @ByVal LongOptional dim, @Cast("bool") boolean keepdim, @StringView String interpolation, @ByRef Tensor out); @@ -41748,14 +41669,14 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::nansum(Tensor self, int[1]? dim=None, bool keepdim=False, *, ScalarType? dtype=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor nansum(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor nansum(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByVal Tensor nansum(@Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal Tensor nansum(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor nansum(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); // aten::nansum.out(Tensor self, int[1]? dim=None, bool keepdim=False, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor nansum_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByRef Tensor nansum_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByRef Tensor nansum_out(@ByRef Tensor out, @Const @ByRef Tensor self); -@Namespace("at") public static native @ByRef Tensor nansum_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByRef Tensor nansum_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); // aten::nansum.out(Tensor self, int[1]? dim=None, bool keepdim=False, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor nansum_outf(@Const @ByRef Tensor self, @ByVal LongArrayRefOptional dim, @Cast("bool") boolean keepdim, @ByVal ScalarTypeOptional dtype, @ByRef Tensor out); @Namespace("at") public static native @ByRef Tensor nansum_outf(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim, @ByVal ScalarTypeOptional dtype, @ByRef Tensor out); @@ -42434,9 +42355,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::nested_to_padded_tensor(Tensor self, float padding, int[]? output_size=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor nested_to_padded_tensor(@Const @ByRef Tensor self, double padding, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional output_size); +@Namespace("at") public static native @ByVal Tensor nested_to_padded_tensor(@Const @ByRef Tensor self, double padding, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional output_size); @Namespace("at") public static native @ByVal Tensor nested_to_padded_tensor(@Const @ByRef Tensor self, double padding); -@Namespace("at") public static native @ByVal Tensor nested_to_padded_tensor(@Const @ByRef Tensor self, double padding, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... output_size); +@Namespace("at") public static native @ByVal Tensor nested_to_padded_tensor(@Const @ByRef Tensor self, double padding, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... output_size); @@ -42741,7 +42662,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::nll_loss.out(Tensor self, Tensor target, Tensor? 
weight=None, int reduction=Mean, SymInt ignore_index=-100, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor nll_loss_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/, @Cast("int64_t") long ignore_index/*=-100*/); +@Namespace("at") public static native @ByRef Tensor nll_loss_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/, @Cast("int64_t") long ignore_index/*=-100*/); @Namespace("at") public static native @ByRef Tensor nll_loss_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor target); @@ -42750,7 +42671,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::nll_loss.out(Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean, SymInt ignore_index=-100, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor nll_loss_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/, @ByVal(nullValue = "c10::SymInt(-100)") SymInt ignore_index); +@Namespace("at") public static native @ByRef Tensor nll_loss_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/, @ByVal(nullValue = "c10::SymInt(-100)") SymInt ignore_index); @Namespace("at") public static native @ByRef Tensor nll_loss_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor target); @@ -42759,12 +42680,12 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::nll_loss(Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean, SymInt ignore_index=-100) -> Tensor -@Namespace("at") public static native @ByVal Tensor nll_loss(@Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/, @Cast("int64_t") long ignore_index/*=-100*/); +@Namespace("at") public static native @ByVal Tensor nll_loss(@Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/, @Cast("int64_t") long ignore_index/*=-100*/); @Namespace("at") public static native @ByVal Tensor nll_loss(@Const @ByRef Tensor self, @Const @ByRef Tensor target); // aten::nll_loss(Tensor self, Tensor target, Tensor? 
weight=None, int reduction=Mean, SymInt ignore_index=-100) -> Tensor -@Namespace("at") public static native @ByVal Tensor nll_loss_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/, @ByVal(nullValue = "c10::SymInt(-100)") SymInt ignore_index); +@Namespace("at") public static native @ByVal Tensor nll_loss_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/, @ByVal(nullValue = "c10::SymInt(-100)") SymInt ignore_index); @Namespace("at") public static native @ByVal Tensor nll_loss_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor target); @@ -42796,7 +42717,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::nll_loss2d.out(Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean, SymInt ignore_index=-100, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor nll_loss2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/, @Cast("int64_t") long ignore_index/*=-100*/); +@Namespace("at") public static native @ByRef Tensor nll_loss2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/, @Cast("int64_t") long ignore_index/*=-100*/); @Namespace("at") public static native @ByRef Tensor nll_loss2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor target); @@ -42805,7 +42726,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::nll_loss2d.out(Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean, SymInt ignore_index=-100, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor nll_loss2d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/, @ByVal(nullValue = "c10::SymInt(-100)") SymInt ignore_index); +@Namespace("at") public static native @ByRef Tensor nll_loss2d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/, @ByVal(nullValue = "c10::SymInt(-100)") SymInt ignore_index); @Namespace("at") public static native @ByRef Tensor nll_loss2d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor target); @@ -42814,12 +42735,12 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::nll_loss2d(Tensor self, Tensor target, Tensor? 
weight=None, int reduction=Mean, SymInt ignore_index=-100) -> Tensor -@Namespace("at") public static native @ByVal Tensor nll_loss2d(@Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/, @Cast("int64_t") long ignore_index/*=-100*/); +@Namespace("at") public static native @ByVal Tensor nll_loss2d(@Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/, @Cast("int64_t") long ignore_index/*=-100*/); @Namespace("at") public static native @ByVal Tensor nll_loss2d(@Const @ByRef Tensor self, @Const @ByRef Tensor target); // aten::nll_loss2d(Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean, SymInt ignore_index=-100) -> Tensor -@Namespace("at") public static native @ByVal Tensor nll_loss2d_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/, @ByVal(nullValue = "c10::SymInt(-100)") SymInt ignore_index); +@Namespace("at") public static native @ByVal Tensor nll_loss2d_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/, @ByVal(nullValue = "c10::SymInt(-100)") SymInt ignore_index); @Namespace("at") public static native @ByVal Tensor nll_loss2d_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor target); @@ -43055,12 +42976,12 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::nll_loss_nd(Tensor self, Tensor target, Tensor? weight=None, int reduction=Mean, SymInt ignore_index=-100) -> Tensor -@Namespace("at") public static native @ByVal Tensor nll_loss_nd(@Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/, @Cast("int64_t") long ignore_index/*=-100*/); +@Namespace("at") public static native @ByVal Tensor nll_loss_nd(@Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/, @Cast("int64_t") long ignore_index/*=-100*/); @Namespace("at") public static native @ByVal Tensor nll_loss_nd(@Const @ByRef Tensor self, @Const @ByRef Tensor target); // aten::nll_loss_nd(Tensor self, Tensor target, Tensor? 
weight=None, int reduction=Mean, SymInt ignore_index=-100) -> Tensor -@Namespace("at") public static native @ByVal Tensor nll_loss_nd_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/, @ByVal(nullValue = "c10::SymInt(-100)") SymInt ignore_index); +@Namespace("at") public static native @ByVal Tensor nll_loss_nd_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor target, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @Cast("int64_t") long reduction/*=at::Reduction::Mean*/, @ByVal(nullValue = "c10::SymInt(-100)") SymInt ignore_index); @Namespace("at") public static native @ByVal Tensor nll_loss_nd_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor target); @@ -43322,40 +43243,40 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::normal_functional(Tensor self, float mean=0, float std=1, *, Generator? generator=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor normal_functional(@Const @ByRef Tensor self, double mean/*=0*/, double std/*=1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByVal Tensor normal_functional(@Const @ByRef Tensor self, double mean/*=0*/, double std/*=1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); @Namespace("at") public static native @ByVal Tensor normal_functional(@Const @ByRef Tensor self); // aten::normal.Tensor_float_out(Tensor mean, float std=1, *, Generator? generator=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor normal_out(@ByRef Tensor out, @Const @ByRef Tensor mean, double std/*=1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByRef Tensor normal_out(@ByRef Tensor out, @Const @ByRef Tensor mean, double std/*=1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); // aten::normal.Tensor_float_out(Tensor mean, float std=1, *, Generator? generator=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor normal_outf(@Const @ByRef Tensor mean, double std, @ByVal GeneratorOptional generator, @ByRef Tensor out); // aten::normal.Tensor_float(Tensor mean, float std=1, *, Generator? generator=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor normal(@Const @ByRef Tensor mean, double std/*=1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByVal Tensor normal(@Const @ByRef Tensor mean, double std/*=1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); @Namespace("at") public static native @ByVal Tensor normal(@Const @ByRef Tensor mean); // aten::normal.float_Tensor_out(float mean, Tensor std, *, Generator? generator=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor normal_out(@ByRef Tensor out, double mean, @Const @ByRef Tensor std, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByRef Tensor normal_out(@ByRef Tensor out, double mean, @Const @ByRef Tensor std, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); // aten::normal.float_Tensor_out(float mean, Tensor std, *, Generator? generator=None, Tensor(a!) 
out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor normal_outf(double mean, @Const @ByRef Tensor std, @ByVal GeneratorOptional generator, @ByRef Tensor out); // aten::normal.float_Tensor(float mean, Tensor std, *, Generator? generator=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor normal(double mean, @Const @ByRef Tensor std, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByVal Tensor normal(double mean, @Const @ByRef Tensor std, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); @Namespace("at") public static native @ByVal Tensor normal(double mean, @Const @ByRef Tensor std); // aten::normal.Tensor_Tensor_out(Tensor mean, Tensor std, *, Generator? generator=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor normal_out(@ByRef Tensor out, @Const @ByRef Tensor mean, @Const @ByRef Tensor std, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByRef Tensor normal_out(@ByRef Tensor out, @Const @ByRef Tensor mean, @Const @ByRef Tensor std, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); // aten::normal.Tensor_Tensor_out(Tensor mean, Tensor std, *, Generator? generator=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor normal_outf(@Const @ByRef Tensor mean, @Const @ByRef Tensor std, @ByVal GeneratorOptional generator, @ByRef Tensor out); // aten::normal.Tensor_Tensor(Tensor mean, Tensor std, *, Generator? generator=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor normal(@Const @ByRef Tensor mean, @Const @ByRef Tensor std, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByVal Tensor normal(@Const @ByRef Tensor mean, @Const @ByRef Tensor std, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); @Namespace("at") public static native @ByVal Tensor normal(@Const @ByRef Tensor mean, @Const @ByRef Tensor std); // aten::normal.float_float(float mean, float std, SymInt[] size, *, Generator? generator=None, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? 
pin_memory=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor normal(double mean, double std, @ByVal LongArrayRef size, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options); +@Namespace("at") public static native @ByVal Tensor normal(double mean, double std, @ByVal LongArrayRef size, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options); @Namespace("at") public static native @ByVal Tensor normal(double mean, double std, @ByVal LongArrayRef size); -@Namespace("at") public static native @ByVal Tensor normal(double mean, double std, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options); +@Namespace("at") public static native @ByVal Tensor normal(double mean, double std, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options); @Namespace("at") public static native @ByVal Tensor normal(double mean, double std, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... size); @@ -43365,7 +43286,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::normal.float_float(float mean, float std, SymInt[] size, *, Generator? generator=None, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor normal_symint(double mean, double std, @ByVal SymIntArrayRef size, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options); +@Namespace("at") public static native @ByVal Tensor normal_symint(double mean, double std, @ByVal SymIntArrayRef size, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options); @Namespace("at") public static native @ByVal Tensor normal_symint(double mean, double std, @ByVal SymIntArrayRef size); @@ -43374,8 +43295,8 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::normal.float_float_out(float mean, float std, SymInt[] size, *, Generator? generator=None, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor normal_out(@ByRef Tensor out, double mean, double std, @ByVal LongArrayRef size, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); -@Namespace("at") public static native @ByRef Tensor normal_out(@ByRef Tensor out, double mean, double std, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByRef Tensor normal_out(@ByRef Tensor out, double mean, double std, @ByVal LongArrayRef size, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByRef Tensor normal_out(@ByRef Tensor out, double mean, double std, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); @@ -43385,7 +43306,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::normal.float_float_out(float mean, float std, SymInt[] size, *, Generator? generator=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor normal_symint_out(@ByRef Tensor out, double mean, double std, @ByVal SymIntArrayRef size, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByRef Tensor normal_symint_out(@ByRef Tensor out, double mean, double std, @ByVal SymIntArrayRef size, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); @Namespace("at") public static native @ByRef Tensor normal_symint_out(@ByRef Tensor out, double mean, double std, @ByVal SymIntArrayRef size); @@ -43395,7 +43316,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::normal.out(Tensor self, float mean=0, float std=1, *, Generator? generator=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor normal_out(@ByRef Tensor out, @Const @ByRef Tensor self, double mean/*=0*/, double std/*=1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByRef Tensor normal_out(@ByRef Tensor out, @Const @ByRef Tensor self, double mean/*=0*/, double std/*=1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); // aten::normal.out(Tensor self, float mean=0, float std=1, *, Generator? generator=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor normal_outf(@Const @ByRef Tensor self, double mean, double std, @ByVal GeneratorOptional generator, @ByRef Tensor out); @@ -43663,13 +43584,13 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::ones_like(Tensor self, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, MemoryFormat? 
memory_format=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor ones_like(@Const @ByRef Tensor self, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByVal Tensor ones_like(@Const @ByRef Tensor self, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByVal Tensor ones_like(@Const @ByRef Tensor self); // aten::ones_like(Tensor self, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, MemoryFormat? memory_format=None) -> Tensor @Namespace("at") public static native @ByVal Tensor ones_like(@Const @ByRef Tensor self, @ByVal ScalarTypeOptional dtype, @ByVal LayoutOptional layout, @ByVal DeviceOptional device, @ByVal BoolOptional pin_memory, @ByVal MemoryFormatOptional memory_format); // aten::ones_like.out(Tensor self, *, MemoryFormat? memory_format=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor ones_like_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByRef Tensor ones_like_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByRef Tensor ones_like_out(@ByRef Tensor out, @Const @ByRef Tensor self); // aten::ones_like.out(Tensor self, *, MemoryFormat? memory_format=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor ones_like_outf(@Const @ByRef Tensor self, @ByVal MemoryFormatOptional memory_format, @ByRef Tensor out); @@ -43870,18 +43791,18 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::pad(Tensor self, SymInt[] pad, str mode="constant", float? value=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor pad(@Const @ByRef Tensor self, @ByVal LongArrayRef pad, @StringView BytePointer mode/*="constant"*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional value); +@Namespace("at") public static native @ByVal Tensor pad(@Const @ByRef Tensor self, @ByVal LongArrayRef pad, @StringView BytePointer mode/*="constant"*/, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional value); @Namespace("at") public static native @ByVal Tensor pad(@Const @ByRef Tensor self, @ByVal LongArrayRef pad); -@Namespace("at") public static native @ByVal Tensor pad(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] pad, @StringView String mode/*="constant"*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional value); +@Namespace("at") public static native @ByVal Tensor pad(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] pad, @StringView String mode/*="constant"*/, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional value); @Namespace("at") public static native @ByVal Tensor pad(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... 
pad); -@Namespace("at") public static native @ByVal Tensor pad(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] pad, @StringView BytePointer mode/*="constant"*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional value); -@Namespace("at") public static native @ByVal Tensor pad(@Const @ByRef Tensor self, @ByVal LongArrayRef pad, @StringView String mode/*="constant"*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional value); +@Namespace("at") public static native @ByVal Tensor pad(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] pad, @StringView BytePointer mode/*="constant"*/, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional value); +@Namespace("at") public static native @ByVal Tensor pad(@Const @ByRef Tensor self, @ByVal LongArrayRef pad, @StringView String mode/*="constant"*/, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional value); // aten::pad(Tensor self, SymInt[] pad, str mode="constant", float? value=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor pad_symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef pad, @StringView BytePointer mode/*="constant"*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional value); +@Namespace("at") public static native @ByVal Tensor pad_symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef pad, @StringView BytePointer mode/*="constant"*/, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional value); @Namespace("at") public static native @ByVal Tensor pad_symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef pad); -@Namespace("at") public static native @ByVal Tensor pad_symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef pad, @StringView String mode/*="constant"*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional value); +@Namespace("at") public static native @ByVal Tensor pad_symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef pad, @StringView String mode/*="constant"*/, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional value); @@ -44205,11 +44126,11 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::poisson(Tensor self, Generator? generator=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor poisson(@Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByVal Tensor poisson(@Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); @Namespace("at") public static native @ByVal Tensor poisson(@Const @ByRef Tensor self); // aten::poisson.out(Tensor self, Generator? generator=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor poisson_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByRef Tensor poisson_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); @Namespace("at") public static native @ByRef Tensor poisson_out(@ByRef Tensor out, @Const @ByRef Tensor self); // aten::poisson.out(Tensor self, Generator? generator=None, *, Tensor(a!) out) -> Tensor(a!) 
@Namespace("at") public static native @ByRef Tensor poisson_outf(@Const @ByRef Tensor self, @ByVal GeneratorOptional generator, @ByRef Tensor out); @@ -44453,31 +44374,31 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::prod(Tensor self, *, ScalarType? dtype=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor prod(@Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor prod(@Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByVal Tensor prod(@Const @ByRef Tensor self); // aten::prod.dim_int(Tensor self, int dim, bool keepdim=False, *, ScalarType? dtype=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor prod(@Const @ByRef Tensor self, @Cast("int64_t") long dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor prod(@Const @ByRef Tensor self, @Cast("int64_t") long dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByVal Tensor prod(@Const @ByRef Tensor self, @Cast("int64_t") long dim); // aten::prod.int_out(Tensor self, int dim, bool keepdim=False, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor prod_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Cast("int64_t") long dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByRef Tensor prod_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Cast("int64_t") long dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByRef Tensor prod_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Cast("int64_t") long dim); // aten::prod.int_out(Tensor self, int dim, bool keepdim=False, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor prod_outf(@Const @ByRef Tensor self, @Cast("int64_t") long dim, @Cast("bool") boolean keepdim, @ByVal ScalarTypeOptional dtype, @ByRef Tensor out); // aten::prod.dim_Dimname(Tensor self, Dimname dim, bool keepdim=False, *, ScalarType? dtype=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor prod(@Const @ByRef Tensor self, @ByVal Dimname dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor prod(@Const @ByRef Tensor self, @ByVal Dimname dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByVal Tensor prod(@Const @ByRef Tensor self, @ByVal Dimname dim); // aten::prod.Dimname_out(Tensor self, Dimname dim, bool keepdim=False, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor prod_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal Dimname dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByRef Tensor prod_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal Dimname dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByRef Tensor prod_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal Dimname dim); // aten::prod.Dimname_out(Tensor self, Dimname dim, bool keepdim=False, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor prod_outf(@Const @ByRef Tensor self, @ByVal Dimname dim, @Cast("bool") boolean keepdim, @ByVal ScalarTypeOptional dtype, @ByRef Tensor out); // aten::prod.out(Tensor self, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor prod_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByRef Tensor prod_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByRef Tensor prod_out(@ByRef Tensor out, @Const @ByRef Tensor self); // aten::prod.out(Tensor self, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor prod_outf(@Const @ByRef Tensor self, @ByVal ScalarTypeOptional dtype, @ByRef Tensor out); @@ -44802,27 +44723,27 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::quantile(Tensor self, Tensor q, int? dim=None, bool keepdim=False, *, str interpolation='linear') -> Tensor -@Namespace("at") public static native @ByVal Tensor quantile(@Const @ByRef Tensor self, @Const @ByRef Tensor q, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView BytePointer interpolation/*="linear"*/); +@Namespace("at") public static native @ByVal Tensor quantile(@Const @ByRef Tensor self, @Const @ByRef Tensor q, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView BytePointer interpolation/*="linear"*/); @Namespace("at") public static native @ByVal Tensor quantile(@Const @ByRef Tensor self, @Const @ByRef Tensor q); -@Namespace("at") public static native @ByVal Tensor quantile(@Const @ByRef Tensor self, @Const @ByRef Tensor q, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView String interpolation/*="linear"*/); +@Namespace("at") public static native @ByVal Tensor quantile(@Const @ByRef Tensor self, @Const @ByRef Tensor q, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView String interpolation/*="linear"*/); // aten::quantile.out(Tensor self, Tensor q, int? dim=None, bool keepdim=False, *, str interpolation='linear', Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor quantile_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor q, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView BytePointer interpolation/*="linear"*/); +@Namespace("at") public static native @ByRef Tensor quantile_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor q, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView BytePointer interpolation/*="linear"*/); @Namespace("at") public static native @ByRef Tensor quantile_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor q); -@Namespace("at") public static native @ByRef Tensor quantile_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor q, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView String interpolation/*="linear"*/); +@Namespace("at") public static native @ByRef Tensor quantile_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor q, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView String interpolation/*="linear"*/); // aten::quantile.out(Tensor self, Tensor q, int? dim=None, bool keepdim=False, *, str interpolation='linear', Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor quantile_outf(@Const @ByRef Tensor self, @Const @ByRef Tensor q, @ByVal LongOptional dim, @Cast("bool") boolean keepdim, @StringView BytePointer interpolation, @ByRef Tensor out); @Namespace("at") public static native @ByRef Tensor quantile_outf(@Const @ByRef Tensor self, @Const @ByRef Tensor q, @ByVal LongOptional dim, @Cast("bool") boolean keepdim, @StringView String interpolation, @ByRef Tensor out); // aten::quantile.scalar(Tensor self, float q, int? dim=None, bool keepdim=False, *, str interpolation='linear') -> Tensor -@Namespace("at") public static native @ByVal Tensor quantile(@Const @ByRef Tensor self, double q, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView BytePointer interpolation/*="linear"*/); +@Namespace("at") public static native @ByVal Tensor quantile(@Const @ByRef Tensor self, double q, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView BytePointer interpolation/*="linear"*/); @Namespace("at") public static native @ByVal Tensor quantile(@Const @ByRef Tensor self, double q); -@Namespace("at") public static native @ByVal Tensor quantile(@Const @ByRef Tensor self, double q, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView String interpolation/*="linear"*/); +@Namespace("at") public static native @ByVal Tensor quantile(@Const @ByRef Tensor self, double q, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView String interpolation/*="linear"*/); // aten::quantile.scalar_out(Tensor self, float q, int? dim=None, bool keepdim=False, *, str interpolation='linear', Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor quantile_out(@ByRef Tensor out, @Const @ByRef Tensor self, double q, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView BytePointer interpolation/*="linear"*/); +@Namespace("at") public static native @ByRef Tensor quantile_out(@ByRef Tensor out, @Const @ByRef Tensor self, double q, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView BytePointer interpolation/*="linear"*/); @Namespace("at") public static native @ByRef Tensor quantile_out(@ByRef Tensor out, @Const @ByRef Tensor self, double q); -@Namespace("at") public static native @ByRef Tensor quantile_out(@ByRef Tensor out, @Const @ByRef Tensor self, double q, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView String interpolation/*="linear"*/); +@Namespace("at") public static native @ByRef Tensor quantile_out(@ByRef Tensor out, @Const @ByRef Tensor self, double q, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @Cast("bool") boolean keepdim/*=false*/, @StringView String interpolation/*="linear"*/); // aten::quantile.scalar_out(Tensor self, float q, int? dim=None, bool keepdim=False, *, str interpolation='linear', Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor quantile_outf(@Const @ByRef Tensor self, double q, @ByVal LongOptional dim, @Cast("bool") boolean keepdim, @StringView BytePointer interpolation, @ByRef Tensor out); @Namespace("at") public static native @ByRef Tensor quantile_outf(@Const @ByRef Tensor self, double q, @ByVal LongOptional dim, @Cast("bool") boolean keepdim, @StringView String interpolation, @ByRef Tensor out); @@ -45482,13 +45403,13 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::rand_like(Tensor self, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, MemoryFormat? memory_format=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor rand_like(@Const @ByRef Tensor self, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByVal Tensor rand_like(@Const @ByRef Tensor self, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByVal Tensor rand_like(@Const @ByRef Tensor self); // aten::rand_like(Tensor self, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, MemoryFormat? memory_format=None) -> Tensor @Namespace("at") public static native @ByVal Tensor rand_like(@Const @ByRef Tensor self, @ByVal ScalarTypeOptional dtype, @ByVal LayoutOptional layout, @ByVal DeviceOptional device, @ByVal BoolOptional pin_memory, @ByVal MemoryFormatOptional memory_format); // aten::rand_like.out(Tensor self, *, MemoryFormat? memory_format=None, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor rand_like_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByRef Tensor rand_like_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByRef Tensor rand_like_out(@ByRef Tensor out, @Const @ByRef Tensor self); // aten::rand_like.out(Tensor self, *, MemoryFormat? memory_format=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor rand_like_outf(@Const @ByRef Tensor self, @ByVal MemoryFormatOptional memory_format, @ByRef Tensor out); @@ -45704,7 +45625,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::randint_like(Tensor self, SymInt high, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, MemoryFormat? memory_format=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor randint_like(@Const @ByRef Tensor self, @Cast("int64_t") long high, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByVal Tensor randint_like(@Const @ByRef Tensor self, @Cast("int64_t") long high, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByVal Tensor randint_like(@Const @ByRef Tensor self, @Cast("int64_t") long high); @@ -45713,7 +45634,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::randint_like(Tensor self, SymInt high, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, MemoryFormat? memory_format=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor randint_like_symint(@Const @ByRef Tensor self, @ByVal SymInt high, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByVal Tensor randint_like_symint(@Const @ByRef Tensor self, @ByVal SymInt high, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByVal Tensor randint_like_symint(@Const @ByRef Tensor self, @ByVal SymInt high); @@ -45722,7 +45643,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::randint_like.low_dtype(Tensor self, SymInt low, SymInt high, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, MemoryFormat? 
memory_format=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor randint_like(@Const @ByRef Tensor self, @Cast("int64_t") long low, @Cast("int64_t") long high, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByVal Tensor randint_like(@Const @ByRef Tensor self, @Cast("int64_t") long low, @Cast("int64_t") long high, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByVal Tensor randint_like(@Const @ByRef Tensor self, @Cast("int64_t") long low, @Cast("int64_t") long high); @@ -45731,7 +45652,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::randint_like.low_dtype(Tensor self, SymInt low, SymInt high, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, MemoryFormat? memory_format=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor randint_like_symint(@Const @ByRef Tensor self, @ByVal SymInt low, @ByVal SymInt high, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByVal Tensor randint_like_symint(@Const @ByRef Tensor self, @ByVal SymInt low, @ByVal SymInt high, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByVal Tensor randint_like_symint(@Const @ByRef Tensor self, @ByVal SymInt low, @ByVal SymInt high); @@ -45740,7 +45661,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::randint_like.out(Tensor self, SymInt high, *, MemoryFormat? memory_format=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor randint_like_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Cast("int64_t") long high, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByRef Tensor randint_like_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Cast("int64_t") long high, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByRef Tensor randint_like_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Cast("int64_t") long high); @@ -45749,7 +45670,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::randint_like.out(Tensor self, SymInt high, *, MemoryFormat? memory_format=None, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor randint_like_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal SymInt high, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByRef Tensor randint_like_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal SymInt high, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByRef Tensor randint_like_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal SymInt high); @@ -45758,7 +45679,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::randint_like.low_dtype_out(Tensor self, SymInt low, SymInt high, *, MemoryFormat? memory_format=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor randint_like_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Cast("int64_t") long low, @Cast("int64_t") long high, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByRef Tensor randint_like_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Cast("int64_t") long low, @Cast("int64_t") long high, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByRef Tensor randint_like_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Cast("int64_t") long low, @Cast("int64_t") long high); @@ -45767,7 +45688,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::randint_like.low_dtype_out(Tensor self, SymInt low, SymInt high, *, MemoryFormat? memory_format=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor randint_like_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal SymInt low, @ByVal SymInt high, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByRef Tensor randint_like_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal SymInt low, @ByVal SymInt high, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByRef Tensor randint_like_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal SymInt low, @ByVal SymInt high); @@ -45986,13 +45907,13 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::randn_like(Tensor self, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, MemoryFormat? memory_format=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor randn_like(@Const @ByRef Tensor self, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByVal Tensor randn_like(@Const @ByRef Tensor self, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByVal Tensor randn_like(@Const @ByRef Tensor self); // aten::randn_like(Tensor self, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, MemoryFormat? 
memory_format=None) -> Tensor @Namespace("at") public static native @ByVal Tensor randn_like(@Const @ByRef Tensor self, @ByVal ScalarTypeOptional dtype, @ByVal LayoutOptional layout, @ByVal DeviceOptional device, @ByVal BoolOptional pin_memory, @ByVal MemoryFormatOptional memory_format); // aten::randn_like.out(Tensor self, *, MemoryFormat? memory_format=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor randn_like_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByRef Tensor randn_like_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByRef Tensor randn_like_out(@ByRef Tensor out, @Const @ByRef Tensor self); // aten::randn_like.out(Tensor self, *, MemoryFormat? memory_format=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor randn_like_outf(@Const @ByRef Tensor self, @ByVal MemoryFormatOptional memory_format, @ByRef Tensor out); @@ -46025,33 +45946,33 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::random.from_out(Tensor self, int from, int? to, *, Generator? generator=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor random_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Cast("int64_t") long from, @ByVal LongOptional to, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByRef Tensor random_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Cast("int64_t") long from, @ByVal LongOptional to, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); @Namespace("at") public static native @ByRef Tensor random_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Cast("int64_t") long from, @ByVal LongOptional to); // aten::random.from_out(Tensor self, int from, int? to, *, Generator? generator=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor random_outf(@Const @ByRef Tensor self, @Cast("int64_t") long from, @ByVal LongOptional to, @ByVal GeneratorOptional generator, @ByRef Tensor out); // aten::random.from(Tensor self, int from, int? to, *, Generator? generator=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor random(@Const @ByRef Tensor self, @Cast("int64_t") long from, @ByVal LongOptional to, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByVal Tensor random(@Const @ByRef Tensor self, @Cast("int64_t") long from, @ByVal LongOptional to, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); @Namespace("at") public static native @ByVal Tensor random(@Const @ByRef Tensor self, @Cast("int64_t") long from, @ByVal LongOptional to); // aten::random.to_out(Tensor self, int to, *, Generator? generator=None, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor random_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Cast("int64_t") long to, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByRef Tensor random_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Cast("int64_t") long to, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); @Namespace("at") public static native @ByRef Tensor random_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Cast("int64_t") long to); // aten::random.to_out(Tensor self, int to, *, Generator? generator=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor random_outf(@Const @ByRef Tensor self, @Cast("int64_t") long to, @ByVal GeneratorOptional generator, @ByRef Tensor out); // aten::random.to(Tensor self, int to, *, Generator? generator=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor random(@Const @ByRef Tensor self, @Cast("int64_t") long to, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByVal Tensor random(@Const @ByRef Tensor self, @Cast("int64_t") long to, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); @Namespace("at") public static native @ByVal Tensor random(@Const @ByRef Tensor self, @Cast("int64_t") long to); // aten::random.out(Tensor self, *, Generator? generator=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor random_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByRef Tensor random_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); @Namespace("at") public static native @ByRef Tensor random_out(@ByRef Tensor out, @Const @ByRef Tensor self); // aten::random.out(Tensor self, *, Generator? generator=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor random_outf(@Const @ByRef Tensor self, @ByVal GeneratorOptional generator, @ByRef Tensor out); // aten::random(Tensor self, *, Generator? generator=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor random(@Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByVal Tensor random(@Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); @Namespace("at") public static native @ByVal Tensor random(@Const @ByRef Tensor self); @@ -46933,37 +46854,37 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::repeat_interleave.Tensor(Tensor repeats, *, SymInt? output_size=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor repeat_interleave(@Const @ByRef Tensor repeats, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional output_size); +@Namespace("at") public static native @ByVal Tensor repeat_interleave(@Const @ByRef Tensor repeats, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional output_size); @Namespace("at") public static native @ByVal Tensor repeat_interleave(@Const @ByRef Tensor repeats); // aten::repeat_interleave.Tensor(Tensor repeats, *, SymInt? 
output_size=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor repeat_interleave_symint(@Const @ByRef Tensor repeats, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional output_size); +@Namespace("at") public static native @ByVal Tensor repeat_interleave_symint(@Const @ByRef Tensor repeats, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional output_size); @Namespace("at") public static native @ByVal Tensor repeat_interleave_symint(@Const @ByRef Tensor repeats); // aten::repeat_interleave.self_Tensor(Tensor self, Tensor repeats, int? dim=None, *, SymInt? output_size=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor repeat_interleave(@Const @ByRef Tensor self, @Const @ByRef Tensor repeats, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional output_size); +@Namespace("at") public static native @ByVal Tensor repeat_interleave(@Const @ByRef Tensor self, @Const @ByRef Tensor repeats, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional output_size); @Namespace("at") public static native @ByVal Tensor repeat_interleave(@Const @ByRef Tensor self, @Const @ByRef Tensor repeats); // aten::repeat_interleave.self_Tensor(Tensor self, Tensor repeats, int? dim=None, *, SymInt? output_size=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor repeat_interleave_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor repeats, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional output_size); +@Namespace("at") public static native @ByVal Tensor repeat_interleave_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor repeats, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional output_size); @Namespace("at") public static native @ByVal Tensor repeat_interleave_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor repeats); // aten::repeat_interleave.self_int(Tensor self, SymInt repeats, int? dim=None, *, SymInt? output_size=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor repeat_interleave(@Const @ByRef Tensor self, @Cast("int64_t") long repeats, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional output_size); +@Namespace("at") public static native @ByVal Tensor repeat_interleave(@Const @ByRef Tensor self, @Cast("int64_t") long repeats, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional output_size); @Namespace("at") public static native @ByVal Tensor repeat_interleave(@Const @ByRef Tensor self, @Cast("int64_t") long repeats); // aten::repeat_interleave.self_int(Tensor self, SymInt repeats, int? dim=None, *, SymInt? 
output_size=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor repeat_interleave_symint(@Const @ByRef Tensor self, @ByVal SymInt repeats, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional output_size); +@Namespace("at") public static native @ByVal Tensor repeat_interleave_symint(@Const @ByRef Tensor self, @ByVal SymInt repeats, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional output_size); @Namespace("at") public static native @ByVal Tensor repeat_interleave_symint(@Const @ByRef Tensor self, @ByVal SymInt repeats); // aten::repeat_interleave.Tensor_out(Tensor repeats, *, SymInt? output_size=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor repeat_interleave_out(@ByRef Tensor out, @Const @ByRef Tensor repeats, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional output_size); +@Namespace("at") public static native @ByRef Tensor repeat_interleave_out(@ByRef Tensor out, @Const @ByRef Tensor repeats, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional output_size); @Namespace("at") public static native @ByRef Tensor repeat_interleave_out(@ByRef Tensor out, @Const @ByRef Tensor repeats); @@ -46972,7 +46893,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::repeat_interleave.Tensor_out(Tensor repeats, *, SymInt? output_size=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor repeat_interleave_symint_out(@ByRef Tensor out, @Const @ByRef Tensor repeats, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional output_size); +@Namespace("at") public static native @ByRef Tensor repeat_interleave_symint_out(@ByRef Tensor out, @Const @ByRef Tensor repeats, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional output_size); @Namespace("at") public static native @ByRef Tensor repeat_interleave_symint_out(@ByRef Tensor out, @Const @ByRef Tensor repeats); @@ -47425,9 +47346,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::resize.out(Tensor self, SymInt[] size, *, MemoryFormat? memory_format=None, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @Const @ByRef Tensor resize_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRef size, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @Const @ByRef Tensor resize_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRef size, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @Const @ByRef Tensor resize_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRef size); -@Namespace("at") public static native @Const @ByRef Tensor resize_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @Const @ByRef Tensor resize_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @Const @ByRef Tensor resize_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... size); @@ -47437,7 +47358,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::resize.out(Tensor self, SymInt[] size, *, MemoryFormat? memory_format=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @Const @ByRef Tensor resize_symint_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal SymIntArrayRef size, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @Const @ByRef Tensor resize_symint_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal SymIntArrayRef size, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @Const @ByRef Tensor resize_symint_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @ByVal SymIntArrayRef size); @@ -47446,14 +47367,14 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::resize(Tensor self, SymInt[] size, *, MemoryFormat? 
memory_format=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor resize(@Const @ByRef Tensor self, @ByVal LongArrayRef size, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByVal Tensor resize(@Const @ByRef Tensor self, @ByVal LongArrayRef size, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByVal Tensor resize(@Const @ByRef Tensor self, @ByVal LongArrayRef size); -@Namespace("at") public static native @ByVal Tensor resize(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByVal Tensor resize(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByVal Tensor resize(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... size); // aten::resize(Tensor self, SymInt[] size, *, MemoryFormat? memory_format=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor resize_symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef size, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByVal Tensor resize_symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef size, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByVal Tensor resize_symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef size); @@ -47485,17 +47406,17 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::resize_as_(Tensor(a!) self, Tensor the_template, *, MemoryFormat? memory_format=None) -> Tensor(a!) -@Namespace("at") public static native @Const @ByRef Tensor resize_as_(@Const @ByRef Tensor self, @Const @ByRef Tensor the_template, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @Const @ByRef Tensor resize_as_(@Const @ByRef Tensor self, @Const @ByRef Tensor the_template, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @Const @ByRef Tensor resize_as_(@Const @ByRef Tensor self, @Const @ByRef Tensor the_template); // aten::resize_as.out(Tensor self, Tensor the_template, *, MemoryFormat? memory_format=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @Const @ByRef Tensor resize_as_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor the_template, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @Const @ByRef Tensor resize_as_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor the_template, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @Const @ByRef Tensor resize_as_out(@Const @ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor the_template); // aten::resize_as.out(Tensor self, Tensor the_template, *, MemoryFormat? 
memory_format=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @Const @ByRef Tensor resize_as_outf(@Const @ByRef Tensor self, @Const @ByRef Tensor the_template, @ByVal MemoryFormatOptional memory_format, @Const @ByRef Tensor out); // aten::resize_as(Tensor self, Tensor the_template, *, MemoryFormat? memory_format=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor resize_as(@Const @ByRef Tensor self, @Const @ByRef Tensor the_template, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByVal Tensor resize_as(@Const @ByRef Tensor self, @Const @ByRef Tensor the_template, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByVal Tensor resize_as(@Const @ByRef Tensor self, @Const @ByRef Tensor the_template); @@ -47694,6 +47615,39 @@ public class torch extends org.bytedeco.pytorch.presets.torch { +// Parsed from ATen/ops/rms_norm.h + +// #pragma once + +// @generated by torchgen/gen.py from Function.h + +// #include +// #include +// #include +// #include +// #include +// #include +// #include +// #include +// #include +// #include +// #include +// #include + + + +// #include + + +// aten::rms_norm(Tensor input, int[] normalized_shape, Tensor? weight=None, float? eps=None) -> Tensor +@Namespace("at") public static native @ByVal Tensor rms_norm(@Const @ByRef Tensor input, @ByVal LongArrayRef normalized_shape, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional eps); +@Namespace("at") public static native @ByVal Tensor rms_norm(@Const @ByRef Tensor input, @ByVal LongArrayRef normalized_shape); +@Namespace("at") public static native @ByVal Tensor rms_norm(@Const @ByRef Tensor input, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] normalized_shape, @Const @ByRef(nullValue = "std::optional{}") TensorOptional weight, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional eps); +@Namespace("at") public static native @ByVal Tensor rms_norm(@Const @ByRef Tensor input, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... normalized_shape); + + + + // Parsed from ATen/ops/rnn_relu.h // #pragma once @@ -47754,7 +47708,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::rnn_relu_cell(Tensor input, Tensor hx, Tensor w_ih, Tensor w_hh, Tensor? b_ih=None, Tensor? b_hh=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor rnn_relu_cell(@Const @ByRef Tensor input, @Const @ByRef Tensor hx, @Const @ByRef Tensor w_ih, @Const @ByRef Tensor w_hh, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional b_ih, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional b_hh); +@Namespace("at") public static native @ByVal Tensor rnn_relu_cell(@Const @ByRef Tensor input, @Const @ByRef Tensor hx, @Const @ByRef Tensor w_ih, @Const @ByRef Tensor w_hh, @Const @ByRef(nullValue = "std::optional{}") TensorOptional b_ih, @Const @ByRef(nullValue = "std::optional{}") TensorOptional b_hh); @Namespace("at") public static native @ByVal Tensor rnn_relu_cell(@Const @ByRef Tensor input, @Const @ByRef Tensor hx, @Const @ByRef Tensor w_ih, @Const @ByRef Tensor w_hh); @@ -47820,7 +47774,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::rnn_tanh_cell(Tensor input, Tensor hx, Tensor w_ih, Tensor w_hh, Tensor? 
b_ih=None, Tensor? b_hh=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor rnn_tanh_cell(@Const @ByRef Tensor input, @Const @ByRef Tensor hx, @Const @ByRef Tensor w_ih, @Const @ByRef Tensor w_hh, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional b_ih, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional b_hh); +@Namespace("at") public static native @ByVal Tensor rnn_tanh_cell(@Const @ByRef Tensor input, @Const @ByRef Tensor hx, @Const @ByRef Tensor w_ih, @Const @ByRef Tensor w_hh, @Const @ByRef(nullValue = "std::optional{}") TensorOptional b_ih, @Const @ByRef(nullValue = "std::optional{}") TensorOptional b_hh); @Namespace("at") public static native @ByVal Tensor rnn_tanh_cell(@Const @ByRef Tensor input, @Const @ByRef Tensor hx, @Const @ByRef Tensor w_ih, @Const @ByRef Tensor w_hh); @@ -48104,11 +48058,11 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::rrelu(Tensor self, Scalar lower=0.125, Scalar upper=0.3333333333333333, bool training=False, Generator? generator=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor rrelu(@Const @ByRef Tensor self, @Const @ByRef(nullValue = "at::Scalar(0.125)") Scalar lower, @Const @ByRef(nullValue = "at::Scalar(0.3333333333333333)") Scalar upper, @Cast("bool") boolean training/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByVal Tensor rrelu(@Const @ByRef Tensor self, @Const @ByRef(nullValue = "at::Scalar(0.125)") Scalar lower, @Const @ByRef(nullValue = "at::Scalar(0.3333333333333333)") Scalar upper, @Cast("bool") boolean training/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); @Namespace("at") public static native @ByVal Tensor rrelu(@Const @ByRef Tensor self); // aten::rrelu_(Tensor(a!) self, Scalar lower=0.125, Scalar upper=0.3333333333333333, bool training=False, Generator? generator=None) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor rrelu_(@ByRef Tensor self, @Const @ByRef(nullValue = "at::Scalar(0.125)") Scalar lower, @Const @ByRef(nullValue = "at::Scalar(0.3333333333333333)") Scalar upper, @Cast("bool") boolean training/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByRef Tensor rrelu_(@ByRef Tensor self, @Const @ByRef(nullValue = "at::Scalar(0.125)") Scalar lower, @Const @ByRef(nullValue = "at::Scalar(0.3333333333333333)") Scalar upper, @Cast("bool") boolean training/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); @Namespace("at") public static native @ByRef Tensor rrelu_(@ByRef Tensor self); @@ -48139,17 +48093,17 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::rrelu_with_noise.out(Tensor self, Tensor noise, Scalar lower=0.125, Scalar upper=0.3333333333333333, bool training=False, Generator? generator=None, *, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor rrelu_with_noise_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor noise, @Const @ByRef(nullValue = "at::Scalar(0.125)") Scalar lower, @Const @ByRef(nullValue = "at::Scalar(0.3333333333333333)") Scalar upper, @Cast("bool") boolean training/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByRef Tensor rrelu_with_noise_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor noise, @Const @ByRef(nullValue = "at::Scalar(0.125)") Scalar lower, @Const @ByRef(nullValue = "at::Scalar(0.3333333333333333)") Scalar upper, @Cast("bool") boolean training/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); @Namespace("at") public static native @ByRef Tensor rrelu_with_noise_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor noise); // aten::rrelu_with_noise.out(Tensor self, Tensor noise, Scalar lower=0.125, Scalar upper=0.3333333333333333, bool training=False, Generator? generator=None, *, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor rrelu_with_noise_outf(@Const @ByRef Tensor self, @Const @ByRef Tensor noise, @Const @ByRef Scalar lower, @Const @ByRef Scalar upper, @Cast("bool") boolean training, @ByVal GeneratorOptional generator, @ByRef Tensor out); // aten::rrelu_with_noise(Tensor self, Tensor noise, Scalar lower=0.125, Scalar upper=0.3333333333333333, bool training=False, Generator? generator=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor rrelu_with_noise(@Const @ByRef Tensor self, @Const @ByRef Tensor noise, @Const @ByRef(nullValue = "at::Scalar(0.125)") Scalar lower, @Const @ByRef(nullValue = "at::Scalar(0.3333333333333333)") Scalar upper, @Cast("bool") boolean training/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByVal Tensor rrelu_with_noise(@Const @ByRef Tensor self, @Const @ByRef Tensor noise, @Const @ByRef(nullValue = "at::Scalar(0.125)") Scalar lower, @Const @ByRef(nullValue = "at::Scalar(0.3333333333333333)") Scalar upper, @Cast("bool") boolean training/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); @Namespace("at") public static native @ByVal Tensor rrelu_with_noise(@Const @ByRef Tensor self, @Const @ByRef Tensor noise); // aten::rrelu_with_noise_(Tensor(a!) self, Tensor noise, Scalar lower=0.125, Scalar upper=0.3333333333333333, bool training=False, Generator? generator=None) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor rrelu_with_noise_(@ByRef Tensor self, @Const @ByRef Tensor noise, @Const @ByRef(nullValue = "at::Scalar(0.125)") Scalar lower, @Const @ByRef(nullValue = "at::Scalar(0.3333333333333333)") Scalar upper, @Cast("bool") boolean training/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByRef Tensor rrelu_with_noise_(@ByRef Tensor self, @Const @ByRef Tensor noise, @Const @ByRef(nullValue = "at::Scalar(0.125)") Scalar lower, @Const @ByRef(nullValue = "at::Scalar(0.3333333333333333)") Scalar upper, @Cast("bool") boolean training/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); @Namespace("at") public static native @ByRef Tensor rrelu_with_noise_(@ByRef Tensor self, @Const @ByRef Tensor noise); @@ -48381,7 +48335,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::scaled_dot_product_attention(Tensor query, Tensor key, Tensor value, Tensor? attn_mask=None, float dropout_p=0.0, bool is_causal=False, *, float? scale=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor scaled_dot_product_attention(@Const @ByRef Tensor query, @Const @ByRef Tensor key, @Const @ByRef Tensor value, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional attn_mask, double dropout_p/*=0.0*/, @Cast("bool") boolean is_causal/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scale); +@Namespace("at") public static native @ByVal Tensor scaled_dot_product_attention(@Const @ByRef Tensor query, @Const @ByRef Tensor key, @Const @ByRef Tensor value, @Const @ByRef(nullValue = "std::optional{}") TensorOptional attn_mask, double dropout_p/*=0.0*/, @Cast("bool") boolean is_causal/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scale); @Namespace("at") public static native @ByVal Tensor scaled_dot_product_attention(@Const @ByRef Tensor query, @Const @ByRef Tensor key, @Const @ByRef Tensor value); @@ -48563,21 +48517,21 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::searchsorted.Tensor(Tensor sorted_sequence, Tensor self, *, bool out_int32=False, bool right=False, str? side=None, Tensor? sorter=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor searchsorted(@Const @ByRef Tensor sorted_sequence, @Const @ByRef Tensor self, @Cast("bool") boolean out_int32/*=false*/, @Cast("bool") boolean right/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional side, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional sorter); +@Namespace("at") public static native @ByVal Tensor searchsorted(@Const @ByRef Tensor sorted_sequence, @Const @ByRef Tensor self, @Cast("bool") boolean out_int32/*=false*/, @Cast("bool") boolean right/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional side, @Const @ByRef(nullValue = "std::optional{}") TensorOptional sorter); @Namespace("at") public static native @ByVal Tensor searchsorted(@Const @ByRef Tensor sorted_sequence, @Const @ByRef Tensor self); // aten::searchsorted.Tensor_out(Tensor sorted_sequence, Tensor self, *, bool out_int32=False, bool right=False, str? side=None, Tensor? sorter=None, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor searchsorted_out(@ByRef Tensor out, @Const @ByRef Tensor sorted_sequence, @Const @ByRef Tensor self, @Cast("bool") boolean out_int32/*=false*/, @Cast("bool") boolean right/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional side, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional sorter); +@Namespace("at") public static native @ByRef Tensor searchsorted_out(@ByRef Tensor out, @Const @ByRef Tensor sorted_sequence, @Const @ByRef Tensor self, @Cast("bool") boolean out_int32/*=false*/, @Cast("bool") boolean right/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional side, @Const @ByRef(nullValue = "std::optional{}") TensorOptional sorter); @Namespace("at") public static native @ByRef Tensor searchsorted_out(@ByRef Tensor out, @Const @ByRef Tensor sorted_sequence, @Const @ByRef Tensor self); // aten::searchsorted.Tensor_out(Tensor sorted_sequence, Tensor self, *, bool out_int32=False, bool right=False, str? side=None, Tensor? sorter=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor searchsorted_outf(@Const @ByRef Tensor sorted_sequence, @Const @ByRef Tensor self, @Cast("bool") boolean out_int32, @Cast("bool") boolean right, @ByVal StringViewOptional side, @Const @ByRef TensorOptional sorter, @ByRef Tensor out); // aten::searchsorted.Scalar(Tensor sorted_sequence, Scalar self, *, bool out_int32=False, bool right=False, str? side=None, Tensor? sorter=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor searchsorted(@Const @ByRef Tensor sorted_sequence, @Const @ByRef Scalar self, @Cast("bool") boolean out_int32/*=false*/, @Cast("bool") boolean right/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional side, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional sorter); +@Namespace("at") public static native @ByVal Tensor searchsorted(@Const @ByRef Tensor sorted_sequence, @Const @ByRef Scalar self, @Cast("bool") boolean out_int32/*=false*/, @Cast("bool") boolean right/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional side, @Const @ByRef(nullValue = "std::optional{}") TensorOptional sorter); @Namespace("at") public static native @ByVal Tensor searchsorted(@Const @ByRef Tensor sorted_sequence, @Const @ByRef Scalar self); // aten::searchsorted.Scalar_out(Tensor sorted_sequence, Scalar self, *, bool out_int32=False, bool right=False, str? side=None, Tensor? sorter=None, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor searchsorted_out(@ByRef Tensor out, @Const @ByRef Tensor sorted_sequence, @Const @ByRef Scalar self, @Cast("bool") boolean out_int32/*=false*/, @Cast("bool") boolean right/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional side, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional sorter); +@Namespace("at") public static native @ByRef Tensor searchsorted_out(@ByRef Tensor out, @Const @ByRef Tensor sorted_sequence, @Const @ByRef Scalar self, @Cast("bool") boolean out_int32/*=false*/, @Cast("bool") boolean right/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") StringViewOptional side, @Const @ByRef(nullValue = "std::optional{}") TensorOptional sorter); @Namespace("at") public static native @ByRef Tensor searchsorted_out(@ByRef Tensor out, @Const @ByRef Tensor sorted_sequence, @Const @ByRef Scalar self); // aten::searchsorted.Scalar_out(Tensor sorted_sequence, Scalar self, *, bool out_int32=False, bool right=False, str? side=None, Tensor? sorter=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor searchsorted_outf(@Const @ByRef Tensor sorted_sequence, @Const @ByRef Scalar self, @Cast("bool") boolean out_int32, @Cast("bool") boolean right, @ByVal StringViewOptional side, @Const @ByRef TensorOptional sorter, @ByRef Tensor out); @@ -48610,15 +48564,15 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::segment_reduce(Tensor data, str reduce, *, Tensor? lengths=None, Tensor? indices=None, Tensor? offsets=None, int axis=0, bool unsafe=False, Scalar? initial=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor segment_reduce(@Const @ByRef Tensor data, @StringView BytePointer reduce, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional lengths, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional indices, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional offsets, @Cast("int64_t") long axis/*=0*/, @Cast("bool") boolean unsafe/*=false*/, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional initial); +@Namespace("at") public static native @ByVal Tensor segment_reduce(@Const @ByRef Tensor data, @StringView BytePointer reduce, @Const @ByRef(nullValue = "std::optional{}") TensorOptional lengths, @Const @ByRef(nullValue = "std::optional{}") TensorOptional indices, @Const @ByRef(nullValue = "std::optional{}") TensorOptional offsets, @Cast("int64_t") long axis/*=0*/, @Cast("bool") boolean unsafe/*=false*/, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional initial); @Namespace("at") public static native @ByVal Tensor segment_reduce(@Const @ByRef Tensor data, @StringView BytePointer reduce); -@Namespace("at") public static native @ByVal Tensor segment_reduce(@Const @ByRef Tensor data, @StringView String reduce, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional lengths, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional indices, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional offsets, @Cast("int64_t") long axis/*=0*/, @Cast("bool") boolean unsafe/*=false*/, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional initial); +@Namespace("at") public static native @ByVal Tensor segment_reduce(@Const @ByRef Tensor data, @StringView String reduce, @Const @ByRef(nullValue = "std::optional{}") TensorOptional lengths, @Const @ByRef(nullValue = "std::optional{}") TensorOptional indices, @Const @ByRef(nullValue = "std::optional{}") 
TensorOptional offsets, @Cast("int64_t") long axis/*=0*/, @Cast("bool") boolean unsafe/*=false*/, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional initial); @Namespace("at") public static native @ByVal Tensor segment_reduce(@Const @ByRef Tensor data, @StringView String reduce); // aten::segment_reduce.out(Tensor data, str reduce, *, Tensor? lengths=None, Tensor? indices=None, Tensor? offsets=None, int axis=0, bool unsafe=False, Scalar? initial=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor segment_reduce_out(@ByRef Tensor out, @Const @ByRef Tensor data, @StringView BytePointer reduce, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional lengths, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional indices, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional offsets, @Cast("int64_t") long axis/*=0*/, @Cast("bool") boolean unsafe/*=false*/, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional initial); +@Namespace("at") public static native @ByRef Tensor segment_reduce_out(@ByRef Tensor out, @Const @ByRef Tensor data, @StringView BytePointer reduce, @Const @ByRef(nullValue = "std::optional{}") TensorOptional lengths, @Const @ByRef(nullValue = "std::optional{}") TensorOptional indices, @Const @ByRef(nullValue = "std::optional{}") TensorOptional offsets, @Cast("int64_t") long axis/*=0*/, @Cast("bool") boolean unsafe/*=false*/, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional initial); @Namespace("at") public static native @ByRef Tensor segment_reduce_out(@ByRef Tensor out, @Const @ByRef Tensor data, @StringView BytePointer reduce); -@Namespace("at") public static native @ByRef Tensor segment_reduce_out(@ByRef Tensor out, @Const @ByRef Tensor data, @StringView String reduce, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional lengths, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional indices, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional offsets, @Cast("int64_t") long axis/*=0*/, @Cast("bool") boolean unsafe/*=false*/, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional initial); +@Namespace("at") public static native @ByRef Tensor segment_reduce_out(@ByRef Tensor out, @Const @ByRef Tensor data, @StringView String reduce, @Const @ByRef(nullValue = "std::optional{}") TensorOptional lengths, @Const @ByRef(nullValue = "std::optional{}") TensorOptional indices, @Const @ByRef(nullValue = "std::optional{}") TensorOptional offsets, @Cast("int64_t") long axis/*=0*/, @Cast("bool") boolean unsafe/*=false*/, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional initial); @Namespace("at") public static native @ByRef Tensor segment_reduce_out(@ByRef Tensor out, @Const @ByRef Tensor data, @StringView String reduce); // aten::segment_reduce.out(Tensor data, str reduce, *, Tensor? lengths=None, Tensor? indices=None, Tensor? offsets=None, int axis=0, bool unsafe=False, Scalar? initial=None, Tensor(a!) out) -> Tensor(a!) 
@Namespace("at") public static native @ByRef Tensor segment_reduce_outf(@Const @ByRef Tensor data, @StringView BytePointer reduce, @Const @ByRef TensorOptional lengths, @Const @ByRef TensorOptional indices, @Const @ByRef TensorOptional offsets, @Cast("int64_t") long axis, @Cast("bool") boolean unsafe, @Const @ByRef ScalarOptional initial, @ByRef Tensor out); @@ -49392,12 +49346,12 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::slice.Tensor(Tensor(a) self, int dim=0, SymInt? start=None, SymInt? end=None, SymInt step=1) -> Tensor(a) -@Namespace("at") public static native @ByVal Tensor slice(@Const @ByRef Tensor self, @Cast("int64_t") long dim/*=0*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional start, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional end, @Cast("int64_t") long step/*=1*/); +@Namespace("at") public static native @ByVal Tensor slice(@Const @ByRef Tensor self, @Cast("int64_t") long dim/*=0*/, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional start, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional end, @Cast("int64_t") long step/*=1*/); @Namespace("at") public static native @ByVal Tensor slice(@Const @ByRef Tensor self); // aten::slice.Tensor(Tensor(a) self, int dim=0, SymInt? start=None, SymInt? end=None, SymInt step=1) -> Tensor(a) -@Namespace("at") public static native @ByVal Tensor slice_symint(@Const @ByRef Tensor self, @Cast("int64_t") long dim/*=0*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional start, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional end, @ByVal(nullValue = "c10::SymInt(1)") SymInt step); +@Namespace("at") public static native @ByVal Tensor slice_symint(@Const @ByRef Tensor self, @Cast("int64_t") long dim/*=0*/, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional start, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional end, @ByVal(nullValue = "c10::SymInt(1)") SymInt step); @Namespace("at") public static native @ByVal Tensor slice_symint(@Const @ByRef Tensor self); @@ -49483,17 +49437,17 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::slice_copy.Tensor(Tensor self, int dim=0, SymInt? start=None, SymInt? end=None, SymInt step=1) -> Tensor -@Namespace("at") public static native @ByVal Tensor slice_copy(@Const @ByRef Tensor self, @Cast("int64_t") long dim/*=0*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional start, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional end, @Cast("int64_t") long step/*=1*/); +@Namespace("at") public static native @ByVal Tensor slice_copy(@Const @ByRef Tensor self, @Cast("int64_t") long dim/*=0*/, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional start, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional end, @Cast("int64_t") long step/*=1*/); @Namespace("at") public static native @ByVal Tensor slice_copy(@Const @ByRef Tensor self); // aten::slice_copy.Tensor(Tensor self, int dim=0, SymInt? start=None, SymInt? 
end=None, SymInt step=1) -> Tensor -@Namespace("at") public static native @ByVal Tensor slice_copy_symint(@Const @ByRef Tensor self, @Cast("int64_t") long dim/*=0*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional start, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional end, @ByVal(nullValue = "c10::SymInt(1)") SymInt step); +@Namespace("at") public static native @ByVal Tensor slice_copy_symint(@Const @ByRef Tensor self, @Cast("int64_t") long dim/*=0*/, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional start, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional end, @ByVal(nullValue = "c10::SymInt(1)") SymInt step); @Namespace("at") public static native @ByVal Tensor slice_copy_symint(@Const @ByRef Tensor self); // aten::slice_copy.Tensor_out(Tensor self, int dim=0, SymInt? start=None, SymInt? end=None, SymInt step=1, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor slice_copy_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Cast("int64_t") long dim/*=0*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional start, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional end, @Cast("int64_t") long step/*=1*/); +@Namespace("at") public static native @ByRef Tensor slice_copy_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Cast("int64_t") long dim/*=0*/, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional start, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional end, @Cast("int64_t") long step/*=1*/); @Namespace("at") public static native @ByRef Tensor slice_copy_out(@ByRef Tensor out, @Const @ByRef Tensor self); @@ -49502,7 +49456,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::slice_copy.Tensor_out(Tensor self, int dim=0, SymInt? start=None, SymInt? end=None, SymInt step=1, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor slice_copy_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Cast("int64_t") long dim/*=0*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional start, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional end, @ByVal(nullValue = "c10::SymInt(1)") SymInt step); +@Namespace("at") public static native @ByRef Tensor slice_copy_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Cast("int64_t") long dim/*=0*/, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional start, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional end, @ByVal(nullValue = "c10::SymInt(1)") SymInt step); @Namespace("at") public static native @ByRef Tensor slice_copy_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self); @@ -49538,12 +49492,12 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::slice_inverse(Tensor(a) self, Tensor src, int dim=0, SymInt? start=None, SymInt? 
end=None, SymInt step=1) -> Tensor(a) -@Namespace("at") public static native @ByVal Tensor slice_inverse(@Const @ByRef Tensor self, @Const @ByRef Tensor src, @Cast("int64_t") long dim/*=0*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional start, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional end, @Cast("int64_t") long step/*=1*/); +@Namespace("at") public static native @ByVal Tensor slice_inverse(@Const @ByRef Tensor self, @Const @ByRef Tensor src, @Cast("int64_t") long dim/*=0*/, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional start, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional end, @Cast("int64_t") long step/*=1*/); @Namespace("at") public static native @ByVal Tensor slice_inverse(@Const @ByRef Tensor self, @Const @ByRef Tensor src); // aten::slice_inverse(Tensor(a) self, Tensor src, int dim=0, SymInt? start=None, SymInt? end=None, SymInt step=1) -> Tensor(a) -@Namespace("at") public static native @ByVal Tensor slice_inverse_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor src, @Cast("int64_t") long dim/*=0*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional start, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional end, @ByVal(nullValue = "c10::SymInt(1)") SymInt step); +@Namespace("at") public static native @ByVal Tensor slice_inverse_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor src, @Cast("int64_t") long dim/*=0*/, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional start, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional end, @ByVal(nullValue = "c10::SymInt(1)") SymInt step); @Namespace("at") public static native @ByVal Tensor slice_inverse_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor src); @@ -49575,17 +49529,17 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::slice_scatter(Tensor self, Tensor src, int dim=0, SymInt? start=None, SymInt? end=None, SymInt step=1) -> Tensor -@Namespace("at") public static native @ByVal Tensor slice_scatter(@Const @ByRef Tensor self, @Const @ByRef Tensor src, @Cast("int64_t") long dim/*=0*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional start, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional end, @Cast("int64_t") long step/*=1*/); +@Namespace("at") public static native @ByVal Tensor slice_scatter(@Const @ByRef Tensor self, @Const @ByRef Tensor src, @Cast("int64_t") long dim/*=0*/, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional start, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional end, @Cast("int64_t") long step/*=1*/); @Namespace("at") public static native @ByVal Tensor slice_scatter(@Const @ByRef Tensor self, @Const @ByRef Tensor src); // aten::slice_scatter(Tensor self, Tensor src, int dim=0, SymInt? start=None, SymInt? 
end=None, SymInt step=1) -> Tensor -@Namespace("at") public static native @ByVal Tensor slice_scatter_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor src, @Cast("int64_t") long dim/*=0*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional start, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional end, @ByVal(nullValue = "c10::SymInt(1)") SymInt step); +@Namespace("at") public static native @ByVal Tensor slice_scatter_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor src, @Cast("int64_t") long dim/*=0*/, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional start, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional end, @ByVal(nullValue = "c10::SymInt(1)") SymInt step); @Namespace("at") public static native @ByVal Tensor slice_scatter_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor src); // aten::slice_scatter.out(Tensor self, Tensor src, int dim=0, SymInt? start=None, SymInt? end=None, SymInt step=1, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor slice_scatter_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor src, @Cast("int64_t") long dim/*=0*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional start, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional end, @Cast("int64_t") long step/*=1*/); +@Namespace("at") public static native @ByRef Tensor slice_scatter_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor src, @Cast("int64_t") long dim/*=0*/, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional start, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional end, @Cast("int64_t") long step/*=1*/); @Namespace("at") public static native @ByRef Tensor slice_scatter_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor src); @@ -49594,7 +49548,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::slice_scatter.out(Tensor self, Tensor src, int dim=0, SymInt? start=None, SymInt? end=None, SymInt step=1, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor slice_scatter_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor src, @Cast("int64_t") long dim/*=0*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional start, @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional end, @ByVal(nullValue = "c10::SymInt(1)") SymInt step); +@Namespace("at") public static native @ByRef Tensor slice_scatter_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor src, @Cast("int64_t") long dim/*=0*/, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional start, @ByVal(nullValue = "std::optional(::std::nullopt)") SymIntOptional end, @ByVal(nullValue = "c10::SymInt(1)") SymInt step); @Namespace("at") public static native @ByRef Tensor slice_scatter_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor src); @@ -49665,9 +49619,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::slow_conv3d.out(Tensor self, Tensor weight, SymInt[3] kernel_size, Tensor? bias=None, SymInt[3] stride=1, SymInt[3] padding=0, *, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor slow_conv3d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal LongArrayRef kernel_size, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding); +@Namespace("at") public static native @ByRef Tensor slow_conv3d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal LongArrayRef kernel_size, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding); @Namespace("at") public static native @ByRef Tensor slow_conv3d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal LongArrayRef kernel_size); -@Namespace("at") public static native @ByRef Tensor slow_conv3d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] kernel_size, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... padding); +@Namespace("at") public static native @ByRef Tensor slow_conv3d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] kernel_size, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... padding); @Namespace("at") public static native @ByRef Tensor slow_conv3d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... kernel_size); @@ -49677,7 +49631,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::slow_conv3d.out(Tensor self, Tensor weight, SymInt[3] kernel_size, Tensor? bias=None, SymInt[3] stride=1, SymInt[3] padding=0, *, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor slow_conv3d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal SymIntArrayRef kernel_size, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding); +@Namespace("at") public static native @ByRef Tensor slow_conv3d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal SymIntArrayRef kernel_size, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding); @Namespace("at") public static native @ByRef Tensor slow_conv3d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal SymIntArrayRef kernel_size); @@ -49686,14 +49640,14 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::slow_conv3d(Tensor self, Tensor weight, SymInt[3] kernel_size, Tensor? bias=None, SymInt[3] stride=1, SymInt[3] padding=0) -> Tensor -@Namespace("at") public static native @ByVal Tensor slow_conv3d(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal LongArrayRef kernel_size, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding); +@Namespace("at") public static native @ByVal Tensor slow_conv3d(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal LongArrayRef kernel_size, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding); @Namespace("at") public static native @ByVal Tensor slow_conv3d(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal LongArrayRef kernel_size); -@Namespace("at") public static native @ByVal Tensor slow_conv3d(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] kernel_size, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... padding); +@Namespace("at") public static native @ByVal Tensor slow_conv3d(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] kernel_size, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... padding); @Namespace("at") public static native @ByVal Tensor slow_conv3d(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... kernel_size); // aten::slow_conv3d(Tensor self, Tensor weight, SymInt[3] kernel_size, Tensor? 
bias=None, SymInt[3] stride=1, SymInt[3] padding=0) -> Tensor -@Namespace("at") public static native @ByVal Tensor slow_conv3d_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal SymIntArrayRef kernel_size, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding); +@Namespace("at") public static native @ByVal Tensor slow_conv3d_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal SymIntArrayRef kernel_size, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding); @Namespace("at") public static native @ByVal Tensor slow_conv3d_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal SymIntArrayRef kernel_size); @@ -49779,21 +49733,21 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::slow_conv_dilated2d(Tensor self, Tensor weight, SymInt[2] kernel_size, Tensor? bias=None, SymInt[2] stride=1, SymInt[2] padding=0, SymInt[2] dilation=1) -> Tensor -@Namespace("at") public static native @ByVal Tensor slow_conv_dilated2d(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal LongArrayRef kernel_size, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation); +@Namespace("at") public static native @ByVal Tensor slow_conv_dilated2d(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal LongArrayRef kernel_size, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation); @Namespace("at") public static native @ByVal Tensor slow_conv_dilated2d(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal LongArrayRef kernel_size); -@Namespace("at") public static native @ByVal Tensor slow_conv_dilated2d(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] kernel_size, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... 
dilation); +@Namespace("at") public static native @ByVal Tensor slow_conv_dilated2d(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] kernel_size, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... dilation); @Namespace("at") public static native @ByVal Tensor slow_conv_dilated2d(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... kernel_size); // aten::slow_conv_dilated2d(Tensor self, Tensor weight, SymInt[2] kernel_size, Tensor? bias=None, SymInt[2] stride=1, SymInt[2] padding=0, SymInt[2] dilation=1) -> Tensor -@Namespace("at") public static native @ByVal Tensor slow_conv_dilated2d_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal SymIntArrayRef kernel_size, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef dilation); +@Namespace("at") public static native @ByVal Tensor slow_conv_dilated2d_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal SymIntArrayRef kernel_size, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef dilation); @Namespace("at") public static native @ByVal Tensor slow_conv_dilated2d_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal SymIntArrayRef kernel_size); // aten::slow_conv_dilated2d.out(Tensor self, Tensor weight, SymInt[2] kernel_size, Tensor? bias=None, SymInt[2] stride=1, SymInt[2] padding=0, SymInt[2] dilation=1, *, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor slow_conv_dilated2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal LongArrayRef kernel_size, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation); +@Namespace("at") public static native @ByRef Tensor slow_conv_dilated2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal LongArrayRef kernel_size, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation); @Namespace("at") public static native @ByRef Tensor slow_conv_dilated2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal LongArrayRef kernel_size); -@Namespace("at") public static native @ByRef Tensor slow_conv_dilated2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] kernel_size, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... dilation); +@Namespace("at") public static native @ByRef Tensor slow_conv_dilated2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] kernel_size, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... dilation); @Namespace("at") public static native @ByRef Tensor slow_conv_dilated2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... kernel_size); @@ -49803,7 +49757,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::slow_conv_dilated2d.out(Tensor self, Tensor weight, SymInt[2] kernel_size, Tensor? bias=None, SymInt[2] stride=1, SymInt[2] padding=0, SymInt[2] dilation=1, *, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor slow_conv_dilated2d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal SymIntArrayRef kernel_size, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef dilation); +@Namespace("at") public static native @ByRef Tensor slow_conv_dilated2d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal SymIntArrayRef kernel_size, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef dilation); @Namespace("at") public static native @ByRef Tensor slow_conv_dilated2d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal SymIntArrayRef kernel_size); @@ -49839,21 +49793,21 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::slow_conv_dilated3d(Tensor self, Tensor weight, SymInt[3] kernel_size, Tensor? bias=None, SymInt[3] stride=1, SymInt[3] padding=0, SymInt[3] dilation=1) -> Tensor -@Namespace("at") public static native @ByVal Tensor slow_conv_dilated3d(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal LongArrayRef kernel_size, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation); +@Namespace("at") public static native @ByVal Tensor slow_conv_dilated3d(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal LongArrayRef kernel_size, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation); @Namespace("at") public static native @ByVal Tensor slow_conv_dilated3d(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal LongArrayRef kernel_size); -@Namespace("at") public static native @ByVal Tensor slow_conv_dilated3d(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] kernel_size, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... 
dilation); +@Namespace("at") public static native @ByVal Tensor slow_conv_dilated3d(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] kernel_size, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... dilation); @Namespace("at") public static native @ByVal Tensor slow_conv_dilated3d(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... kernel_size); // aten::slow_conv_dilated3d(Tensor self, Tensor weight, SymInt[3] kernel_size, Tensor? bias=None, SymInt[3] stride=1, SymInt[3] padding=0, SymInt[3] dilation=1) -> Tensor -@Namespace("at") public static native @ByVal Tensor slow_conv_dilated3d_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal SymIntArrayRef kernel_size, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef dilation); +@Namespace("at") public static native @ByVal Tensor slow_conv_dilated3d_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal SymIntArrayRef kernel_size, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef dilation); @Namespace("at") public static native @ByVal Tensor slow_conv_dilated3d_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal SymIntArrayRef kernel_size); // aten::slow_conv_dilated3d.out(Tensor self, Tensor weight, SymInt[3] kernel_size, Tensor? bias=None, SymInt[3] stride=1, SymInt[3] padding=0, SymInt[3] dilation=1, *, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor slow_conv_dilated3d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal LongArrayRef kernel_size, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation); +@Namespace("at") public static native @ByRef Tensor slow_conv_dilated3d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal LongArrayRef kernel_size, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation); @Namespace("at") public static native @ByRef Tensor slow_conv_dilated3d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal LongArrayRef kernel_size); -@Namespace("at") public static native @ByRef Tensor slow_conv_dilated3d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] kernel_size, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... dilation); +@Namespace("at") public static native @ByRef Tensor slow_conv_dilated3d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] kernel_size, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... dilation); @Namespace("at") public static native @ByRef Tensor slow_conv_dilated3d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... kernel_size); @@ -49863,7 +49817,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::slow_conv_dilated3d.out(Tensor self, Tensor weight, SymInt[3] kernel_size, Tensor? bias=None, SymInt[3] stride=1, SymInt[3] padding=0, SymInt[3] dilation=1, *, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor slow_conv_dilated3d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal SymIntArrayRef kernel_size, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef dilation); +@Namespace("at") public static native @ByRef Tensor slow_conv_dilated3d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal SymIntArrayRef kernel_size, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef dilation); @Namespace("at") public static native @ByRef Tensor slow_conv_dilated3d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal SymIntArrayRef kernel_size); @@ -49899,9 +49853,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::slow_conv_transpose2d.out(Tensor self, Tensor weight, SymInt[2] kernel_size, Tensor? bias=None, SymInt[2] stride=1, SymInt[2] padding=0, SymInt[2] output_padding=0, SymInt[2] dilation=1, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor slow_conv_transpose2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal LongArrayRef kernel_size, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef output_padding, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation); +@Namespace("at") public static native @ByRef Tensor slow_conv_transpose2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal LongArrayRef kernel_size, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef output_padding, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation); @Namespace("at") public static native @ByRef Tensor slow_conv_transpose2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal LongArrayRef kernel_size); -@Namespace("at") public static native @ByRef Tensor slow_conv_transpose2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] kernel_size, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_padding, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", 
"std::vector&"}) @StdVector("int64_t") long... dilation); +@Namespace("at") public static native @ByRef Tensor slow_conv_transpose2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] kernel_size, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_padding, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... dilation); @Namespace("at") public static native @ByRef Tensor slow_conv_transpose2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... kernel_size); @@ -49911,7 +49865,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::slow_conv_transpose2d.out(Tensor self, Tensor weight, SymInt[2] kernel_size, Tensor? bias=None, SymInt[2] stride=1, SymInt[2] padding=0, SymInt[2] output_padding=0, SymInt[2] dilation=1, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor slow_conv_transpose2d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal SymIntArrayRef kernel_size, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef output_padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef dilation); +@Namespace("at") public static native @ByRef Tensor slow_conv_transpose2d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal SymIntArrayRef kernel_size, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef output_padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef dilation); @Namespace("at") public static native @ByRef Tensor slow_conv_transpose2d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal SymIntArrayRef kernel_size); @@ -49920,14 +49874,14 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::slow_conv_transpose2d(Tensor self, Tensor weight, SymInt[2] kernel_size, Tensor? 
bias=None, SymInt[2] stride=1, SymInt[2] padding=0, SymInt[2] output_padding=0, SymInt[2] dilation=1) -> Tensor -@Namespace("at") public static native @ByVal Tensor slow_conv_transpose2d(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal LongArrayRef kernel_size, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef output_padding, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation); +@Namespace("at") public static native @ByVal Tensor slow_conv_transpose2d(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal LongArrayRef kernel_size, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef output_padding, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation); @Namespace("at") public static native @ByVal Tensor slow_conv_transpose2d(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal LongArrayRef kernel_size); -@Namespace("at") public static native @ByVal Tensor slow_conv_transpose2d(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] kernel_size, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_padding, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... dilation); +@Namespace("at") public static native @ByVal Tensor slow_conv_transpose2d(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] kernel_size, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_padding, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... dilation); @Namespace("at") public static native @ByVal Tensor slow_conv_transpose2d(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... kernel_size); // aten::slow_conv_transpose2d(Tensor self, Tensor weight, SymInt[2] kernel_size, Tensor? 
bias=None, SymInt[2] stride=1, SymInt[2] padding=0, SymInt[2] output_padding=0, SymInt[2] dilation=1) -> Tensor -@Namespace("at") public static native @ByVal Tensor slow_conv_transpose2d_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal SymIntArrayRef kernel_size, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef output_padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef dilation); +@Namespace("at") public static native @ByVal Tensor slow_conv_transpose2d_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal SymIntArrayRef kernel_size, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef output_padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef dilation); @Namespace("at") public static native @ByVal Tensor slow_conv_transpose2d_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal SymIntArrayRef kernel_size); @@ -49959,9 +49913,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::slow_conv_transpose3d.out(Tensor self, Tensor weight, SymInt[3] kernel_size, Tensor? bias=None, SymInt[3] stride=1, SymInt[3] padding=0, SymInt[3] output_padding=0, SymInt[3] dilation=1, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor slow_conv_transpose3d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal LongArrayRef kernel_size, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef output_padding, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation); +@Namespace("at") public static native @ByRef Tensor slow_conv_transpose3d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal LongArrayRef kernel_size, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef output_padding, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation); @Namespace("at") public static native @ByRef Tensor slow_conv_transpose3d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal LongArrayRef kernel_size); -@Namespace("at") public static native @ByRef Tensor slow_conv_transpose3d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] kernel_size, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, 
@ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_padding, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... dilation); +@Namespace("at") public static native @ByRef Tensor slow_conv_transpose3d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] kernel_size, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_padding, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... dilation); @Namespace("at") public static native @ByRef Tensor slow_conv_transpose3d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... kernel_size); @@ -49971,7 +49925,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::slow_conv_transpose3d.out(Tensor self, Tensor weight, SymInt[3] kernel_size, Tensor? bias=None, SymInt[3] stride=1, SymInt[3] padding=0, SymInt[3] output_padding=0, SymInt[3] dilation=1, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor slow_conv_transpose3d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal SymIntArrayRef kernel_size, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef output_padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef dilation); +@Namespace("at") public static native @ByRef Tensor slow_conv_transpose3d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal SymIntArrayRef kernel_size, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef output_padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef dilation); @Namespace("at") public static native @ByRef Tensor slow_conv_transpose3d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal SymIntArrayRef kernel_size); @@ -49980,14 +49934,14 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::slow_conv_transpose3d(Tensor self, Tensor weight, SymInt[3] kernel_size, Tensor? 
bias=None, SymInt[3] stride=1, SymInt[3] padding=0, SymInt[3] output_padding=0, SymInt[3] dilation=1) -> Tensor -@Namespace("at") public static native @ByVal Tensor slow_conv_transpose3d(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal LongArrayRef kernel_size, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef output_padding, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation); +@Namespace("at") public static native @ByVal Tensor slow_conv_transpose3d(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal LongArrayRef kernel_size, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef output_padding, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef dilation); @Namespace("at") public static native @ByVal Tensor slow_conv_transpose3d(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal LongArrayRef kernel_size); -@Namespace("at") public static native @ByVal Tensor slow_conv_transpose3d(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] kernel_size, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_padding, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... dilation); +@Namespace("at") public static native @ByVal Tensor slow_conv_transpose3d(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] kernel_size, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] padding, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_padding, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... dilation); @Namespace("at") public static native @ByVal Tensor slow_conv_transpose3d(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... kernel_size); // aten::slow_conv_transpose3d(Tensor self, Tensor weight, SymInt[3] kernel_size, Tensor? 
bias=None, SymInt[3] stride=1, SymInt[3] padding=0, SymInt[3] output_padding=0, SymInt[3] dilation=1) -> Tensor -@Namespace("at") public static native @ByVal Tensor slow_conv_transpose3d_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal SymIntArrayRef kernel_size, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef output_padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef dilation); +@Namespace("at") public static native @ByVal Tensor slow_conv_transpose3d_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal SymIntArrayRef kernel_size, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef output_padding, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef dilation); @Namespace("at") public static native @ByVal Tensor slow_conv_transpose3d_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal SymIntArrayRef kernel_size); @@ -50193,17 +50147,17 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::softmax.int(Tensor self, int dim, ScalarType? dtype=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor softmax(@Const @ByRef Tensor self, @Cast("int64_t") long dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor softmax(@Const @ByRef Tensor self, @Cast("int64_t") long dim, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByVal Tensor softmax(@Const @ByRef Tensor self, @Cast("int64_t") long dim); // aten::softmax.int_out(Tensor self, int dim, ScalarType? dtype=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor softmax_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Cast("int64_t") long dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByRef Tensor softmax_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Cast("int64_t") long dim, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByRef Tensor softmax_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Cast("int64_t") long dim); // aten::softmax.int_out(Tensor self, int dim, ScalarType? dtype=None, *, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor softmax_outf(@Const @ByRef Tensor self, @Cast("int64_t") long dim, @ByVal ScalarTypeOptional dtype, @ByRef Tensor out); // aten::softmax.Dimname(Tensor self, Dimname dim, *, ScalarType? 
dtype=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor softmax(@Const @ByRef Tensor self, @ByVal Dimname dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor softmax(@Const @ByRef Tensor self, @ByVal Dimname dim, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByVal Tensor softmax(@Const @ByRef Tensor self, @ByVal Dimname dim); @@ -50580,15 +50534,15 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @Namespace("at") public static native @ByVal Tensor sparse_coo_tensor(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal ScalarTypeOptional dtype, @ByVal LayoutOptional layout, @ByVal DeviceOptional device, @ByVal BoolOptional pin_memory); // aten::sparse_coo_tensor.indices(Tensor indices, Tensor values, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, bool? is_coalesced=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor sparse_coo_tensor(@Const @ByRef Tensor indices, @Const @ByRef Tensor values, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional is_coalesced); +@Namespace("at") public static native @ByVal Tensor sparse_coo_tensor(@Const @ByRef Tensor indices, @Const @ByRef Tensor values, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") BoolOptional is_coalesced); @Namespace("at") public static native @ByVal Tensor sparse_coo_tensor(@Const @ByRef Tensor indices, @Const @ByRef Tensor values); // aten::sparse_coo_tensor.indices(Tensor indices, Tensor values, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, bool? is_coalesced=None) -> Tensor @Namespace("at") public static native @ByVal Tensor sparse_coo_tensor(@Const @ByRef Tensor indices, @Const @ByRef Tensor values, @ByVal ScalarTypeOptional dtype, @ByVal LayoutOptional layout, @ByVal DeviceOptional device, @ByVal BoolOptional pin_memory, @ByVal BoolOptional is_coalesced); // aten::sparse_coo_tensor.indices_size(Tensor indices, Tensor values, int[] size, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, bool? 
is_coalesced=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor sparse_coo_tensor(@Const @ByRef Tensor indices, @Const @ByRef Tensor values, @ByVal LongArrayRef size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional is_coalesced); +@Namespace("at") public static native @ByVal Tensor sparse_coo_tensor(@Const @ByRef Tensor indices, @Const @ByRef Tensor values, @ByVal LongArrayRef size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") BoolOptional is_coalesced); @Namespace("at") public static native @ByVal Tensor sparse_coo_tensor(@Const @ByRef Tensor indices, @Const @ByRef Tensor values, @ByVal LongArrayRef size); -@Namespace("at") public static native @ByVal Tensor sparse_coo_tensor(@Const @ByRef Tensor indices, @Const @ByRef Tensor values, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional is_coalesced); +@Namespace("at") public static native @ByVal Tensor sparse_coo_tensor(@Const @ByRef Tensor indices, @Const @ByRef Tensor values, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") BoolOptional is_coalesced); @Namespace("at") public static native @ByVal Tensor sparse_coo_tensor(@Const @ByRef Tensor indices, @Const @ByRef Tensor values, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... size); // aten::sparse_coo_tensor.indices_size(Tensor indices, Tensor values, int[] size, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, bool? is_coalesced=None) -> Tensor @Namespace("at") public static native @ByVal Tensor sparse_coo_tensor(@Const @ByRef Tensor indices, @Const @ByRef Tensor values, @ByVal LongArrayRef size, @ByVal ScalarTypeOptional dtype, @ByVal LayoutOptional layout, @ByVal DeviceOptional device, @ByVal BoolOptional pin_memory, @ByVal BoolOptional is_coalesced); @@ -52093,7 +52047,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::special_log_softmax(Tensor self, int dim, *, ScalarType? dtype=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor special_log_softmax(@Const @ByRef Tensor self, @Cast("int64_t") long dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor special_log_softmax(@Const @ByRef Tensor self, @Cast("int64_t") long dim, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByVal Tensor special_log_softmax(@Const @ByRef Tensor self, @Cast("int64_t") long dim); @@ -52124,11 +52078,11 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::special_logit(Tensor self, float? 
eps=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor special_logit(@Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional eps); +@Namespace("at") public static native @ByVal Tensor special_logit(@Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional eps); @Namespace("at") public static native @ByVal Tensor special_logit(@Const @ByRef Tensor self); // aten::special_logit.out(Tensor self, float? eps=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor special_logit_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional eps); +@Namespace("at") public static native @ByRef Tensor special_logit_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional eps); @Namespace("at") public static native @ByRef Tensor special_logit_out(@ByRef Tensor out, @Const @ByRef Tensor self); // aten::special_logit.out(Tensor self, float? eps=None, *, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor special_logit_outf(@Const @ByRef Tensor self, @ByVal DoubleOptional eps, @ByRef Tensor out); @@ -52864,7 +52818,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::special_softmax(Tensor self, int dim, ScalarType? dtype=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor special_softmax(@Const @ByRef Tensor self, @Cast("int64_t") long dim, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor special_softmax(@Const @ByRef Tensor self, @Cast("int64_t") long dim, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByVal Tensor special_softmax(@Const @ByRef Tensor self, @Cast("int64_t") long dim); @@ -53553,9 +53507,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @Namespace("at") public static native @ByVal Tensor std(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean unbiased); // aten::std.correction(Tensor self, int[1]? dim=None, *, Scalar? 
correction=None, bool keepdim=False) -> Tensor -@Namespace("at") public static native @ByVal Tensor std(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); +@Namespace("at") public static native @ByVal Tensor std(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); @Namespace("at") public static native @ByVal Tensor std(@Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal Tensor std(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); +@Namespace("at") public static native @ByVal Tensor std(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); // aten::std.out(Tensor self, int[1]? dim, bool unbiased=True, bool keepdim=False, *, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor std_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRefOptional dim, @Cast("bool") boolean unbiased, @Cast("bool") boolean keepdim/*=false*/); @@ -53567,9 +53521,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @Namespace("at") public static native @ByRef Tensor std_outf(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean unbiased, @Cast("bool") boolean keepdim, @ByRef Tensor out); // aten::std.correction_out(Tensor self, int[1]? dim=None, *, Scalar? correction=None, bool keepdim=False, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor std_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); +@Namespace("at") public static native @ByRef Tensor std_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); @Namespace("at") public static native @ByRef Tensor std_out(@ByRef Tensor out, @Const @ByRef Tensor self); -@Namespace("at") public static native @ByRef Tensor std_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); +@Namespace("at") public static native @ByRef Tensor std_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); // aten::std.correction_out(Tensor self, int[1]? dim=None, *, Scalar? correction=None, bool keepdim=False, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor std_outf(@Const @ByRef Tensor self, @ByVal LongArrayRefOptional dim, @Const @ByRef ScalarOptional correction, @Cast("bool") boolean keepdim, @ByRef Tensor out); @Namespace("at") public static native @ByRef Tensor std_outf(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Const @ByRef ScalarOptional correction, @Cast("bool") boolean keepdim, @ByRef Tensor out); @@ -53590,15 +53544,15 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @Namespace("at") public static native @ByRef Tensor std_outf(@Const @ByRef Tensor self, @ByVal DimnameVector dim, @Cast("bool") boolean unbiased, @Cast("bool") boolean keepdim, @ByRef Tensor out); // aten::std.correction_names(Tensor self, Dimname[1] dim, *, Scalar? 
correction=None, bool keepdim=False) -> Tensor -@Namespace("at") public static native @ByVal Tensor std(@Const @ByRef Tensor self, @ByVal DimnameArrayRef dim, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); +@Namespace("at") public static native @ByVal Tensor std(@Const @ByRef Tensor self, @ByVal DimnameArrayRef dim, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); @Namespace("at") public static native @ByVal Tensor std(@Const @ByRef Tensor self, @ByVal DimnameArrayRef dim); -@Namespace("at") public static native @ByVal Tensor std(@Const @ByRef Tensor self, @ByVal DimnameVector dim, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); +@Namespace("at") public static native @ByVal Tensor std(@Const @ByRef Tensor self, @ByVal DimnameVector dim, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); @Namespace("at") public static native @ByVal Tensor std(@Const @ByRef Tensor self, @ByVal DimnameVector dim); // aten::std.correction_names_out(Tensor self, Dimname[1] dim, *, Scalar? correction=None, bool keepdim=False, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor std_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal DimnameArrayRef dim, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); +@Namespace("at") public static native @ByRef Tensor std_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal DimnameArrayRef dim, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); @Namespace("at") public static native @ByRef Tensor std_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal DimnameArrayRef dim); -@Namespace("at") public static native @ByRef Tensor std_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal DimnameVector dim, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); +@Namespace("at") public static native @ByRef Tensor std_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal DimnameVector dim, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); @Namespace("at") public static native @ByRef Tensor std_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal DimnameVector dim); // aten::std.correction_names_out(Tensor self, Dimname[1] dim, *, Scalar? correction=None, bool keepdim=False, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor std_outf(@Const @ByRef Tensor self, @ByVal DimnameArrayRef dim, @Const @ByRef ScalarOptional correction, @Cast("bool") boolean keepdim, @ByRef Tensor out); @@ -53641,9 +53595,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @Namespace("at") public static native @ByVal T_TensorTensor_T std_mean(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean unbiased); // aten::std_mean.correction(Tensor self, int[1]? dim=None, *, Scalar? 
correction=None, bool keepdim=False) -> (Tensor, Tensor) -@Namespace("at") public static native @ByVal T_TensorTensor_T std_mean(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); +@Namespace("at") public static native @ByVal T_TensorTensor_T std_mean(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); @Namespace("at") public static native @ByVal T_TensorTensor_T std_mean(@Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal T_TensorTensor_T std_mean(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); +@Namespace("at") public static native @ByVal T_TensorTensor_T std_mean(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); // aten::std_mean.names_dim(Tensor self, Dimname[1] dim, bool unbiased=True, bool keepdim=False) -> (Tensor, Tensor) @Namespace("at") public static native @ByVal T_TensorTensor_T std_mean(@Const @ByRef Tensor self, @ByVal DimnameArrayRef dim, @Cast("bool") boolean unbiased, @Cast("bool") boolean keepdim/*=false*/); @@ -53652,15 +53606,15 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @Namespace("at") public static native @ByVal T_TensorTensor_T std_mean(@Const @ByRef Tensor self, @ByVal DimnameVector dim, @Cast("bool") boolean unbiased); // aten::std_mean.correction_names(Tensor self, Dimname[1] dim, *, Scalar? correction=None, bool keepdim=False) -> (Tensor, Tensor) -@Namespace("at") public static native @ByVal T_TensorTensor_T std_mean(@Const @ByRef Tensor self, @ByVal DimnameArrayRef dim, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); +@Namespace("at") public static native @ByVal T_TensorTensor_T std_mean(@Const @ByRef Tensor self, @ByVal DimnameArrayRef dim, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); @Namespace("at") public static native @ByVal T_TensorTensor_T std_mean(@Const @ByRef Tensor self, @ByVal DimnameArrayRef dim); -@Namespace("at") public static native @ByVal T_TensorTensor_T std_mean(@Const @ByRef Tensor self, @ByVal DimnameVector dim, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); +@Namespace("at") public static native @ByVal T_TensorTensor_T std_mean(@Const @ByRef Tensor self, @ByVal DimnameVector dim, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); @Namespace("at") public static native @ByVal T_TensorTensor_T std_mean(@Const @ByRef Tensor self, @ByVal DimnameVector dim); // aten::std_mean.correction_out(Tensor self, int[1]? dim=None, *, Scalar? 
correction=None, bool keepdim=False, Tensor(a!) out0, Tensor(b!) out1) -> (Tensor(a!), Tensor(b!)) -@Namespace("at") public static native @ByVal T_TensorTensor_T std_mean_out(@ByRef Tensor out0, @ByRef Tensor out1, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); +@Namespace("at") public static native @ByVal T_TensorTensor_T std_mean_out(@ByRef Tensor out0, @ByRef Tensor out1, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); @Namespace("at") public static native @ByVal T_TensorTensor_T std_mean_out(@ByRef Tensor out0, @ByRef Tensor out1, @Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal T_TensorTensor_T std_mean_out(@ByRef Tensor out0, @ByRef Tensor out1, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); +@Namespace("at") public static native @ByVal T_TensorTensor_T std_mean_out(@ByRef Tensor out0, @ByRef Tensor out1, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); // aten::std_mean.correction_out(Tensor self, int[1]? dim=None, *, Scalar? correction=None, bool keepdim=False, Tensor(a!) out0, Tensor(b!) out1) -> (Tensor(a!), Tensor(b!)) @Namespace("at") public static native @ByVal T_TensorTensor_T std_mean_outf(@Const @ByRef Tensor self, @ByVal LongArrayRefOptional dim, @Const @ByRef ScalarOptional correction, @Cast("bool") boolean keepdim, @ByRef Tensor out0, @ByRef Tensor out1); @Namespace("at") public static native @ByVal T_TensorTensor_T std_mean_outf(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Const @ByRef ScalarOptional correction, @Cast("bool") boolean keepdim, @ByRef Tensor out0, @ByRef Tensor out1); @@ -53693,11 +53647,11 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::stft(Tensor self, int n_fft, int? hop_length=None, int? win_length=None, Tensor? window=None, bool normalized=False, bool? onesided=None, bool? 
return_complex=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor stft(@Const @ByRef Tensor self, @Cast("int64_t") long n_fft, @ByVal LongOptional hop_length, @ByVal LongOptional win_length, @Const @ByRef TensorOptional window, @Cast("bool") boolean normalized, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional onesided, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional return_complex); +@Namespace("at") public static native @ByVal Tensor stft(@Const @ByRef Tensor self, @Cast("int64_t") long n_fft, @ByVal LongOptional hop_length, @ByVal LongOptional win_length, @Const @ByRef TensorOptional window, @Cast("bool") boolean normalized, @ByVal(nullValue = "std::optional(::std::nullopt)") BoolOptional onesided, @ByVal(nullValue = "std::optional(::std::nullopt)") BoolOptional return_complex); // aten::stft.center(Tensor self, int n_fft, int? hop_length=None, int? win_length=None, Tensor? window=None, bool center=True, str pad_mode="reflect", bool normalized=False, bool? onesided=None, bool? return_complex=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor stft(@Const @ByRef Tensor self, @Cast("int64_t") long n_fft, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional hop_length, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional win_length, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional window, @Cast("bool") boolean center/*=true*/, @StringView BytePointer pad_mode/*="reflect"*/, @Cast("bool") boolean normalized/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional onesided, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional return_complex); -@Namespace("at") public static native @ByVal Tensor stft(@Const @ByRef Tensor self, @Cast("int64_t") long n_fft, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional hop_length, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional win_length, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional window, @Cast("bool") boolean center/*=true*/, @StringView String pad_mode/*="reflect"*/, @Cast("bool") boolean normalized/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional onesided, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional return_complex); +@Namespace("at") public static native @ByVal Tensor stft(@Const @ByRef Tensor self, @Cast("int64_t") long n_fft, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional hop_length, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional win_length, @Const @ByRef(nullValue = "std::optional{}") TensorOptional window, @Cast("bool") boolean center/*=true*/, @StringView BytePointer pad_mode/*="reflect"*/, @Cast("bool") boolean normalized/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") BoolOptional onesided, @ByVal(nullValue = "std::optional(::std::nullopt)") BoolOptional return_complex); +@Namespace("at") public static native @ByVal Tensor stft(@Const @ByRef Tensor self, @Cast("int64_t") long n_fft, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional hop_length, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional win_length, @Const @ByRef(nullValue = "std::optional{}") TensorOptional window, @Cast("bool") boolean center/*=true*/, @StringView String pad_mode/*="reflect"*/, @Cast("bool") boolean normalized/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") BoolOptional onesided, @ByVal(nullValue = "std::optional(::std::nullopt)") BoolOptional return_complex); @@ -53848,41 
+53802,41 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::sum(Tensor self, *, ScalarType? dtype=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor sum(@Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor sum(@Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByVal Tensor sum(@Const @ByRef Tensor self); // aten::sum.dim_IntList(Tensor self, int[1]? dim, bool keepdim=False, *, ScalarType? dtype=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor sum(@Const @ByRef Tensor self, @ByVal LongArrayRefOptional dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor sum(@Const @ByRef Tensor self, @ByVal LongArrayRefOptional dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByVal Tensor sum(@Const @ByRef Tensor self, @ByVal LongArrayRefOptional dim); -@Namespace("at") public static native @ByVal Tensor sum(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor sum(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByVal Tensor sum(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... dim); // aten::sum.dim_DimnameList(Tensor self, Dimname[1] dim, bool keepdim=False, *, ScalarType? dtype=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor sum(@Const @ByRef Tensor self, @ByVal DimnameArrayRef dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor sum(@Const @ByRef Tensor self, @ByVal DimnameArrayRef dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByVal Tensor sum(@Const @ByRef Tensor self, @ByVal DimnameArrayRef dim); -@Namespace("at") public static native @ByVal Tensor sum(@Const @ByRef Tensor self, @ByVal DimnameVector dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByVal Tensor sum(@Const @ByRef Tensor self, @ByVal DimnameVector dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByVal Tensor sum(@Const @ByRef Tensor self, @ByVal DimnameVector dim); // aten::sum.IntList_out(Tensor self, int[1]? dim, bool keepdim=False, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor sum_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRefOptional dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByRef Tensor sum_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRefOptional dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByRef Tensor sum_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRefOptional dim); -@Namespace("at") public static native @ByRef Tensor sum_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByRef Tensor sum_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByRef Tensor sum_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... dim); // aten::sum.IntList_out(Tensor self, int[1]? dim, bool keepdim=False, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor sum_outf(@Const @ByRef Tensor self, @ByVal LongArrayRefOptional dim, @Cast("bool") boolean keepdim, @ByVal ScalarTypeOptional dtype, @ByRef Tensor out); @Namespace("at") public static native @ByRef Tensor sum_outf(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean keepdim, @ByVal ScalarTypeOptional dtype, @ByRef Tensor out); // aten::sum.DimnameList_out(Tensor self, Dimname[1] dim, bool keepdim=False, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor sum_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal DimnameArrayRef dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByRef Tensor sum_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal DimnameArrayRef dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByRef Tensor sum_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal DimnameArrayRef dim); -@Namespace("at") public static native @ByRef Tensor sum_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal DimnameVector dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByRef Tensor sum_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal DimnameVector dim, @Cast("bool") boolean keepdim/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByRef Tensor sum_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal DimnameVector dim); // aten::sum.DimnameList_out(Tensor self, Dimname[1] dim, bool keepdim=False, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor sum_outf(@Const @ByRef Tensor self, @ByVal DimnameArrayRef dim, @Cast("bool") boolean keepdim, @ByVal ScalarTypeOptional dtype, @ByRef Tensor out); @Namespace("at") public static native @ByRef Tensor sum_outf(@Const @ByRef Tensor self, @ByVal DimnameVector dim, @Cast("bool") boolean keepdim, @ByVal ScalarTypeOptional dtype, @ByRef Tensor out); // aten::sum.out(Tensor self, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor sum_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByRef Tensor sum_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByRef Tensor sum_out(@ByRef Tensor out, @Const @ByRef Tensor self); // aten::sum.out(Tensor self, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor sum_outf(@Const @ByRef Tensor self, @ByVal ScalarTypeOptional dtype, @ByRef Tensor out); @@ -54040,7 +53994,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::sym_constrain_range(Scalar size, *, int? min=None, int? max=None) -> () -@Namespace("at") public static native void sym_constrain_range(@Const @ByRef Scalar size, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional min, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional max); +@Namespace("at") public static native void sym_constrain_range(@Const @ByRef Scalar size, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional min, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional max); @Namespace("at") public static native void sym_constrain_range(@Const @ByRef Scalar size); @@ -54071,7 +54025,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::sym_constrain_range_for_size(Scalar size, *, int? min=None, int? 
max=None) -> () -@Namespace("at") public static native void sym_constrain_range_for_size(@Const @ByRef Scalar size, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional min, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional max); +@Namespace("at") public static native void sym_constrain_range_for_size(@Const @ByRef Scalar size, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional min, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional max); @Namespace("at") public static native void sym_constrain_range_for_size(@Const @ByRef Scalar size); @@ -54322,13 +54276,13 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::take_along_dim.out(Tensor self, Tensor indices, int? dim=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor take_along_dim_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor indices, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim); +@Namespace("at") public static native @ByRef Tensor take_along_dim_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor indices, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim); @Namespace("at") public static native @ByRef Tensor take_along_dim_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor indices); // aten::take_along_dim.out(Tensor self, Tensor indices, int? dim=None, *, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor take_along_dim_outf(@Const @ByRef Tensor self, @Const @ByRef Tensor indices, @ByVal LongOptional dim, @ByRef Tensor out); // aten::take_along_dim(Tensor self, Tensor indices, int? dim=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor take_along_dim(@Const @ByRef Tensor self, @Const @ByRef Tensor indices, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim); +@Namespace("at") public static native @ByVal Tensor take_along_dim(@Const @ByRef Tensor self, @Const @ByRef Tensor indices, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim); @Namespace("at") public static native @ByVal Tensor take_along_dim(@Const @ByRef Tensor self, @Const @ByRef Tensor indices); @@ -54645,9 +54599,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::thnn_conv2d.out(Tensor self, Tensor weight, SymInt[2] kernel_size, Tensor? bias=None, SymInt[2] stride=1, SymInt[2] padding=0, *, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor thnn_conv2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal LongArrayRef kernel_size, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding); +@Namespace("at") public static native @ByRef Tensor thnn_conv2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal LongArrayRef kernel_size, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding); @Namespace("at") public static native @ByRef Tensor thnn_conv2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal LongArrayRef kernel_size); -@Namespace("at") public static native @ByRef Tensor thnn_conv2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] kernel_size, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... padding); +@Namespace("at") public static native @ByRef Tensor thnn_conv2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] kernel_size, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... padding); @Namespace("at") public static native @ByRef Tensor thnn_conv2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... kernel_size); @@ -54657,7 +54611,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::thnn_conv2d.out(Tensor self, Tensor weight, SymInt[2] kernel_size, Tensor? bias=None, SymInt[2] stride=1, SymInt[2] padding=0, *, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor thnn_conv2d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal SymIntArrayRef kernel_size, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding); +@Namespace("at") public static native @ByRef Tensor thnn_conv2d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal SymIntArrayRef kernel_size, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding); @Namespace("at") public static native @ByRef Tensor thnn_conv2d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal SymIntArrayRef kernel_size); @@ -54666,14 +54620,14 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::thnn_conv2d(Tensor self, Tensor weight, SymInt[2] kernel_size, Tensor? bias=None, SymInt[2] stride=1, SymInt[2] padding=0) -> Tensor -@Namespace("at") public static native @ByVal Tensor thnn_conv2d(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal LongArrayRef kernel_size, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding); +@Namespace("at") public static native @ByVal Tensor thnn_conv2d(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal LongArrayRef kernel_size, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") LongArrayRef stride, @ByVal(nullValue = "at::IntArrayRef(0)") LongArrayRef padding); @Namespace("at") public static native @ByVal Tensor thnn_conv2d(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal LongArrayRef kernel_size); -@Namespace("at") public static native @ByVal Tensor thnn_conv2d(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] kernel_size, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... padding); +@Namespace("at") public static native @ByVal Tensor thnn_conv2d(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] kernel_size, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "at::IntArrayRef(1)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] stride, @ByVal(nullValue = "at::IntArrayRef(0)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... padding); @Namespace("at") public static native @ByVal Tensor thnn_conv2d(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... kernel_size); // aten::thnn_conv2d(Tensor self, Tensor weight, SymInt[2] kernel_size, Tensor? 
bias=None, SymInt[2] stride=1, SymInt[2] padding=0) -> Tensor -@Namespace("at") public static native @ByVal Tensor thnn_conv2d_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal SymIntArrayRef kernel_size, @Const @ByRef(nullValue = "c10::optional{}") TensorOptional bias, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding); +@Namespace("at") public static native @ByVal Tensor thnn_conv2d_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal SymIntArrayRef kernel_size, @Const @ByRef(nullValue = "std::optional{}") TensorOptional bias, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(1))") SymIntArrayRef stride, @ByVal(nullValue = "c10::SymIntArrayRef(c10::SymInt(0))") SymIntArrayRef padding); @Namespace("at") public static native @ByVal Tensor thnn_conv2d_symint(@Const @ByRef Tensor self, @Const @ByRef Tensor weight, @ByVal SymIntArrayRef kernel_size); @@ -54870,7 +54824,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::to_dense_backward(Tensor grad, Tensor input, bool? masked_grad=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor to_dense_backward(@Const @ByRef Tensor grad, @Const @ByRef Tensor input, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional masked_grad); +@Namespace("at") public static native @ByVal Tensor to_dense_backward(@Const @ByRef Tensor grad, @Const @ByRef Tensor input, @ByVal(nullValue = "std::optional(::std::nullopt)") BoolOptional masked_grad); @Namespace("at") public static native @ByVal Tensor to_dense_backward(@Const @ByRef Tensor grad, @Const @ByRef Tensor input); @@ -54901,7 +54855,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::to_mkldnn.out(Tensor self, ScalarType? dtype=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor to_mkldnn_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype); +@Namespace("at") public static native @ByRef Tensor to_mkldnn_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") ScalarTypeOptional dtype); @Namespace("at") public static native @ByRef Tensor to_mkldnn_out(@ByRef Tensor out, @Const @ByRef Tensor self); // aten::to_mkldnn.out(Tensor self, ScalarType? dtype=None, *, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor to_mkldnn_outf(@Const @ByRef Tensor self, @ByVal ScalarTypeOptional dtype, @ByRef Tensor out); @@ -54965,9 +54919,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::to_padded_tensor.out(Tensor self, float padding, SymInt[]? output_size=None, *, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor to_padded_tensor_out(@ByRef Tensor out, @Const @ByRef Tensor self, double padding, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional output_size); +@Namespace("at") public static native @ByRef Tensor to_padded_tensor_out(@ByRef Tensor out, @Const @ByRef Tensor self, double padding, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional output_size); @Namespace("at") public static native @ByRef Tensor to_padded_tensor_out(@ByRef Tensor out, @Const @ByRef Tensor self, double padding); -@Namespace("at") public static native @ByRef Tensor to_padded_tensor_out(@ByRef Tensor out, @Const @ByRef Tensor self, double padding, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... output_size); +@Namespace("at") public static native @ByRef Tensor to_padded_tensor_out(@ByRef Tensor out, @Const @ByRef Tensor self, double padding, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long... output_size); // aten::to_padded_tensor.out(Tensor self, float padding, SymInt[]? output_size=None, *, Tensor(a!) out) -> Tensor(a!) @@ -54976,7 +54930,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::to_padded_tensor.out(Tensor self, float padding, SymInt[]? output_size=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor to_padded_tensor_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, double padding, @ByVal(nullValue = "at::OptionalSymIntArrayRef(c10::nullopt)") SymIntArrayRefOptional output_size); +@Namespace("at") public static native @ByRef Tensor to_padded_tensor_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, double padding, @ByVal(nullValue = "at::OptionalSymIntArrayRef(::std::nullopt)") SymIntArrayRefOptional output_size); @Namespace("at") public static native @ByRef Tensor to_padded_tensor_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, double padding); @@ -56008,13 +55962,13 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::uniform.out(Tensor self, float from=0, float to=1, *, Generator? generator=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor uniform_out(@ByRef Tensor out, @Const @ByRef Tensor self, double from/*=0*/, double to/*=1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByRef Tensor uniform_out(@ByRef Tensor out, @Const @ByRef Tensor self, double from/*=0*/, double to/*=1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); @Namespace("at") public static native @ByRef Tensor uniform_out(@ByRef Tensor out, @Const @ByRef Tensor self); // aten::uniform.out(Tensor self, float from=0, float to=1, *, Generator? generator=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor uniform_outf(@Const @ByRef Tensor self, double from, double to, @ByVal GeneratorOptional generator, @ByRef Tensor out); // aten::uniform(Tensor self, float from=0, float to=1, *, Generator? 
generator=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor uniform(@Const @ByRef Tensor self, double from/*=0*/, double to/*=1*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator); +@Namespace("at") public static native @ByVal Tensor uniform(@Const @ByRef Tensor self, double from/*=0*/, double to/*=1*/, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator); @Namespace("at") public static native @ByVal Tensor uniform(@Const @ByRef Tensor self); @@ -56045,11 +55999,11 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::unique_consecutive(Tensor self, bool return_inverse=False, bool return_counts=False, int? dim=None) -> (Tensor, Tensor, Tensor) -@Namespace("at") public static native @ByVal T_TensorTensorTensor_T unique_consecutive(@Const @ByRef Tensor self, @Cast("bool") boolean return_inverse/*=false*/, @Cast("bool") boolean return_counts/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim); +@Namespace("at") public static native @ByVal T_TensorTensorTensor_T unique_consecutive(@Const @ByRef Tensor self, @Cast("bool") boolean return_inverse/*=false*/, @Cast("bool") boolean return_counts/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim); @Namespace("at") public static native @ByVal T_TensorTensorTensor_T unique_consecutive(@Const @ByRef Tensor self); // aten::unique_consecutive.out(Tensor self, bool return_inverse=False, bool return_counts=False, int? dim=None, *, Tensor(a!) out0, Tensor(b!) out1, Tensor(c!) out2) -> (Tensor(a!), Tensor(b!), Tensor(c!)) -@Namespace("at") public static native @ByVal T_TensorTensorTensor_T unique_consecutive_out(@ByRef Tensor out0, @ByRef Tensor out1, @ByRef Tensor out2, @Const @ByRef Tensor self, @Cast("bool") boolean return_inverse/*=false*/, @Cast("bool") boolean return_counts/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional dim); +@Namespace("at") public static native @ByVal T_TensorTensorTensor_T unique_consecutive_out(@ByRef Tensor out0, @ByRef Tensor out1, @ByRef Tensor out2, @Const @ByRef Tensor self, @Cast("bool") boolean return_inverse/*=false*/, @Cast("bool") boolean return_counts/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional dim); @Namespace("at") public static native @ByVal T_TensorTensorTensor_T unique_consecutive_out(@ByRef Tensor out0, @ByRef Tensor out1, @ByRef Tensor out2, @Const @ByRef Tensor self); // aten::unique_consecutive.out(Tensor self, bool return_inverse=False, bool return_counts=False, int? dim=None, *, Tensor(a!) out0, Tensor(b!) out1, Tensor(c!) out2) -> (Tensor(a!), Tensor(b!), Tensor(c!)) @Namespace("at") public static native @ByVal T_TensorTensorTensor_T unique_consecutive_outf(@Const @ByRef Tensor self, @Cast("bool") boolean return_inverse, @Cast("bool") boolean return_counts, @ByVal LongOptional dim, @ByRef Tensor out0, @ByRef Tensor out1, @ByRef Tensor out2); @@ -56386,9 +56340,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_bicubic2d.out(Tensor self, SymInt[2] output_size, bool align_corners, float? scales_h=None, float? scales_w=None, *, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor upsample_bicubic2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRef output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByRef Tensor upsample_bicubic2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRef output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByRef Tensor upsample_bicubic2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRef output_size, @Cast("bool") boolean align_corners); -@Namespace("at") public static native @ByRef Tensor upsample_bicubic2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByRef Tensor upsample_bicubic2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByRef Tensor upsample_bicubic2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @Cast("bool") boolean align_corners); @@ -56398,7 +56352,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_bicubic2d.out(Tensor self, SymInt[2] output_size, bool align_corners, float? scales_h=None, float? scales_w=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor upsample_bicubic2d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByRef Tensor upsample_bicubic2d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByRef Tensor upsample_bicubic2d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size, @Cast("bool") boolean align_corners); @@ -56407,14 +56361,14 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_bicubic2d(Tensor self, SymInt[2] output_size, bool align_corners, float? scales_h=None, float? 
scales_w=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor upsample_bicubic2d(@Const @ByRef Tensor self, @ByVal LongArrayRef output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByVal Tensor upsample_bicubic2d(@Const @ByRef Tensor self, @ByVal LongArrayRef output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByVal Tensor upsample_bicubic2d(@Const @ByRef Tensor self, @ByVal LongArrayRef output_size, @Cast("bool") boolean align_corners); -@Namespace("at") public static native @ByVal Tensor upsample_bicubic2d(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByVal Tensor upsample_bicubic2d(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByVal Tensor upsample_bicubic2d(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @Cast("bool") boolean align_corners); // aten::upsample_bicubic2d(Tensor self, SymInt[2] output_size, bool align_corners, float? scales_h=None, float? scales_w=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor upsample_bicubic2d_symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByVal Tensor upsample_bicubic2d_symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByVal Tensor upsample_bicubic2d_symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size, @Cast("bool") boolean align_corners); @@ -56446,9 +56400,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_bicubic2d_backward.grad_input(Tensor grad_output, SymInt[2] output_size, SymInt[4] input_size, bool align_corners, float? scales_h=None, float? scales_w=None, *, Tensor(a!) grad_input) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor upsample_bicubic2d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByRef Tensor upsample_bicubic2d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByRef Tensor upsample_bicubic2d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size, @Cast("bool") boolean align_corners); -@Namespace("at") public static native @ByRef Tensor upsample_bicubic2d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByRef Tensor upsample_bicubic2d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByRef Tensor upsample_bicubic2d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] input_size, @Cast("bool") boolean align_corners); @@ -56458,7 +56412,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_bicubic2d_backward.grad_input(Tensor grad_output, SymInt[2] output_size, SymInt[4] input_size, bool align_corners, float? scales_h=None, float? scales_w=None, *, Tensor(a!) grad_input) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor upsample_bicubic2d_backward_symint_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByRef Tensor upsample_bicubic2d_backward_symint_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByRef Tensor upsample_bicubic2d_backward_symint_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size, @Cast("bool") boolean align_corners); @@ -56467,14 +56421,14 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_bicubic2d_backward(Tensor grad_output, SymInt[2] output_size, SymInt[4] input_size, bool align_corners, float? scales_h=None, float? scales_w=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor upsample_bicubic2d_backward(@Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByVal Tensor upsample_bicubic2d_backward(@Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByVal Tensor upsample_bicubic2d_backward(@Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size, @Cast("bool") boolean align_corners); -@Namespace("at") public static native @ByVal Tensor upsample_bicubic2d_backward(@Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByVal Tensor upsample_bicubic2d_backward(@Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByVal Tensor upsample_bicubic2d_backward(@Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal 
@Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] input_size, @Cast("bool") boolean align_corners); // aten::upsample_bicubic2d_backward(Tensor grad_output, SymInt[2] output_size, SymInt[4] input_size, bool align_corners, float? scales_h=None, float? scales_w=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor upsample_bicubic2d_backward_symint(@Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByVal Tensor upsample_bicubic2d_backward_symint(@Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByVal Tensor upsample_bicubic2d_backward_symint(@Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size, @Cast("bool") boolean align_corners); @@ -56516,9 +56470,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_bilinear2d.out(Tensor self, SymInt[2] output_size, bool align_corners, float? scales_h=None, float? scales_w=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor upsample_bilinear2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRef output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByRef Tensor upsample_bilinear2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRef output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByRef Tensor upsample_bilinear2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRef output_size, @Cast("bool") boolean align_corners); -@Namespace("at") public static native @ByRef Tensor upsample_bilinear2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByRef Tensor upsample_bilinear2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByRef Tensor upsample_bilinear2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @Cast("bool") boolean align_corners); @@ -56528,7 +56482,7 @@ 
public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_bilinear2d.out(Tensor self, SymInt[2] output_size, bool align_corners, float? scales_h=None, float? scales_w=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor upsample_bilinear2d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByRef Tensor upsample_bilinear2d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByRef Tensor upsample_bilinear2d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size, @Cast("bool") boolean align_corners); @@ -56537,14 +56491,14 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_bilinear2d(Tensor self, SymInt[2] output_size, bool align_corners, float? scales_h=None, float? scales_w=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor upsample_bilinear2d(@Const @ByRef Tensor self, @ByVal LongArrayRef output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByVal Tensor upsample_bilinear2d(@Const @ByRef Tensor self, @ByVal LongArrayRef output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByVal Tensor upsample_bilinear2d(@Const @ByRef Tensor self, @ByVal LongArrayRef output_size, @Cast("bool") boolean align_corners); -@Namespace("at") public static native @ByVal Tensor upsample_bilinear2d(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByVal Tensor upsample_bilinear2d(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByVal Tensor upsample_bilinear2d(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @Cast("bool") boolean align_corners); // aten::upsample_bilinear2d(Tensor self, SymInt[2] output_size, bool align_corners, float? scales_h=None, float? 
scales_w=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor upsample_bilinear2d_symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByVal Tensor upsample_bilinear2d_symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByVal Tensor upsample_bilinear2d_symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size, @Cast("bool") boolean align_corners); @@ -56576,9 +56530,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_bilinear2d_backward.grad_input(Tensor grad_output, SymInt[2] output_size, SymInt[4] input_size, bool align_corners, float? scales_h=None, float? scales_w=None, *, Tensor(a!) grad_input) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor upsample_bilinear2d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByRef Tensor upsample_bilinear2d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByRef Tensor upsample_bilinear2d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size, @Cast("bool") boolean align_corners); -@Namespace("at") public static native @ByRef Tensor upsample_bilinear2d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByRef Tensor upsample_bilinear2d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByRef Tensor upsample_bilinear2d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, 
@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] input_size, @Cast("bool") boolean align_corners); @@ -56588,7 +56542,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_bilinear2d_backward.grad_input(Tensor grad_output, SymInt[2] output_size, SymInt[4] input_size, bool align_corners, float? scales_h=None, float? scales_w=None, *, Tensor(a!) grad_input) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor upsample_bilinear2d_backward_symint_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByRef Tensor upsample_bilinear2d_backward_symint_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByRef Tensor upsample_bilinear2d_backward_symint_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size, @Cast("bool") boolean align_corners); @@ -56597,14 +56551,14 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_bilinear2d_backward(Tensor grad_output, SymInt[2] output_size, SymInt[4] input_size, bool align_corners, float? scales_h=None, float? scales_w=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor upsample_bilinear2d_backward(@Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByVal Tensor upsample_bilinear2d_backward(@Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByVal Tensor upsample_bilinear2d_backward(@Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size, @Cast("bool") boolean align_corners); -@Namespace("at") public static native @ByVal Tensor upsample_bilinear2d_backward(@Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByVal Tensor upsample_bilinear2d_backward(@Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) 
@StdVector("int64_t") long[] input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByVal Tensor upsample_bilinear2d_backward(@Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] input_size, @Cast("bool") boolean align_corners); // aten::upsample_bilinear2d_backward(Tensor grad_output, SymInt[2] output_size, SymInt[4] input_size, bool align_corners, float? scales_h=None, float? scales_w=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor upsample_bilinear2d_backward_symint(@Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByVal Tensor upsample_bilinear2d_backward_symint(@Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByVal Tensor upsample_bilinear2d_backward_symint(@Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size, @Cast("bool") boolean align_corners); @@ -56646,9 +56600,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_linear1d.out(Tensor self, SymInt[1] output_size, bool align_corners, float? scales=None, *, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor upsample_linear1d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRef output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales); +@Namespace("at") public static native @ByRef Tensor upsample_linear1d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRef output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales); @Namespace("at") public static native @ByRef Tensor upsample_linear1d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRef output_size, @Cast("bool") boolean align_corners); -@Namespace("at") public static native @ByRef Tensor upsample_linear1d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales); +@Namespace("at") public static native @ByRef Tensor upsample_linear1d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales); @Namespace("at") public static native @ByRef Tensor upsample_linear1d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @Cast("bool") boolean align_corners); @@ -56658,7 +56612,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_linear1d.out(Tensor self, SymInt[1] output_size, bool align_corners, float? scales=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor upsample_linear1d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales); +@Namespace("at") public static native @ByRef Tensor upsample_linear1d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales); @Namespace("at") public static native @ByRef Tensor upsample_linear1d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size, @Cast("bool") boolean align_corners); @@ -56667,14 +56621,14 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_linear1d(Tensor self, SymInt[1] output_size, bool align_corners, float? 
scales=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor upsample_linear1d(@Const @ByRef Tensor self, @ByVal LongArrayRef output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales); +@Namespace("at") public static native @ByVal Tensor upsample_linear1d(@Const @ByRef Tensor self, @ByVal LongArrayRef output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales); @Namespace("at") public static native @ByVal Tensor upsample_linear1d(@Const @ByRef Tensor self, @ByVal LongArrayRef output_size, @Cast("bool") boolean align_corners); -@Namespace("at") public static native @ByVal Tensor upsample_linear1d(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales); +@Namespace("at") public static native @ByVal Tensor upsample_linear1d(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales); @Namespace("at") public static native @ByVal Tensor upsample_linear1d(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @Cast("bool") boolean align_corners); // aten::upsample_linear1d(Tensor self, SymInt[1] output_size, bool align_corners, float? scales=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor upsample_linear1d_symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales); +@Namespace("at") public static native @ByVal Tensor upsample_linear1d_symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales); @Namespace("at") public static native @ByVal Tensor upsample_linear1d_symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size, @Cast("bool") boolean align_corners); @@ -56706,9 +56660,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_linear1d_backward.grad_input(Tensor grad_output, SymInt[1] output_size, SymInt[3] input_size, bool align_corners, float? scales=None, *, Tensor(a!) grad_input) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor upsample_linear1d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales); +@Namespace("at") public static native @ByRef Tensor upsample_linear1d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales); @Namespace("at") public static native @ByRef Tensor upsample_linear1d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size, @Cast("bool") boolean align_corners); -@Namespace("at") public static native @ByRef Tensor upsample_linear1d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales); +@Namespace("at") public static native @ByRef Tensor upsample_linear1d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales); @Namespace("at") public static native @ByRef Tensor upsample_linear1d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] input_size, @Cast("bool") boolean align_corners); @@ -56718,7 +56672,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_linear1d_backward.grad_input(Tensor grad_output, SymInt[1] output_size, SymInt[3] input_size, bool align_corners, float? scales=None, *, Tensor(a!) grad_input) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor upsample_linear1d_backward_symint_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales); +@Namespace("at") public static native @ByRef Tensor upsample_linear1d_backward_symint_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales); @Namespace("at") public static native @ByRef Tensor upsample_linear1d_backward_symint_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size, @Cast("bool") boolean align_corners); @@ -56727,14 +56681,14 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_linear1d_backward(Tensor grad_output, SymInt[1] output_size, SymInt[3] input_size, bool align_corners, float? scales=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor upsample_linear1d_backward(@Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales); +@Namespace("at") public static native @ByVal Tensor upsample_linear1d_backward(@Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales); @Namespace("at") public static native @ByVal Tensor upsample_linear1d_backward(@Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size, @Cast("bool") boolean align_corners); -@Namespace("at") public static native @ByVal Tensor upsample_linear1d_backward(@Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales); +@Namespace("at") public static native @ByVal Tensor upsample_linear1d_backward(@Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales); @Namespace("at") public static native @ByVal Tensor upsample_linear1d_backward(@Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] input_size, @Cast("bool") boolean align_corners); // aten::upsample_linear1d_backward(Tensor grad_output, SymInt[1] output_size, SymInt[3] input_size, bool align_corners, float? 
scales=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor upsample_linear1d_backward_symint(@Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales); +@Namespace("at") public static native @ByVal Tensor upsample_linear1d_backward_symint(@Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales); @Namespace("at") public static native @ByVal Tensor upsample_linear1d_backward_symint(@Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size, @Cast("bool") boolean align_corners); @@ -56776,9 +56730,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_nearest1d.out(Tensor self, SymInt[1] output_size, float? scales=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor upsample_nearest1d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRef output_size, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales); +@Namespace("at") public static native @ByRef Tensor upsample_nearest1d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRef output_size, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales); @Namespace("at") public static native @ByRef Tensor upsample_nearest1d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRef output_size); -@Namespace("at") public static native @ByRef Tensor upsample_nearest1d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales); +@Namespace("at") public static native @ByRef Tensor upsample_nearest1d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales); @Namespace("at") public static native @ByRef Tensor upsample_nearest1d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... output_size); @@ -56788,7 +56742,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_nearest1d.out(Tensor self, SymInt[1] output_size, float? scales=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor upsample_nearest1d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales); +@Namespace("at") public static native @ByRef Tensor upsample_nearest1d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales); @Namespace("at") public static native @ByRef Tensor upsample_nearest1d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size); @@ -56797,14 +56751,14 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_nearest1d(Tensor self, SymInt[1] output_size, float? 
scales=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor upsample_nearest1d(@Const @ByRef Tensor self, @ByVal LongArrayRef output_size, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales); +@Namespace("at") public static native @ByVal Tensor upsample_nearest1d(@Const @ByRef Tensor self, @ByVal LongArrayRef output_size, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales); @Namespace("at") public static native @ByVal Tensor upsample_nearest1d(@Const @ByRef Tensor self, @ByVal LongArrayRef output_size); -@Namespace("at") public static native @ByVal Tensor upsample_nearest1d(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales); +@Namespace("at") public static native @ByVal Tensor upsample_nearest1d(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales); @Namespace("at") public static native @ByVal Tensor upsample_nearest1d(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... output_size); // aten::upsample_nearest1d(Tensor self, SymInt[1] output_size, float? scales=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor upsample_nearest1d_symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales); +@Namespace("at") public static native @ByVal Tensor upsample_nearest1d_symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales); @Namespace("at") public static native @ByVal Tensor upsample_nearest1d_symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size); @@ -56836,9 +56790,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_nearest1d_backward.grad_input(Tensor grad_output, SymInt[1] output_size, SymInt[3] input_size, float? scales=None, *, Tensor(a!) grad_input) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor upsample_nearest1d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales); +@Namespace("at") public static native @ByRef Tensor upsample_nearest1d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales); @Namespace("at") public static native @ByRef Tensor upsample_nearest1d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size); -@Namespace("at") public static native @ByRef Tensor upsample_nearest1d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] input_size, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales); +@Namespace("at") public static native @ByRef Tensor upsample_nearest1d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] input_size, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales); @Namespace("at") public static native @ByRef Tensor upsample_nearest1d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... input_size); @@ -56848,7 +56802,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_nearest1d_backward.grad_input(Tensor grad_output, SymInt[1] output_size, SymInt[3] input_size, float? scales=None, *, Tensor(a!) grad_input) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor upsample_nearest1d_backward_symint_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales); +@Namespace("at") public static native @ByRef Tensor upsample_nearest1d_backward_symint_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales); @Namespace("at") public static native @ByRef Tensor upsample_nearest1d_backward_symint_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size); @@ -56857,14 +56811,14 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_nearest1d_backward(Tensor grad_output, SymInt[1] output_size, SymInt[3] input_size, float? 
scales=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor upsample_nearest1d_backward(@Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales); +@Namespace("at") public static native @ByVal Tensor upsample_nearest1d_backward(@Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales); @Namespace("at") public static native @ByVal Tensor upsample_nearest1d_backward(@Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size); -@Namespace("at") public static native @ByVal Tensor upsample_nearest1d_backward(@Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] input_size, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales); +@Namespace("at") public static native @ByVal Tensor upsample_nearest1d_backward(@Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] input_size, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales); @Namespace("at") public static native @ByVal Tensor upsample_nearest1d_backward(@Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... input_size); // aten::upsample_nearest1d_backward(Tensor grad_output, SymInt[1] output_size, SymInt[3] input_size, float? scales=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor upsample_nearest1d_backward_symint(@Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales); +@Namespace("at") public static native @ByVal Tensor upsample_nearest1d_backward_symint(@Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales); @Namespace("at") public static native @ByVal Tensor upsample_nearest1d_backward_symint(@Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size); @@ -56906,9 +56860,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_nearest2d.out(Tensor self, SymInt[2] output_size, float? scales_h=None, float? scales_w=None, *, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor upsample_nearest2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRef output_size, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByRef Tensor upsample_nearest2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRef output_size, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByRef Tensor upsample_nearest2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRef output_size); -@Namespace("at") public static native @ByRef Tensor upsample_nearest2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByRef Tensor upsample_nearest2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByRef Tensor upsample_nearest2d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... output_size); @@ -56918,7 +56872,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_nearest2d.out(Tensor self, SymInt[2] output_size, float? scales_h=None, float? scales_w=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor upsample_nearest2d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByRef Tensor upsample_nearest2d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByRef Tensor upsample_nearest2d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size); @@ -56927,14 +56881,14 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_nearest2d(Tensor self, SymInt[2] output_size, float? scales_h=None, float? 
scales_w=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor upsample_nearest2d(@Const @ByRef Tensor self, @ByVal LongArrayRef output_size, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByVal Tensor upsample_nearest2d(@Const @ByRef Tensor self, @ByVal LongArrayRef output_size, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByVal Tensor upsample_nearest2d(@Const @ByRef Tensor self, @ByVal LongArrayRef output_size); -@Namespace("at") public static native @ByVal Tensor upsample_nearest2d(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByVal Tensor upsample_nearest2d(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByVal Tensor upsample_nearest2d(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... output_size); // aten::upsample_nearest2d(Tensor self, SymInt[2] output_size, float? scales_h=None, float? scales_w=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor upsample_nearest2d_symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByVal Tensor upsample_nearest2d_symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByVal Tensor upsample_nearest2d_symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size); @@ -56966,9 +56920,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_nearest2d_backward.grad_input(Tensor grad_output, SymInt[2] output_size, SymInt[4] input_size, float? scales_h=None, float? scales_w=None, *, Tensor(a!) grad_input) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor upsample_nearest2d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByRef Tensor upsample_nearest2d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByRef Tensor upsample_nearest2d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size); -@Namespace("at") public static native @ByRef Tensor upsample_nearest2d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] input_size, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByRef Tensor upsample_nearest2d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] input_size, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByRef Tensor upsample_nearest2d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... input_size); @@ -56978,7 +56932,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_nearest2d_backward.grad_input(Tensor grad_output, SymInt[2] output_size, SymInt[4] input_size, float? scales_h=None, float? scales_w=None, *, Tensor(a!) grad_input) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor upsample_nearest2d_backward_symint_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByRef Tensor upsample_nearest2d_backward_symint_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByRef Tensor upsample_nearest2d_backward_symint_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size); @@ -56987,14 +56941,14 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_nearest2d_backward(Tensor grad_output, SymInt[2] output_size, SymInt[4] input_size, float? scales_h=None, float? scales_w=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor upsample_nearest2d_backward(@Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByVal Tensor upsample_nearest2d_backward(@Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByVal Tensor upsample_nearest2d_backward(@Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size); -@Namespace("at") public static native @ByVal Tensor upsample_nearest2d_backward(@Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] input_size, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByVal Tensor upsample_nearest2d_backward(@Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] input_size, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByVal Tensor upsample_nearest2d_backward(@Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... input_size); // aten::upsample_nearest2d_backward(Tensor grad_output, SymInt[2] output_size, SymInt[4] input_size, float? scales_h=None, float? 
scales_w=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor upsample_nearest2d_backward_symint(@Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByVal Tensor upsample_nearest2d_backward_symint(@Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByVal Tensor upsample_nearest2d_backward_symint(@Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size); @@ -57036,9 +56990,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_nearest3d.out(Tensor self, SymInt[3] output_size, float? scales_d=None, float? scales_h=None, float? scales_w=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor upsample_nearest3d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRef output_size, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByRef Tensor upsample_nearest3d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRef output_size, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByRef Tensor upsample_nearest3d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRef output_size); -@Namespace("at") public static native @ByRef Tensor upsample_nearest3d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByRef Tensor upsample_nearest3d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByRef Tensor upsample_nearest3d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... output_size); @@ -57048,7 +57002,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_nearest3d.out(Tensor self, SymInt[3] output_size, float? scales_d=None, float? scales_h=None, float? scales_w=None, *, Tensor(a!) out) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor upsample_nearest3d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByRef Tensor upsample_nearest3d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByRef Tensor upsample_nearest3d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size); @@ -57057,14 +57011,14 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_nearest3d(Tensor self, SymInt[3] output_size, float? scales_d=None, float? scales_h=None, float? scales_w=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor upsample_nearest3d(@Const @ByRef Tensor self, @ByVal LongArrayRef output_size, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByVal Tensor upsample_nearest3d(@Const @ByRef Tensor self, @ByVal LongArrayRef output_size, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByVal Tensor upsample_nearest3d(@Const @ByRef Tensor self, @ByVal LongArrayRef output_size); -@Namespace("at") public static native @ByVal Tensor upsample_nearest3d(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByVal Tensor upsample_nearest3d(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByVal Tensor upsample_nearest3d(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... output_size); // aten::upsample_nearest3d(Tensor self, SymInt[3] output_size, float? scales_d=None, float? scales_h=None, float? 
scales_w=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor upsample_nearest3d_symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByVal Tensor upsample_nearest3d_symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByVal Tensor upsample_nearest3d_symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size); @@ -57096,9 +57050,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_nearest3d_backward.grad_input(Tensor grad_output, SymInt[3] output_size, SymInt[5] input_size, float? scales_d=None, float? scales_h=None, float? scales_w=None, *, Tensor(a!) grad_input) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor upsample_nearest3d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByRef Tensor upsample_nearest3d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByRef Tensor upsample_nearest3d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size); -@Namespace("at") public static native @ByRef Tensor upsample_nearest3d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] input_size, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByRef Tensor upsample_nearest3d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] input_size, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByRef Tensor upsample_nearest3d_backward_out(@ByRef Tensor 
grad_input, @Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... input_size); @@ -57108,7 +57062,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_nearest3d_backward.grad_input(Tensor grad_output, SymInt[3] output_size, SymInt[5] input_size, float? scales_d=None, float? scales_h=None, float? scales_w=None, *, Tensor(a!) grad_input) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor upsample_nearest3d_backward_symint_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByRef Tensor upsample_nearest3d_backward_symint_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByRef Tensor upsample_nearest3d_backward_symint_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size); @@ -57117,14 +57071,14 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_nearest3d_backward(Tensor grad_output, SymInt[3] output_size, SymInt[5] input_size, float? scales_d=None, float? scales_h=None, float? 
scales_w=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor upsample_nearest3d_backward(@Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByVal Tensor upsample_nearest3d_backward(@Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByVal Tensor upsample_nearest3d_backward(@Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size); -@Namespace("at") public static native @ByVal Tensor upsample_nearest3d_backward(@Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] input_size, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByVal Tensor upsample_nearest3d_backward(@Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] input_size, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByVal Tensor upsample_nearest3d_backward(@Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... input_size); // aten::upsample_nearest3d_backward(Tensor grad_output, SymInt[3] output_size, SymInt[5] input_size, float? scales_d=None, float? scales_h=None, float? 
scales_w=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor upsample_nearest3d_backward_symint(@Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByVal Tensor upsample_nearest3d_backward_symint(@Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByVal Tensor upsample_nearest3d_backward_symint(@Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size); @@ -57166,9 +57120,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_trilinear3d.out(Tensor self, SymInt[3] output_size, bool align_corners, float? scales_d=None, float? scales_h=None, float? scales_w=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor upsample_trilinear3d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRef output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByRef Tensor upsample_trilinear3d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRef output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByRef Tensor upsample_trilinear3d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRef output_size, @Cast("bool") boolean align_corners); -@Namespace("at") public static native @ByRef Tensor upsample_trilinear3d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByRef Tensor upsample_trilinear3d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByRef Tensor upsample_trilinear3d_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) 
@StdVector("int64_t") long[] output_size, @Cast("bool") boolean align_corners); @@ -57178,7 +57132,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_trilinear3d.out(Tensor self, SymInt[3] output_size, bool align_corners, float? scales_d=None, float? scales_h=None, float? scales_w=None, *, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor upsample_trilinear3d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByRef Tensor upsample_trilinear3d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByRef Tensor upsample_trilinear3d_symint_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size, @Cast("bool") boolean align_corners); @@ -57187,14 +57141,14 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_trilinear3d(Tensor self, SymInt[3] output_size, bool align_corners, float? scales_d=None, float? scales_h=None, float? scales_w=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor upsample_trilinear3d(@Const @ByRef Tensor self, @ByVal LongArrayRef output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByVal Tensor upsample_trilinear3d(@Const @ByRef Tensor self, @ByVal LongArrayRef output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByVal Tensor upsample_trilinear3d(@Const @ByRef Tensor self, @ByVal LongArrayRef output_size, @Cast("bool") boolean align_corners); -@Namespace("at") public static native @ByVal Tensor upsample_trilinear3d(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByVal Tensor upsample_trilinear3d(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = 
"std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByVal Tensor upsample_trilinear3d(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @Cast("bool") boolean align_corners); // aten::upsample_trilinear3d(Tensor self, SymInt[3] output_size, bool align_corners, float? scales_d=None, float? scales_h=None, float? scales_w=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor upsample_trilinear3d_symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByVal Tensor upsample_trilinear3d_symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByVal Tensor upsample_trilinear3d_symint(@Const @ByRef Tensor self, @ByVal SymIntArrayRef output_size, @Cast("bool") boolean align_corners); @@ -57226,9 +57180,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_trilinear3d_backward.grad_input(Tensor grad_output, SymInt[3] output_size, SymInt[5] input_size, bool align_corners, float? scales_d=None, float? scales_h=None, float? scales_w=None, *, Tensor(a!) grad_input) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor upsample_trilinear3d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByRef Tensor upsample_trilinear3d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByRef Tensor upsample_trilinear3d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size, @Cast("bool") boolean align_corners); -@Namespace("at") public static native @ByRef Tensor upsample_trilinear3d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByRef Tensor upsample_trilinear3d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByRef Tensor upsample_trilinear3d_backward_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] input_size, @Cast("bool") boolean align_corners); @@ -57238,7 +57192,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_trilinear3d_backward.grad_input(Tensor grad_output, SymInt[3] output_size, SymInt[5] input_size, bool align_corners, float? scales_d=None, float? scales_h=None, float? scales_w=None, *, Tensor(a!) grad_input) -> Tensor(a!) 
-@Namespace("at") public static native @ByRef Tensor upsample_trilinear3d_backward_symint_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByRef Tensor upsample_trilinear3d_backward_symint_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByRef Tensor upsample_trilinear3d_backward_symint_out(@ByRef Tensor grad_input, @Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size, @Cast("bool") boolean align_corners); @@ -57247,14 +57201,14 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::upsample_trilinear3d_backward(Tensor grad_output, SymInt[3] output_size, SymInt[5] input_size, bool align_corners, float? scales_d=None, float? scales_h=None, float? scales_w=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor upsample_trilinear3d_backward(@Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByVal Tensor upsample_trilinear3d_backward(@Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByVal Tensor upsample_trilinear3d_backward(@Const @ByRef Tensor grad_output, @ByVal LongArrayRef output_size, @ByVal LongArrayRef input_size, @Cast("bool") boolean align_corners); -@Namespace("at") public static native @ByVal Tensor upsample_trilinear3d_backward(@Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByVal Tensor upsample_trilinear3d_backward(@Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] input_size, 
@Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByVal Tensor upsample_trilinear3d_backward(@Const @ByRef Tensor grad_output, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] output_size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] input_size, @Cast("bool") boolean align_corners); // aten::upsample_trilinear3d_backward(Tensor grad_output, SymInt[3] output_size, SymInt[5] input_size, bool align_corners, float? scales_d=None, float? scales_h=None, float? scales_w=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor upsample_trilinear3d_backward_symint(@Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "c10::optional(c10::nullopt)") DoubleOptional scales_w); +@Namespace("at") public static native @ByVal Tensor upsample_trilinear3d_backward_symint(@Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size, @Cast("bool") boolean align_corners, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_d, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_h, @ByVal(nullValue = "std::optional(::std::nullopt)") DoubleOptional scales_w); @Namespace("at") public static native @ByVal Tensor upsample_trilinear3d_backward_symint(@Const @ByRef Tensor grad_output, @ByVal SymIntArrayRef output_size, @ByVal SymIntArrayRef input_size, @Cast("bool") boolean align_corners); @@ -57385,7 +57339,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::vander(Tensor x, int? N=None, bool increasing=False) -> Tensor -@Namespace("at") public static native @ByVal Tensor vander(@Const @ByRef Tensor x, @ByVal(nullValue = "c10::optional(c10::nullopt)") LongOptional N, @Cast("bool") boolean increasing/*=false*/); +@Namespace("at") public static native @ByVal Tensor vander(@Const @ByRef Tensor x, @ByVal(nullValue = "std::optional(::std::nullopt)") LongOptional N, @Cast("bool") boolean increasing/*=false*/); @Namespace("at") public static native @ByVal Tensor vander(@Const @ByRef Tensor x); @@ -57425,9 +57379,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @Namespace("at") public static native @ByVal Tensor var(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean unbiased); // aten::var.correction(Tensor self, int[1]? dim=None, *, Scalar? 
correction=None, bool keepdim=False) -> Tensor -@Namespace("at") public static native @ByVal Tensor var(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); +@Namespace("at") public static native @ByVal Tensor var(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); @Namespace("at") public static native @ByVal Tensor var(@Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal Tensor var(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); +@Namespace("at") public static native @ByVal Tensor var(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); // aten::var.out(Tensor self, int[1]? dim, bool unbiased=True, bool keepdim=False, *, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor var_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal LongArrayRefOptional dim, @Cast("bool") boolean unbiased, @Cast("bool") boolean keepdim/*=false*/); @@ -57439,9 +57393,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @Namespace("at") public static native @ByRef Tensor var_outf(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean unbiased, @Cast("bool") boolean keepdim, @ByRef Tensor out); // aten::var.correction_out(Tensor self, int[1]? dim=None, *, Scalar? correction=None, bool keepdim=False, Tensor(a!) out) -> Tensor(a!) 
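// Note on the c10::optional(c10::nullopt) -> std::optional(::std::nullopt) change in the
// nullValue expressions above and below: it only alters the C++ default that JavaCPP
// substitutes when an optional argument is omitted or passed as null; the Java-visible
// overloads keep their signatures, so existing callers should not need changes for
// PyTorch 2.4. A minimal, hypothetical usage sketch follows (the class name, tensor
// shape, and variable names are illustrative only; it assumes the usual
// org.bytedeco.pytorch artifacts are on the classpath and uses only overloads declared
// in this file):
//
// import org.bytedeco.pytorch.Tensor;
// import static org.bytedeco.pytorch.global.torch.*;
class NulloptDefaultsSketch {
    public static void main(String[] args) {
        // torch::ones, long... overload; default TensorOptions are used natively.
        org.bytedeco.pytorch.Tensor t = org.bytedeco.pytorch.global.torch.torch_ones(2, 3);
        // at::var short overload; the optional correction/keepdim arguments are omitted,
        // so the nullValue defaults (now std::optional-based) apply on the native side.
        org.bytedeco.pytorch.Tensor v = org.bytedeco.pytorch.global.torch.var(t);
        // at::zeros_like short overload; memory_format is likewise left at ::std::nullopt.
        org.bytedeco.pytorch.Tensor z = org.bytedeco.pytorch.global.torch.zeros_like(t);
        System.out.println("var = " + v + ", zeros_like = " + z);
    }
}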
-@Namespace("at") public static native @ByRef Tensor var_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); +@Namespace("at") public static native @ByRef Tensor var_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); @Namespace("at") public static native @ByRef Tensor var_out(@ByRef Tensor out, @Const @ByRef Tensor self); -@Namespace("at") public static native @ByRef Tensor var_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); +@Namespace("at") public static native @ByRef Tensor var_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); // aten::var.correction_out(Tensor self, int[1]? dim=None, *, Scalar? correction=None, bool keepdim=False, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor var_outf(@Const @ByRef Tensor self, @ByVal LongArrayRefOptional dim, @Const @ByRef ScalarOptional correction, @Cast("bool") boolean keepdim, @ByRef Tensor out); @Namespace("at") public static native @ByRef Tensor var_outf(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Const @ByRef ScalarOptional correction, @Cast("bool") boolean keepdim, @ByRef Tensor out); @@ -57462,15 +57416,15 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @Namespace("at") public static native @ByRef Tensor var_outf(@Const @ByRef Tensor self, @ByVal DimnameVector dim, @Cast("bool") boolean unbiased, @Cast("bool") boolean keepdim, @ByRef Tensor out); // aten::var.correction_names(Tensor self, Dimname[1] dim, *, Scalar? 
correction=None, bool keepdim=False) -> Tensor -@Namespace("at") public static native @ByVal Tensor var(@Const @ByRef Tensor self, @ByVal DimnameArrayRef dim, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); +@Namespace("at") public static native @ByVal Tensor var(@Const @ByRef Tensor self, @ByVal DimnameArrayRef dim, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); @Namespace("at") public static native @ByVal Tensor var(@Const @ByRef Tensor self, @ByVal DimnameArrayRef dim); -@Namespace("at") public static native @ByVal Tensor var(@Const @ByRef Tensor self, @ByVal DimnameVector dim, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); +@Namespace("at") public static native @ByVal Tensor var(@Const @ByRef Tensor self, @ByVal DimnameVector dim, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); @Namespace("at") public static native @ByVal Tensor var(@Const @ByRef Tensor self, @ByVal DimnameVector dim); // aten::var.correction_names_out(Tensor self, Dimname[1] dim, *, Scalar? correction=None, bool keepdim=False, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor var_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal DimnameArrayRef dim, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); +@Namespace("at") public static native @ByRef Tensor var_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal DimnameArrayRef dim, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); @Namespace("at") public static native @ByRef Tensor var_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal DimnameArrayRef dim); -@Namespace("at") public static native @ByRef Tensor var_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal DimnameVector dim, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); +@Namespace("at") public static native @ByRef Tensor var_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal DimnameVector dim, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); @Namespace("at") public static native @ByRef Tensor var_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal DimnameVector dim); // aten::var.correction_names_out(Tensor self, Dimname[1] dim, *, Scalar? correction=None, bool keepdim=False, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor var_outf(@Const @ByRef Tensor self, @ByVal DimnameArrayRef dim, @Const @ByRef ScalarOptional correction, @Cast("bool") boolean keepdim, @ByRef Tensor out); @@ -57513,9 +57467,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @Namespace("at") public static native @ByVal T_TensorTensor_T var_mean(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Cast("bool") boolean unbiased); // aten::var_mean.correction(Tensor self, int[1]? dim=None, *, Scalar? 
correction=None, bool keepdim=False) -> (Tensor, Tensor) -@Namespace("at") public static native @ByVal T_TensorTensor_T var_mean(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); +@Namespace("at") public static native @ByVal T_TensorTensor_T var_mean(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); @Namespace("at") public static native @ByVal T_TensorTensor_T var_mean(@Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal T_TensorTensor_T var_mean(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); +@Namespace("at") public static native @ByVal T_TensorTensor_T var_mean(@Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); // aten::var_mean.names_dim(Tensor self, Dimname[1] dim, bool unbiased=True, bool keepdim=False) -> (Tensor, Tensor) @Namespace("at") public static native @ByVal T_TensorTensor_T var_mean(@Const @ByRef Tensor self, @ByVal DimnameArrayRef dim, @Cast("bool") boolean unbiased, @Cast("bool") boolean keepdim/*=false*/); @@ -57524,15 +57478,15 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @Namespace("at") public static native @ByVal T_TensorTensor_T var_mean(@Const @ByRef Tensor self, @ByVal DimnameVector dim, @Cast("bool") boolean unbiased); // aten::var_mean.correction_names(Tensor self, Dimname[1] dim, *, Scalar? correction=None, bool keepdim=False) -> (Tensor, Tensor) -@Namespace("at") public static native @ByVal T_TensorTensor_T var_mean(@Const @ByRef Tensor self, @ByVal DimnameArrayRef dim, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); +@Namespace("at") public static native @ByVal T_TensorTensor_T var_mean(@Const @ByRef Tensor self, @ByVal DimnameArrayRef dim, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); @Namespace("at") public static native @ByVal T_TensorTensor_T var_mean(@Const @ByRef Tensor self, @ByVal DimnameArrayRef dim); -@Namespace("at") public static native @ByVal T_TensorTensor_T var_mean(@Const @ByRef Tensor self, @ByVal DimnameVector dim, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); +@Namespace("at") public static native @ByVal T_TensorTensor_T var_mean(@Const @ByRef Tensor self, @ByVal DimnameVector dim, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); @Namespace("at") public static native @ByVal T_TensorTensor_T var_mean(@Const @ByRef Tensor self, @ByVal DimnameVector dim); // aten::var_mean.correction_out(Tensor self, int[1]? dim=None, *, Scalar? 
correction=None, bool keepdim=False, Tensor(a!) out0, Tensor(b!) out1) -> (Tensor(a!), Tensor(b!)) -@Namespace("at") public static native @ByVal T_TensorTensor_T var_mean_out(@ByRef Tensor out0, @ByRef Tensor out1, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); +@Namespace("at") public static native @ByVal T_TensorTensor_T var_mean_out(@ByRef Tensor out0, @ByRef Tensor out1, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") LongArrayRefOptional dim, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); @Namespace("at") public static native @ByVal T_TensorTensor_T var_mean_out(@ByRef Tensor out0, @ByRef Tensor out1, @Const @ByRef Tensor self); -@Namespace("at") public static native @ByVal T_TensorTensor_T var_mean_out(@ByRef Tensor out0, @ByRef Tensor out1, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); +@Namespace("at") public static native @ByVal T_TensorTensor_T var_mean_out(@ByRef Tensor out0, @ByRef Tensor out1, @Const @ByRef Tensor self, @ByVal(nullValue = "at::OptionalIntArrayRef(::std::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Const @ByRef(nullValue = "std::optional(::std::nullopt)") ScalarOptional correction, @Cast("bool") boolean keepdim/*=false*/); // aten::var_mean.correction_out(Tensor self, int[1]? dim=None, *, Scalar? correction=None, bool keepdim=False, Tensor(a!) out0, Tensor(b!) out1) -> (Tensor(a!), Tensor(b!)) @Namespace("at") public static native @ByVal T_TensorTensor_T var_mean_outf(@Const @ByRef Tensor self, @ByVal LongArrayRefOptional dim, @Const @ByRef ScalarOptional correction, @Cast("bool") boolean keepdim, @ByRef Tensor out0, @ByRef Tensor out1); @Namespace("at") public static native @ByVal T_TensorTensor_T var_mean_outf(@Const @ByRef Tensor self, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim, @Const @ByRef ScalarOptional correction, @Cast("bool") boolean keepdim, @ByRef Tensor out0, @ByRef Tensor out1); @@ -58177,13 +58131,13 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // aten::zeros_like(Tensor self, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, MemoryFormat? memory_format=None) -> Tensor -@Namespace("at") public static native @ByVal Tensor zeros_like(@Const @ByRef Tensor self, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByVal Tensor zeros_like(@Const @ByRef Tensor self, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByVal Tensor zeros_like(@Const @ByRef Tensor self); // aten::zeros_like(Tensor self, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, MemoryFormat? 
memory_format=None) -> Tensor @Namespace("at") public static native @ByVal Tensor zeros_like(@Const @ByRef Tensor self, @ByVal ScalarTypeOptional dtype, @ByVal LayoutOptional layout, @ByVal DeviceOptional device, @ByVal BoolOptional pin_memory, @ByVal MemoryFormatOptional memory_format); // aten::zeros_like.out(Tensor self, *, MemoryFormat? memory_format=None, Tensor(a!) out) -> Tensor(a!) -@Namespace("at") public static native @ByRef Tensor zeros_like_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("at") public static native @ByRef Tensor zeros_like_out(@ByRef Tensor out, @Const @ByRef Tensor self, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("at") public static native @ByRef Tensor zeros_like_out(@ByRef Tensor out, @Const @ByRef Tensor self); // aten::zeros_like.out(Tensor self, *, MemoryFormat? memory_format=None, Tensor(a!) out) -> Tensor(a!) @Namespace("at") public static native @ByRef Tensor zeros_like_outf(@Const @ByRef Tensor self, @ByVal MemoryFormatOptional memory_format, @ByRef Tensor out); @@ -58287,6 +58241,8 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include // #include +// #include +// #include // #include // #include // #include @@ -58386,6 +58342,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include // #include +// #include // #include // #include // #include @@ -58408,6 +58365,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include // #include +// #include // #include // #include // #include @@ -58431,6 +58389,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include // #include +// #include // #include // #include // #include @@ -58463,6 +58422,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include // #include +// #include // #include // #include // #include @@ -58493,6 +58453,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include // #include +// #include // #include // #include // #include @@ -58511,6 +58472,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include // #include +// #include // #include // #include // #include @@ -58534,6 +58496,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include // #include +// #include // #include // #include // #include @@ -58547,7 +58510,12 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include // #include +// #include +// #include +// #include // #include +// #include +// #include // #include // #include // #include @@ -58693,6 +58661,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include // #include +// #include // #include // #include // #include @@ -59304,6 +59273,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include // #include +// #include // #include // #include // #include @@ -59896,6 +59866,8 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include @Namespace("at::native") public static native @Cast("bool") boolean nested_tensor_impl_is_contiguous(@Const NestedTensorImpl nt); + + // Targeting ../NestedTensorImpl.java @@ -59984,8 +59956,6 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include -// #include -// 
#include // #include // #include @@ -60096,6 +60066,75 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // namespace torch::autograd +// Parsed from torch/csrc/utils/torch_dispatch_mode.h + +// #pragma once + +// #include +// Targeting ../StashTorchDispatchModeGuard.java + + +// Targeting ../StashTorchDispatchStackGuard.java + + + + // namespace torch::torch_dispatch_mode + + +// Parsed from torch/csrc/dynamo/compiled_autograd.h + +// #pragma once +// #include +// #include +// #include +// #include +// #include +// #include +// #include +// #include +// #include +// #include +// #include +// #include + +// see [Note: Compiled Autograd] +// Targeting ../SizeInput.java + + +// Targeting ../CacheKeyBuffer.java + + +// Targeting ../CacheKey.java + + +// Targeting ../NodeCall.java + + +// Targeting ../NodeCalls.java + + +// Targeting ../DynamoTensorArg.java + + +// Targeting ../TensorArgs.java + + +// Targeting ../AutogradCompilerCall.java + + +// Targeting ../CompiledNodeArgs.java + + +// Targeting ../TraceState.java + + +// Targeting ../SwapSavedVariables.java + + + + // namespace torch::dynamo::autograd + + // Parsed from torch/csrc/autograd/custom_function.h // #pragma once @@ -60107,6 +60146,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include // #include +// #include // #include @@ -60127,6 +60167,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { /// /// /// +/// // Targeting ../FunctionCrossMapLRN2d.java @@ -61576,6 +61617,8 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include // #include +// #include +// #include // #include // #include // #include @@ -61675,6 +61718,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include // #include +// #include // #include // #include // #include @@ -61697,6 +61741,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include // #include +// #include // #include // #include // #include @@ -61720,6 +61765,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include // #include +// #include // #include // #include // #include @@ -61752,6 +61798,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include // #include +// #include // #include // #include // #include @@ -61782,6 +61829,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include // #include +// #include // #include // #include // #include @@ -61800,6 +61848,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include // #include +// #include // #include // #include // #include @@ -61823,6 +61872,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include // #include +// #include // #include // #include // #include @@ -61836,7 +61886,12 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include // #include +// #include +// #include +// #include // #include +// #include +// #include // #include // #include // #include @@ -61982,6 +62037,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include // #include +// #include // #include // #include // #include @@ -62593,6 +62649,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include // #include +// #include // #include // #include // #include @@ -62923,9 +62980,7 @@ public class torch extends 
org.bytedeco.pytorch.presets.torch { @Const @ByRef Device arg4, @Const @ByRef SymIntArrayRefOptional self_sizes); -@Namespace("at::indexing::impl") public static native @ByVal Tensor boolToIndexingTensorCPUOrCUDA( - @Const @ByRef Tensor self, - @Cast("bool") boolean value); +@Namespace("at::indexing::impl") public static native @ByVal Tensor boolToIndexingTensorCPUOrCUDA(@Const @ByRef Tensor self, @Cast("bool") boolean value); @Namespace("at::indexing::impl") public static native @ByVal Tensor boolToIndexingTensorNonNativeDeviceType( @Const @ByRef Tensor self, @@ -63332,12 +63387,6 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @Cast("std::ostream*") @ByRef Pointer stream, @Cast("const torch::detail::TensorDataContainer*") @ByRef Pointer tensor_data_container); -// FIXME: There is no `operator<<` overload for `at::kBFloat16` type, -// and we need to convert it to `float` type using `operator float()` function -// defined in `c10/util/BFloat16.h`. -// Tracking issue: https://github.com/pytorch/pytorch/issues/28845 -@Namespace("torch::detail") public static native @Cast("std::ostream*") @ByRef @Name("operator <<") Pointer shiftLeft(@Cast("std::ostream*") @ByRef Pointer stream, @ByVal BFloat16 value); - @Namespace("torch::detail") public static native ScalarType compute_desired_dtype(ScalarType scalar_type); // We use `TensorDataContainer` to support converting the following data @@ -63492,6 +63541,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { // #include // #include // #include +// #include // #include // #include // #include @@ -63597,7 +63647,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { * {@code TensorOptions} specify additional configuration options for the returned * tensor, such as what type to interpret the {@code data} as. 
*/ -@Namespace("torch") public static native @ByVal Tensor _make_dep_token(@ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("torch") public static native @ByVal Tensor _make_dep_token(@ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("torch") public static native @ByVal Tensor _make_dep_token(); @Namespace("torch") public static native @ByVal @Name("_cudnn_init_dropout_state") Tensor torch__cudnn_init_dropout_state(double dropout, @Cast("bool") boolean train, @Cast("int64_t") long dropout_seed, @ByVal TensorOptions options); @Namespace("torch") public static native @ByVal @Name("arange") Tensor torch_arange(@Const @ByRef Scalar end, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options); @@ -63614,31 +63664,31 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @Namespace("torch") public static native @ByVal @Name("blackman_window") Tensor torch_blackman_window(@Cast("int64_t") long window_length); @Namespace("torch") public static native @ByVal @Name("blackman_window") Tensor torch_blackman_window(@Cast("int64_t") long window_length, @Cast("bool") boolean periodic, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options); @Namespace("torch") public static native @ByVal @Name("blackman_window") Tensor torch_blackman_window(@Cast("int64_t") long window_length, @Cast("bool") boolean periodic); -@Namespace("torch") public static native @ByVal @Name("empty") Tensor torch_empty(@ByVal LongArrayRef size, @ByVal DimnameListOptional names, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("torch") public static native @ByVal @Name("empty") Tensor torch_empty(@ByVal LongArrayRef size, @ByVal DimnameListOptional names, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("torch") public static native @ByVal @Name("empty") Tensor torch_empty(@ByVal LongArrayRef size, @ByVal DimnameListOptional names); -@Namespace("torch") public static native @ByVal @Name("empty") Tensor torch_empty(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal DimnameListOptional names, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("torch") public static native @ByVal @Name("empty") Tensor torch_empty(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal DimnameListOptional names, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("torch") public static native @ByVal @Name("empty") Tensor torch_empty(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal DimnameListOptional names); -@Namespace("torch") public static native @ByVal @Name("empty") Tensor torch_empty(@ByVal LongArrayRef size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("torch") public static 
native @ByVal @Name("empty") Tensor torch_empty(@ByVal LongArrayRef size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("torch") public static native @ByVal @Name("empty") Tensor torch_empty(@ByVal LongArrayRef size); -@Namespace("torch") public static native @ByVal @Name("empty") Tensor torch_empty(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("torch") public static native @ByVal @Name("empty") Tensor torch_empty(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("torch") public static native @ByVal @Name("empty") Tensor torch_empty(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... size); -@Namespace("torch") public static native @ByVal @Name("_empty_affine_quantized") Tensor torch__empty_affine_quantized(@ByVal LongArrayRef size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, double scale/*=1*/, @Cast("int64_t") long zero_point/*=0*/, @ByVal(nullValue = "c10::optional(c10::MemoryFormat::Contiguous)") MemoryFormatOptional memory_format); +@Namespace("torch") public static native @ByVal @Name("_empty_affine_quantized") Tensor torch__empty_affine_quantized(@ByVal LongArrayRef size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, double scale/*=1*/, @Cast("int64_t") long zero_point/*=0*/, @ByVal(nullValue = "std::optional(c10::MemoryFormat::Contiguous)") MemoryFormatOptional memory_format); @Namespace("torch") public static native @ByVal @Name("_empty_affine_quantized") Tensor torch__empty_affine_quantized(@ByVal LongArrayRef size); -@Namespace("torch") public static native @ByVal @Name("_empty_affine_quantized") Tensor torch__empty_affine_quantized(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, double scale/*=1*/, @Cast("int64_t") long zero_point/*=0*/, @ByVal(nullValue = "c10::optional(c10::MemoryFormat::Contiguous)") MemoryFormatOptional memory_format); +@Namespace("torch") public static native @ByVal @Name("_empty_affine_quantized") Tensor torch__empty_affine_quantized(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, double scale/*=1*/, @Cast("int64_t") long zero_point/*=0*/, @ByVal(nullValue = "std::optional(c10::MemoryFormat::Contiguous)") MemoryFormatOptional memory_format); @Namespace("torch") public static native @ByVal @Name("_empty_affine_quantized") Tensor torch__empty_affine_quantized(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... 
size); -@Namespace("torch") public static native @ByVal Tensor _empty_affine_quantized_symint(@ByVal SymIntArrayRef size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, double scale/*=1*/, @Cast("int64_t") long zero_point/*=0*/, @ByVal(nullValue = "c10::optional(c10::MemoryFormat::Contiguous)") MemoryFormatOptional memory_format); +@Namespace("torch") public static native @ByVal Tensor _empty_affine_quantized_symint(@ByVal SymIntArrayRef size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, double scale/*=1*/, @Cast("int64_t") long zero_point/*=0*/, @ByVal(nullValue = "std::optional(c10::MemoryFormat::Contiguous)") MemoryFormatOptional memory_format); @Namespace("torch") public static native @ByVal Tensor _empty_affine_quantized_symint(@ByVal SymIntArrayRef size); -@Namespace("torch") public static native @ByVal @Name("_empty_per_channel_affine_quantized") Tensor torch__empty_per_channel_affine_quantized(@ByVal LongArrayRef size, @Const @ByRef Tensor scales, @Const @ByRef Tensor zero_points, @Cast("int64_t") long axis, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::MemoryFormat::Contiguous)") MemoryFormatOptional memory_format); +@Namespace("torch") public static native @ByVal @Name("_empty_per_channel_affine_quantized") Tensor torch__empty_per_channel_affine_quantized(@ByVal LongArrayRef size, @Const @ByRef Tensor scales, @Const @ByRef Tensor zero_points, @Cast("int64_t") long axis, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(c10::MemoryFormat::Contiguous)") MemoryFormatOptional memory_format); @Namespace("torch") public static native @ByVal @Name("_empty_per_channel_affine_quantized") Tensor torch__empty_per_channel_affine_quantized(@ByVal LongArrayRef size, @Const @ByRef Tensor scales, @Const @ByRef Tensor zero_points, @Cast("int64_t") long axis); -@Namespace("torch") public static native @ByVal @Name("_empty_per_channel_affine_quantized") Tensor torch__empty_per_channel_affine_quantized(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @Const @ByRef Tensor scales, @Const @ByRef Tensor zero_points, @Cast("int64_t") long axis, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::MemoryFormat::Contiguous)") MemoryFormatOptional memory_format); +@Namespace("torch") public static native @ByVal @Name("_empty_per_channel_affine_quantized") Tensor torch__empty_per_channel_affine_quantized(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @Const @ByRef Tensor scales, @Const @ByRef Tensor zero_points, @Cast("int64_t") long axis, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(c10::MemoryFormat::Contiguous)") MemoryFormatOptional memory_format); @Namespace("torch") public static native @ByVal @Name("_empty_per_channel_affine_quantized") Tensor torch__empty_per_channel_affine_quantized(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @Const @ByRef Tensor scales, @Const @ByRef Tensor zero_points, @Cast("int64_t") long axis); -@Namespace("torch") public static native @ByVal Tensor _empty_per_channel_affine_quantized_symint(@ByVal SymIntArrayRef size, @Const @ByRef Tensor scales, @Const @ByRef Tensor zero_points, @Cast("int64_t") long axis, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = 
"c10::optional(c10::MemoryFormat::Contiguous)") MemoryFormatOptional memory_format); +@Namespace("torch") public static native @ByVal Tensor _empty_per_channel_affine_quantized_symint(@ByVal SymIntArrayRef size, @Const @ByRef Tensor scales, @Const @ByRef Tensor zero_points, @Cast("int64_t") long axis, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(c10::MemoryFormat::Contiguous)") MemoryFormatOptional memory_format); @Namespace("torch") public static native @ByVal Tensor _empty_per_channel_affine_quantized_symint(@ByVal SymIntArrayRef size, @Const @ByRef Tensor scales, @Const @ByRef Tensor zero_points, @Cast("int64_t") long axis); -@Namespace("torch") public static native @ByVal @Name("empty_quantized") Tensor torch_empty_quantized(@ByVal LongArrayRef size, @Const @ByRef Tensor qtensor, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("torch") public static native @ByVal @Name("empty_quantized") Tensor torch_empty_quantized(@ByVal LongArrayRef size, @Const @ByRef Tensor qtensor, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("torch") public static native @ByVal @Name("empty_quantized") Tensor torch_empty_quantized(@ByVal LongArrayRef size, @Const @ByRef Tensor qtensor); -@Namespace("torch") public static native @ByVal @Name("empty_quantized") Tensor torch_empty_quantized(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @Const @ByRef Tensor qtensor, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("torch") public static native @ByVal @Name("empty_quantized") Tensor torch_empty_quantized(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @Const @ByRef Tensor qtensor, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("torch") public static native @ByVal @Name("empty_quantized") Tensor torch_empty_quantized(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @Const @ByRef Tensor qtensor); -@Namespace("torch") public static native @ByVal @Name("empty_like") Tensor torch_empty_like(@Const @ByRef Tensor self, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("torch") public static native @ByVal @Name("empty_like") Tensor torch_empty_like(@Const @ByRef Tensor self, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("torch") public static native @ByVal @Name("empty_like") Tensor torch_empty_like(@Const @ByRef Tensor self); @Namespace("torch") public static native @ByVal @Name("empty_strided") Tensor torch_empty_strided(@ByVal LongArrayRef size, @ByVal LongArrayRef stride, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options); @Namespace("torch") public static native @ByVal @Name("empty_strided") Tensor torch_empty_strided(@ByVal LongArrayRef size, @ByVal LongArrayRef stride); @@ -63656,11 +63706,11 @@ public class torch extends 
org.bytedeco.pytorch.presets.torch { @Namespace("torch") public static native @ByVal @Name("full") Tensor torch_full(@ByVal LongArrayRef size, @Const @ByRef Scalar fill_value); @Namespace("torch") public static native @ByVal @Name("full") Tensor torch_full(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @Const @ByRef Scalar fill_value, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options); @Namespace("torch") public static native @ByVal @Name("full") Tensor torch_full(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @Const @ByRef Scalar fill_value); -@Namespace("torch") public static native @ByVal @Name("full_like") Tensor torch_full_like(@Const @ByRef Tensor self, @Const @ByRef Scalar fill_value, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("torch") public static native @ByVal @Name("full_like") Tensor torch_full_like(@Const @ByRef Tensor self, @Const @ByRef Scalar fill_value, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("torch") public static native @ByVal @Name("full_like") Tensor torch_full_like(@Const @ByRef Tensor self, @Const @ByRef Scalar fill_value); -@Namespace("torch") public static native @ByVal @Name("from_file") Tensor torch_from_file(@StringView BytePointer filename, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional shared, @ByVal(nullValue = "c10::optional(0)") LongOptional size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options); +@Namespace("torch") public static native @ByVal @Name("from_file") Tensor torch_from_file(@StringView BytePointer filename, @ByVal(nullValue = "std::optional(::std::nullopt)") BoolOptional shared, @ByVal(nullValue = "std::optional(0)") LongOptional size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options); @Namespace("torch") public static native @ByVal @Name("from_file") Tensor torch_from_file(@StringView BytePointer filename); -@Namespace("torch") public static native @ByVal @Name("from_file") Tensor torch_from_file(@StringView String filename, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional shared, @ByVal(nullValue = "c10::optional(0)") LongOptional size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options); +@Namespace("torch") public static native @ByVal @Name("from_file") Tensor torch_from_file(@StringView String filename, @ByVal(nullValue = "std::optional(::std::nullopt)") BoolOptional shared, @ByVal(nullValue = "std::optional(0)") LongOptional size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options); @Namespace("torch") public static native @ByVal @Name("from_file") Tensor torch_from_file(@StringView String filename); @Namespace("torch") public static native @ByVal @Name("hann_window") Tensor torch_hann_window(@Cast("int64_t") long window_length, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options); @Namespace("torch") public static native @ByVal @Name("hann_window") Tensor torch_hann_window(@Cast("int64_t") long window_length); @@ -63704,7 +63754,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @Namespace("torch") public static native @ByVal @Name("ones") Tensor torch_ones(@ByVal LongArrayRef size); @Namespace("torch") public static native @ByVal @Name("ones") Tensor torch_ones(@ByVal 
@Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options); @Namespace("torch") public static native @ByVal @Name("ones") Tensor torch_ones(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... size); -@Namespace("torch") public static native @ByVal @Name("ones_like") Tensor torch_ones_like(@Const @ByRef Tensor self, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("torch") public static native @ByVal @Name("ones_like") Tensor torch_ones_like(@Const @ByRef Tensor self, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("torch") public static native @ByVal @Name("ones_like") Tensor torch_ones_like(@Const @ByRef Tensor self); @Namespace("torch") public static native @ByVal @Name("scalar_tensor") Tensor torch_scalar_tensor(@Const @ByRef Scalar s, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options); @Namespace("torch") public static native @ByVal @Name("scalar_tensor") Tensor torch_scalar_tensor(@Const @ByRef Scalar s); @@ -63724,7 +63774,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @Namespace("torch") public static native @ByVal @Name("rand") Tensor torch_rand(@ByVal LongArrayRef size, @ByVal GeneratorOptional generator); @Namespace("torch") public static native @ByVal @Name("rand") Tensor torch_rand(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal GeneratorOptional generator, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options); @Namespace("torch") public static native @ByVal @Name("rand") Tensor torch_rand(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal GeneratorOptional generator); -@Namespace("torch") public static native @ByVal @Name("rand_like") Tensor torch_rand_like(@Const @ByRef Tensor self, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("torch") public static native @ByVal @Name("rand_like") Tensor torch_rand_like(@Const @ByRef Tensor self, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("torch") public static native @ByVal @Name("rand_like") Tensor torch_rand_like(@Const @ByRef Tensor self); @Namespace("torch") public static native @ByVal @Name("randint") Tensor torch_randint(@Cast("int64_t") long high, @ByVal LongArrayRef size, @ByVal(nullValue = "at::TensorOptions(at::kLong)") TensorOptions options); @Namespace("torch") public static native @ByVal @Name("randint") Tensor torch_randint(@Cast("int64_t") long high, @ByVal LongArrayRef size); @@ -63742,9 +63792,9 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @Namespace("torch") public static native @ByVal @Name("randint") Tensor torch_randint(@Cast("int64_t") long low, @Cast("int64_t") long high, @ByVal LongArrayRef size, @ByVal GeneratorOptional generator); @Namespace("torch") public static native @ByVal @Name("randint") Tensor torch_randint(@Cast("int64_t") long low, @Cast("int64_t") long high, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") 
long[] size, @ByVal GeneratorOptional generator, @ByVal(nullValue = "at::TensorOptions(at::kLong)") TensorOptions options); @Namespace("torch") public static native @ByVal @Name("randint") Tensor torch_randint(@Cast("int64_t") long low, @Cast("int64_t") long high, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal GeneratorOptional generator); -@Namespace("torch") public static native @ByVal @Name("randint_like") Tensor torch_randint_like(@Const @ByRef Tensor self, @Cast("int64_t") long high, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("torch") public static native @ByVal @Name("randint_like") Tensor torch_randint_like(@Const @ByRef Tensor self, @Cast("int64_t") long high, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("torch") public static native @ByVal @Name("randint_like") Tensor torch_randint_like(@Const @ByRef Tensor self, @Cast("int64_t") long high); -@Namespace("torch") public static native @ByVal @Name("randint_like") Tensor torch_randint_like(@Const @ByRef Tensor self, @Cast("int64_t") long low, @Cast("int64_t") long high, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("torch") public static native @ByVal @Name("randint_like") Tensor torch_randint_like(@Const @ByRef Tensor self, @Cast("int64_t") long low, @Cast("int64_t") long high, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("torch") public static native @ByVal @Name("randint_like") Tensor torch_randint_like(@Const @ByRef Tensor self, @Cast("int64_t") long low, @Cast("int64_t") long high); @Namespace("torch") public static native @ByVal @Name("randn") Tensor torch_randn(@ByVal LongArrayRef size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options); @Namespace("torch") public static native @ByVal @Name("randn") Tensor torch_randn(@ByVal LongArrayRef size); @@ -63762,7 +63812,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @Namespace("torch") public static native @ByVal @Name("randn") Tensor torch_randn(@ByVal LongArrayRef size, @ByVal GeneratorOptional generator, @ByVal DimnameListOptional names); @Namespace("torch") public static native @ByVal @Name("randn") Tensor torch_randn(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal GeneratorOptional generator, @ByVal DimnameListOptional names, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options); @Namespace("torch") public static native @ByVal @Name("randn") Tensor torch_randn(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal GeneratorOptional generator, @ByVal DimnameListOptional names); -@Namespace("torch") public static native @ByVal @Name("randn_like") Tensor torch_randn_like(@Const @ByRef Tensor self, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("torch") public static native @ByVal @Name("randn_like") Tensor torch_randn_like(@Const @ByRef Tensor self, @ByVal(nullValue = "at::TensorOptions{}") 
TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("torch") public static native @ByVal @Name("randn_like") Tensor torch_randn_like(@Const @ByRef Tensor self); @Namespace("torch") public static native @ByVal @Name("randperm") Tensor torch_randperm(@Cast("int64_t") long n, @ByVal(nullValue = "at::TensorOptions(at::kLong)") TensorOptions options); @Namespace("torch") public static native @ByVal @Name("randperm") Tensor torch_randperm(@Cast("int64_t") long n); @@ -63784,8 +63834,10 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @Namespace("torch") public static native @ByVal @Name("zeros") Tensor torch_zeros(@ByVal LongArrayRef size); @Namespace("torch") public static native @ByVal @Name("zeros") Tensor torch_zeros(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options); @Namespace("torch") public static native @ByVal @Name("zeros") Tensor torch_zeros(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... size); -@Namespace("torch") public static native @ByVal @Name("zeros_like") Tensor torch_zeros_like(@Const @ByRef Tensor self, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format); +@Namespace("torch") public static native @ByVal @Name("zeros_like") Tensor torch_zeros_like(@Const @ByRef Tensor self, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format); @Namespace("torch") public static native @ByVal @Name("zeros_like") Tensor torch_zeros_like(@Const @ByRef Tensor self); +@Namespace("torch") public static native @ByVal Tensor _sparse_compressed_tensor_with_dims(@Cast("int64_t") long nnz, @Cast("int64_t") long dense_dim, @ByVal LongArrayRef size, @ByVal LongArrayRef blocksize, ScalarType index_dtype, @ByVal TensorOptions options); +@Namespace("torch") public static native @ByVal Tensor _sparse_compressed_tensor_with_dims(@Cast("int64_t") long nnz, @Cast("int64_t") long dense_dim, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] blocksize, ScalarType index_dtype, @ByVal TensorOptions options); @Namespace("torch") public static native @ByVal @Name("sparse_compressed_tensor") Tensor torch_sparse_compressed_tensor(@Const @ByRef Tensor compressed_indices, @Const @ByRef Tensor plain_indices, @Const @ByRef Tensor values, @ByVal LongArrayRef size, @ByVal TensorOptions options); @Namespace("torch") public static native @ByVal @Name("sparse_compressed_tensor") Tensor torch_sparse_compressed_tensor(@Const @ByRef Tensor compressed_indices, @Const @ByRef Tensor plain_indices, @Const @ByRef Tensor values, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal TensorOptions options); @Namespace("torch") public static native @ByVal @Name("sparse_csr_tensor") Tensor torch_sparse_csr_tensor(@Const @ByRef Tensor crow_indices, @Const @ByRef Tensor col_indices, @Const @ByRef Tensor values, @ByVal LongArrayRef size, @ByVal TensorOptions options); @@ -63825,35 +63877,35 @@ public class torch extends org.bytedeco.pytorch.presets.torch { @Namespace("torch") public static native @ByVal @Name("_sparse_bsc_tensor_unsafe") 
Tensor torch__sparse_bsc_tensor_unsafe(@Const @ByRef Tensor ccol_indices, @Const @ByRef Tensor row_indices, @Const @ByRef Tensor values, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... size);
 @Namespace("torch") public static native @ByVal @Name("sparse_coo_tensor") Tensor torch_sparse_coo_tensor(@ByVal LongArrayRef size, @ByVal TensorOptions options);
 @Namespace("torch") public static native @ByVal @Name("sparse_coo_tensor") Tensor torch_sparse_coo_tensor(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal TensorOptions options);
-@Namespace("torch") public static native @ByVal @Name("sparse_coo_tensor") Tensor torch_sparse_coo_tensor(@Const @ByRef Tensor indices, @Const @ByRef Tensor values, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional is_coalesced);
+@Namespace("torch") public static native @ByVal @Name("sparse_coo_tensor") Tensor torch_sparse_coo_tensor(@Const @ByRef Tensor indices, @Const @ByRef Tensor values, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") BoolOptional is_coalesced);
 @Namespace("torch") public static native @ByVal @Name("sparse_coo_tensor") Tensor torch_sparse_coo_tensor(@Const @ByRef Tensor indices, @Const @ByRef Tensor values);
-@Namespace("torch") public static native @ByVal @Name("sparse_coo_tensor") Tensor torch_sparse_coo_tensor(@Const @ByRef Tensor indices, @Const @ByRef Tensor values, @ByVal LongArrayRef size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional is_coalesced);
+@Namespace("torch") public static native @ByVal @Name("sparse_coo_tensor") Tensor torch_sparse_coo_tensor(@Const @ByRef Tensor indices, @Const @ByRef Tensor values, @ByVal LongArrayRef size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") BoolOptional is_coalesced);
 @Namespace("torch") public static native @ByVal @Name("sparse_coo_tensor") Tensor torch_sparse_coo_tensor(@Const @ByRef Tensor indices, @Const @ByRef Tensor values, @ByVal LongArrayRef size);
-@Namespace("torch") public static native @ByVal @Name("sparse_coo_tensor") Tensor torch_sparse_coo_tensor(@Const @ByRef Tensor indices, @Const @ByRef Tensor values, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional is_coalesced);
+@Namespace("torch") public static native @ByVal @Name("sparse_coo_tensor") Tensor torch_sparse_coo_tensor(@Const @ByRef Tensor indices, @Const @ByRef Tensor values, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") BoolOptional is_coalesced);
 @Namespace("torch") public static native @ByVal @Name("sparse_coo_tensor") Tensor torch_sparse_coo_tensor(@Const @ByRef Tensor indices, @Const @ByRef Tensor values, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... size);
-@Namespace("torch") public static native @ByVal @Name("_sparse_coo_tensor_unsafe") Tensor torch__sparse_coo_tensor_unsafe(@Const @ByRef Tensor indices, @Const @ByRef Tensor values, @ByVal LongArrayRef size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional is_coalesced);
+@Namespace("torch") public static native @ByVal @Name("_sparse_coo_tensor_unsafe") Tensor torch__sparse_coo_tensor_unsafe(@Const @ByRef Tensor indices, @Const @ByRef Tensor values, @ByVal LongArrayRef size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") BoolOptional is_coalesced);
 @Namespace("torch") public static native @ByVal @Name("_sparse_coo_tensor_unsafe") Tensor torch__sparse_coo_tensor_unsafe(@Const @ByRef Tensor indices, @Const @ByRef Tensor values, @ByVal LongArrayRef size);
-@Namespace("torch") public static native @ByVal @Name("_sparse_coo_tensor_unsafe") Tensor torch__sparse_coo_tensor_unsafe(@Const @ByRef Tensor indices, @Const @ByRef Tensor values, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional is_coalesced);
+@Namespace("torch") public static native @ByVal @Name("_sparse_coo_tensor_unsafe") Tensor torch__sparse_coo_tensor_unsafe(@Const @ByRef Tensor indices, @Const @ByRef Tensor values, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") BoolOptional is_coalesced);
 @Namespace("torch") public static native @ByVal @Name("_sparse_coo_tensor_unsafe") Tensor torch__sparse_coo_tensor_unsafe(@Const @ByRef Tensor indices, @Const @ByRef Tensor values, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... size);
-@Namespace("torch") public static native @ByVal Tensor _sparse_coo_tensor_unsafe_symint(@Const @ByRef Tensor indices, @Const @ByRef Tensor values, @ByVal SymIntArrayRef size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional is_coalesced);
+@Namespace("torch") public static native @ByVal Tensor _sparse_coo_tensor_unsafe_symint(@Const @ByRef Tensor indices, @Const @ByRef Tensor values, @ByVal SymIntArrayRef size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") BoolOptional is_coalesced);
 @Namespace("torch") public static native @ByVal Tensor _sparse_coo_tensor_unsafe_symint(@Const @ByRef Tensor indices, @Const @ByRef Tensor values, @ByVal SymIntArrayRef size);
 @Namespace("torch") public static native @ByVal @Name("_sparse_coo_tensor_with_dims") Tensor torch__sparse_coo_tensor_with_dims(@Cast("int64_t") long sparse_dim, @Cast("int64_t") long dense_dim, @ByVal LongArrayRef size, @ByVal TensorOptions options);
 @Namespace("torch") public static native @ByVal @Name("_sparse_coo_tensor_with_dims") Tensor torch__sparse_coo_tensor_with_dims(@Cast("int64_t") long sparse_dim, @Cast("int64_t") long dense_dim, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal TensorOptions options);
-@Namespace("torch") public static native @ByVal @Name("_sparse_coo_tensor_with_dims_and_tensors") Tensor torch__sparse_coo_tensor_with_dims_and_tensors(@Cast("int64_t") long sparse_dim, @Cast("int64_t") long dense_dim, @ByVal LongArrayRef size, @Const @ByRef Tensor indices, @Const @ByRef Tensor values, @ByVal TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional is_coalesced);
+@Namespace("torch") public static native @ByVal @Name("_sparse_coo_tensor_with_dims_and_tensors") Tensor torch__sparse_coo_tensor_with_dims_and_tensors(@Cast("int64_t") long sparse_dim, @Cast("int64_t") long dense_dim, @ByVal LongArrayRef size, @Const @ByRef Tensor indices, @Const @ByRef Tensor values, @ByVal TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") BoolOptional is_coalesced);
 @Namespace("torch") public static native @ByVal @Name("_sparse_coo_tensor_with_dims_and_tensors") Tensor torch__sparse_coo_tensor_with_dims_and_tensors(@Cast("int64_t") long sparse_dim, @Cast("int64_t") long dense_dim, @ByVal LongArrayRef size, @Const @ByRef Tensor indices, @Const @ByRef Tensor values, @ByVal TensorOptions options);
-@Namespace("torch") public static native @ByVal @Name("_sparse_coo_tensor_with_dims_and_tensors") Tensor torch__sparse_coo_tensor_with_dims_and_tensors(@Cast("int64_t") long sparse_dim, @Cast("int64_t") long dense_dim, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @Const @ByRef Tensor indices, @Const @ByRef Tensor values, @ByVal TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional is_coalesced);
+@Namespace("torch") public static native @ByVal @Name("_sparse_coo_tensor_with_dims_and_tensors") Tensor torch__sparse_coo_tensor_with_dims_and_tensors(@Cast("int64_t") long sparse_dim, @Cast("int64_t") long dense_dim, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @Const @ByRef Tensor indices, @Const @ByRef Tensor values, @ByVal TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") BoolOptional is_coalesced);
 @Namespace("torch") public static native @ByVal @Name("_sparse_coo_tensor_with_dims_and_tensors") Tensor torch__sparse_coo_tensor_with_dims_and_tensors(@Cast("int64_t") long sparse_dim, @Cast("int64_t") long dense_dim, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @Const @ByRef Tensor indices, @Const @ByRef Tensor values, @ByVal TensorOptions options);
-@Namespace("torch") public static native @ByVal Tensor _sparse_coo_tensor_with_dims_and_tensors_symint(@Cast("int64_t") long sparse_dim, @Cast("int64_t") long dense_dim, @ByVal SymIntArrayRef size, @Const @ByRef Tensor indices, @Const @ByRef Tensor values, @ByVal TensorOptions options, @ByVal(nullValue = "c10::optional(c10::nullopt)") BoolOptional is_coalesced);
+@Namespace("torch") public static native @ByVal Tensor _sparse_coo_tensor_with_dims_and_tensors_symint(@Cast("int64_t") long sparse_dim, @Cast("int64_t") long dense_dim, @ByVal SymIntArrayRef size, @Const @ByRef Tensor indices, @Const @ByRef Tensor values, @ByVal TensorOptions options, @ByVal(nullValue = "std::optional(::std::nullopt)") BoolOptional is_coalesced);
 @Namespace("torch") public static native @ByVal Tensor _sparse_coo_tensor_with_dims_and_tensors_symint(@Cast("int64_t") long sparse_dim, @Cast("int64_t") long dense_dim, @ByVal SymIntArrayRef size, @Const @ByRef Tensor indices, @Const @ByRef Tensor values, @ByVal TensorOptions options);
-@Namespace("torch") public static native @ByVal @Name("_to_copy") Tensor torch__to_copy(@Const @ByRef Tensor self, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @Cast("bool") boolean non_blocking/*=false*/, @ByVal(nullValue = "c10::optional(c10::nullopt)") MemoryFormatOptional memory_format);
+@Namespace("torch") public static native @ByVal @Name("_to_copy") Tensor torch__to_copy(@Const @ByRef Tensor self, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options, @Cast("bool") boolean non_blocking/*=false*/, @ByVal(nullValue = "std::optional(::std::nullopt)") MemoryFormatOptional memory_format);
 @Namespace("torch") public static native @ByVal @Name("_to_copy") Tensor torch__to_copy(@Const @ByRef Tensor self);
 @Namespace("torch") public static native @ByVal @Name("tril_indices") Tensor torch_tril_indices(@Cast("int64_t") long row, @Cast("int64_t") long col, @Cast("int64_t") long offset/*=0*/, @ByVal(nullValue = "at::TensorOptions(at::kLong)") TensorOptions options);
 @Namespace("torch") public static native @ByVal @Name("tril_indices") Tensor torch_tril_indices(@Cast("int64_t") long row, @Cast("int64_t") long col);
 @Namespace("torch") public static native @ByVal @Name("triu_indices") Tensor torch_triu_indices(@Cast("int64_t") long row, @Cast("int64_t") long col, @Cast("int64_t") long offset/*=0*/, @ByVal(nullValue = "at::TensorOptions(at::kLong)") TensorOptions options);
 @Namespace("torch") public static native @ByVal @Name("triu_indices") Tensor torch_triu_indices(@Cast("int64_t") long row, @Cast("int64_t") long col);
-@Namespace("torch") public static native @ByVal @Name("normal") Tensor torch_normal(double mean, double std, @ByVal LongArrayRef size, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options);
+@Namespace("torch") public static native @ByVal @Name("normal") Tensor torch_normal(double mean, double std, @ByVal LongArrayRef size, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options);
 @Namespace("torch") public static native @ByVal @Name("normal") Tensor torch_normal(double mean, double std, @ByVal LongArrayRef size);
-@Namespace("torch") public static native @ByVal @Name("normal") Tensor torch_normal(double mean, double std, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal(nullValue = "c10::optional(c10::nullopt)") GeneratorOptional generator, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options);
+@Namespace("torch") public static native @ByVal @Name("normal") Tensor torch_normal(double mean, double std, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] size, @ByVal(nullValue = "std::optional(::std::nullopt)") GeneratorOptional generator, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options);
 @Namespace("torch") public static native @ByVal @Name("normal") Tensor torch_normal(double mean, double std, @ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... size);
 @Namespace("torch") public static native @ByVal @Name("fft_fftfreq") Tensor torch_fft_fftfreq(@Cast("int64_t") long n, double d/*=1.0*/, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options);
 @Namespace("torch") public static native @ByVal @Name("fft_fftfreq") Tensor torch_fft_fftfreq(@Cast("int64_t") long n);
@@ -63872,9 +63924,22 @@ public class torch extends org.bytedeco.pytorch.presets.torch {
 // #include
 // #include
 
-
-@Namespace("torch::jit") public static native @ByVal FunctionSchema parseSchema(@StdString BytePointer schema);
-@Namespace("torch::jit") public static native @ByVal FunctionSchema parseSchema(@StdString String schema);
+// allow_typevars: If true, we assume that lowercase types that we don't
+// understand are type variables. This is only needed for TorchScript (and not
+// not needed for custom ops).
+// If false, we disallow typevars, except in certain cases for BC reason (i.e.
+// your op is in the aten or prim namespace).
+
+@Namespace("torch::jit") public static native @ByVal FunctionSchema parseSchema(
+    @StdString BytePointer schema,
+    @Cast("bool") boolean allow_typevars/*=true*/);
+@Namespace("torch::jit") public static native @ByVal FunctionSchema parseSchema(
+    @StdString BytePointer schema);
+@Namespace("torch::jit") public static native @ByVal FunctionSchema parseSchema(
+    @StdString String schema,
+    @Cast("bool") boolean allow_typevars/*=true*/);
+@Namespace("torch::jit") public static native @ByVal FunctionSchema parseSchema(
+    @StdString String schema);
 
 @Namespace("torch::jit") public static native @ByVal OperatorName parseName(@StdString BytePointer name);
 @Namespace("torch::jit") public static native @ByVal OperatorName parseName(@StdString String name);
@@ -64781,8 +64846,8 @@ public class torch extends org.bytedeco.pytorch.presets.torch {
 @Namespace("torch::jit") public static native Value insertConstant(
     @ByRef Graph g,
     @Const @ByRef IValue val,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") SourceRangeOptional loc,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") @Cast("c10::optional*") ScopeOptional scope);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") SourceRangeOptional loc,
+    @ByVal(nullValue = "std::optional(c10::nullopt)") @Cast("std::optional*") ScopeOptional scope);
 @Namespace("torch::jit") public static native Value insertConstant(
     @ByRef Graph g,
     @Const @ByRef IValue val);
@@ -64796,8 +64861,8 @@ public class torch extends org.bytedeco.pytorch.presets.torch {
 @Namespace("torch::jit") public static native @ByVal ValueOptional tryInsertConstant(
     @ByRef Graph g,
     @Const @ByRef IValue val,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") SourceRangeOptional loc,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") @Cast("c10::optional*") ScopeOptional scope);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") SourceRangeOptional loc,
+    @ByVal(nullValue = "std::optional(c10::nullopt)") @Cast("std::optional*") ScopeOptional scope);
 @Namespace("torch::jit") public static native @ByVal ValueOptional tryInsertConstant(
     @ByRef Graph g,
     @Const @ByRef IValue val);
@@ -64926,8 +64991,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch {
 // Targeting ../SchemaInfo.java
 
 
- // namespace utils
- // namespace torch
+ // namespace torch::utils
 
 
 // Parsed from ATen/core/enum_type.h
@@ -65630,7 +65694,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch {
 // details.
 @Namespace("torch::jit") public static native @ByVal JitModule freeze(
     @Const @ByRef JitModule module,
-    @Const @ByRef(nullValue = "c10::optional >(c10::nullopt)") StringVectorOptional preserved_attrs,
+    @Const @ByRef(nullValue = "std::optional >(c10::nullopt)") StringVectorOptional preserved_attrs,
     @Cast("bool") boolean optimize_numerics/*=true*/);
 @Namespace("torch::jit") public static native @ByVal JitModule freeze(
     @Const @ByRef JitModule module);
@@ -65945,7 +66009,7 @@ The list of (type, depth) pairs controls the type of specializations and the num
 * torch::save(tensor_vec, stream);
 * \endrst */
-@Namespace("torch") public static native @Cast("char*") @StdVector BytePointer pickle_save(@Const @ByRef IValue ivalue);
+@Namespace("torch") public static native @ByVal @Cast("std::vector*") ByteVector pickle_save(@Const @ByRef IValue ivalue);
 
 ///
 ///
@@ -65953,9 +66017,7 @@ The list of (type, depth) pairs controls the type of specializations and the num
 ///
 ///
 ///
-@Namespace("torch") public static native @ByVal IValue pickle_load(@Cast("char*") @StdVector BytePointer data);
-@Namespace("torch") public static native @ByVal IValue pickle_load(@Cast("char*") @StdVector ByteBuffer data);
-@Namespace("torch") public static native @ByVal IValue pickle_load(@Cast("char*") @StdVector byte[] data);
+@Namespace("torch") public static native @ByVal IValue pickle_load(@Cast("const std::vector*") @ByRef ByteVector data);
 
 /** Deserializes the given {@code value}.
  * There must be an overload of {@code operator>>} between {@code serialize::InputArchive}
@@ -66508,7 +66570,7 @@ The list of (type, depth) pairs controls the type of specializations and the num
 // #include
 
 /** Computes the 1 dimensional fast Fourier transform over a given dimension.
- *  See https://pytorch.org/docs/master/fft.html#torch.fft.fft.
+ *  See https://pytorch.org/docs/main/fft.html#torch.fft.fft.
  * 
 *  Example:
 *  

{@code
@@ -66519,14 +66581,14 @@ The list of (type, depth) pairs controls the type of specializations and the num
 ///
 @Namespace("torch::fft") public static native @ByVal Tensor fft(
     @Const @ByRef Tensor self,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional n,
+    @ByVal(nullValue = "std::optional(c10::nullopt)") SymIntOptional n,
     @Cast("int64_t") long dim/*=-1*/,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 @Namespace("torch::fft") public static native @ByVal Tensor fft(
     @Const @ByRef Tensor self);
 
 /** Computes the 1 dimensional inverse Fourier transform over a given dimension.
- *  See https://pytorch.org/docs/master/fft.html#torch.fft.ifft.
+ *  See https://pytorch.org/docs/main/fft.html#torch.fft.ifft.
  * 
  *  Example:
  *  
{@code
@@ -66537,14 +66599,14 @@ The list of (type, depth) pairs controls the type of specializations and the num
 ///
 @Namespace("torch::fft") public static native @ByVal Tensor ifft(
     @Const @ByRef Tensor self,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional n,
+    @ByVal(nullValue = "std::optional(c10::nullopt)") SymIntOptional n,
     @Cast("int64_t") long dim/*=-1*/,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 @Namespace("torch::fft") public static native @ByVal Tensor ifft(
     @Const @ByRef Tensor self);
 
 /** Computes the 2-dimensional fast Fourier transform over the given dimensions.
- *  See https://pytorch.org/docs/master/fft.html#torch.fft.fft2.
+ *  See https://pytorch.org/docs/main/fft.html#torch.fft.fft2.
  * 
  *  Example:
  *  
{@code
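
For orientation, a minimal Java sketch of driving the 1-D transforms bound above through these presets, assuming the pytorch-platform 2.4.0-1.5.11-SNAPSHOT artifacts are on the classpath; the class and variable names are illustrative, and the optional n/dim/norm arguments are left at their defaults.

import org.bytedeco.pytorch.Tensor;
import static org.bytedeco.pytorch.global.torch.*;

public class FftRoundTrip {
    public static void main(String[] args) {
        Tensor signal = randn(128);         // real-valued 1-D input
        Tensor spectrum = fft(signal);      // torch::fft::fft with default n, dim and norm
        Tensor recovered = ifft(spectrum);  // inverse transform of the spectrum
    }
}
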
@@ -66555,29 +66617,29 @@ The list of (type, depth) pairs controls the type of specializations and the num
 ///
 @Namespace("torch::fft") public static native @ByVal Tensor fft2(
     @Const @ByRef Tensor self,
-    @ByVal(nullValue = "c10::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s,
+    @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s,
     @ByVal(nullValue = "torch::IntArrayRef({-2, -1})") LongArrayRef dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 @Namespace("torch::fft") public static native @ByVal Tensor fft2(
     @Const @ByRef Tensor self);
 @Namespace("torch::fft") public static native @ByVal Tensor fft2(
     @Const @ByRef Tensor self,
-    @ByVal(nullValue = "c10::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s,
+    @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s,
     @ByVal(nullValue = "torch::IntArrayRef({-2, -1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 @Namespace("torch::fft") public static native @ByVal Tensor fft2(
     @Const @ByRef Tensor self,
-    @ByVal(nullValue = "c10::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s,
+    @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s,
     @ByVal(nullValue = "torch::IntArrayRef({-2, -1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 @Namespace("torch::fft") public static native @ByVal Tensor fft2(
     @Const @ByRef Tensor self,
-    @ByVal(nullValue = "c10::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s,
+    @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s,
     @ByVal(nullValue = "torch::IntArrayRef({-2, -1})") LongArrayRef dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 
 /** Computes the inverse of torch.fft.fft2
- *  See https://pytorch.org/docs/master/fft.html#torch.fft.ifft2.
+ *  See https://pytorch.org/docs/main/fft.html#torch.fft.ifft2.
  * 
  *  Example:
  *  
{@code
@@ -66590,27 +66652,27 @@ The list of (type, depth) pairs controls the type of specializations and the num
     @Const @ByRef Tensor self,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s,
     @ByVal(nullValue = "torch::IntArrayRef({-2, -1})") LongArrayRef dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 @Namespace("torch::fft") public static native @ByVal Tensor ifft2(
     @Const @ByRef Tensor self);
 @Namespace("torch::fft") public static native @ByVal Tensor ifft2(
     @Const @ByRef Tensor self,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s,
     @ByVal(nullValue = "torch::IntArrayRef({-2, -1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 @Namespace("torch::fft") public static native @ByVal Tensor ifft2(
     @Const @ByRef Tensor self,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s,
     @ByVal(nullValue = "torch::IntArrayRef({-2, -1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 @Namespace("torch::fft") public static native @ByVal Tensor ifft2(
     @Const @ByRef Tensor self,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s,
     @ByVal(nullValue = "torch::IntArrayRef({-2, -1})") LongArrayRef dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 
 /** Computes the N dimensional fast Fourier transform over given dimensions.
- *  See https://pytorch.org/docs/master/fft.html#torch.fft.fftn.
+ *  See https://pytorch.org/docs/main/fft.html#torch.fft.fftn.
  * 
  *  Example:
  *  
{@code
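
Likewise, a hedged sketch for the 2-D overloads above that take only the input tensor; the s/dim/norm variants require the generated optional wrapper types and are omitted here.

import org.bytedeco.pytorch.Tensor;
import static org.bytedeco.pytorch.global.torch.*;

public class Fft2Example {
    public static void main(String[] args) {
        Tensor image = randn(8, 8);  // 2-D input
        Tensor freq = fft2(image);   // torch::fft::fft2 over the last two dimensions
        Tensor back = ifft2(freq);   // inverse 2-D transform
    }
}
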
@@ -66623,17 +66685,17 @@ The list of (type, depth) pairs controls the type of specializations and the num
     @Const @ByRef Tensor self,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 @Namespace("torch::fft") public static native @ByVal Tensor fftn(
     @Const @ByRef Tensor self);
 @Namespace("torch::fft") public static native @ByVal Tensor fftn(
     @Const @ByRef Tensor self,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 
 /** Computes the N dimensional fast Fourier transform over given dimensions.
- *  See https://pytorch.org/docs/master/fft.html#torch.fft.ifftn.
+ *  See https://pytorch.org/docs/main/fft.html#torch.fft.ifftn.
  * 
  *  Example:
  *  
{@code
@@ -66646,17 +66708,17 @@ The list of (type, depth) pairs controls the type of specializations and the num
     @Const @ByRef Tensor self,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 @Namespace("torch::fft") public static native @ByVal Tensor ifftn(
     @Const @ByRef Tensor self);
 @Namespace("torch::fft") public static native @ByVal Tensor ifftn(
     @Const @ByRef Tensor self,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 
 /** Computes the 1 dimensional FFT of real input with onesided Hermitian output.
- *  See https://pytorch.org/docs/master/fft.html#torch.fft.rfft.
+ *  See https://pytorch.org/docs/main/fft.html#torch.fft.rfft.
  * 
  *  Example:
  *  
{@code
@@ -66669,16 +66731,16 @@ The list of (type, depth) pairs controls the type of specializations and the num
 ///
 @Namespace("torch::fft") public static native @ByVal Tensor rfft(
     @Const @ByRef Tensor self,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional n,
+    @ByVal(nullValue = "std::optional(c10::nullopt)") SymIntOptional n,
     @Cast("int64_t") long dim/*=-1*/,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 @Namespace("torch::fft") public static native @ByVal Tensor rfft(
     @Const @ByRef Tensor self);
 
 /** Computes the inverse of torch.fft.rfft
  * 
  *  The input is a onesided Hermitian Fourier domain signal, with real-valued
- *  output. See https://pytorch.org/docs/master/fft.html#torch.fft.irfft
+ *  output. See https://pytorch.org/docs/main/fft.html#torch.fft.irfft
  * 
  *  Example:
  *  
{@code
@@ -66690,14 +66752,14 @@ The list of (type, depth) pairs controls the type of specializations and the num
 ///
 @Namespace("torch::fft") public static native @ByVal Tensor irfft(
     @Const @ByRef Tensor self,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional n,
+    @ByVal(nullValue = "std::optional(c10::nullopt)") SymIntOptional n,
     @Cast("int64_t") long dim/*=-1*/,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 @Namespace("torch::fft") public static native @ByVal Tensor irfft(
     @Const @ByRef Tensor self);
 
 /** Computes the 2-dimensional FFT of real input. Returns a onesided Hermitian
- *  output. See https://pytorch.org/docs/master/fft.html#torch.fft.rfft2
+ *  output. See https://pytorch.org/docs/main/fft.html#torch.fft.rfft2
  * 
  *  Example:
  *  
{@code
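
A corresponding sketch for the real-input transforms declared above; with the default length, irfft of an even-length signal's rfft recovers the original size, while odd lengths would need an explicit n (not shown).

import org.bytedeco.pytorch.Tensor;
import static org.bytedeco.pytorch.global.torch.*;

public class RfftExample {
    public static void main(String[] args) {
        Tensor signal = randn(64);          // real input of even length
        Tensor onesided = rfft(signal);     // onesided Hermitian spectrum (length 33 here)
        Tensor restored = irfft(onesided);  // back to a real-valued length-64 signal
    }
}
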
@@ -66710,27 +66772,27 @@ The list of (type, depth) pairs controls the type of specializations and the num
     @Const @ByRef Tensor self,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s,
     @ByVal(nullValue = "torch::IntArrayRef({-2, -1})") LongArrayRef dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 @Namespace("torch::fft") public static native @ByVal Tensor rfft2(
     @Const @ByRef Tensor self);
 @Namespace("torch::fft") public static native @ByVal Tensor rfft2(
     @Const @ByRef Tensor self,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s,
     @ByVal(nullValue = "torch::IntArrayRef({-2, -1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 @Namespace("torch::fft") public static native @ByVal Tensor rfft2(
     @Const @ByRef Tensor self,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s,
     @ByVal(nullValue = "torch::IntArrayRef({-2, -1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 @Namespace("torch::fft") public static native @ByVal Tensor rfft2(
     @Const @ByRef Tensor self,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s,
     @ByVal(nullValue = "torch::IntArrayRef({-2, -1})") LongArrayRef dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 
 /** Computes the inverse of torch.fft.rfft2.
- *  See https://pytorch.org/docs/master/fft.html#torch.fft.irfft2.
+ *  See https://pytorch.org/docs/main/fft.html#torch.fft.irfft2.
  * 
  *  Example:
  *  
{@code
@@ -66743,27 +66805,27 @@ The list of (type, depth) pairs controls the type of specializations and the num
     @Const @ByRef Tensor self,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s,
     @ByVal(nullValue = "torch::IntArrayRef({-2, -1})") LongArrayRef dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 @Namespace("torch::fft") public static native @ByVal Tensor irfft2(
     @Const @ByRef Tensor self);
 @Namespace("torch::fft") public static native @ByVal Tensor irfft2(
     @Const @ByRef Tensor self,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s,
     @ByVal(nullValue = "torch::IntArrayRef({-2, -1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 @Namespace("torch::fft") public static native @ByVal Tensor irfft2(
     @Const @ByRef Tensor self,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s,
     @ByVal(nullValue = "torch::IntArrayRef({-2, -1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 @Namespace("torch::fft") public static native @ByVal Tensor irfft2(
     @Const @ByRef Tensor self,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s,
     @ByVal(nullValue = "torch::IntArrayRef({-2, -1})") LongArrayRef dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 
 /** Computes the N dimensional FFT of real input with onesided Hermitian output.
- *  See https://pytorch.org/docs/master/fft.html#torch.fft.rfftn
+ *  See https://pytorch.org/docs/main/fft.html#torch.fft.rfftn
  * 
  *  Example:
  *  
{@code
@@ -66776,17 +66838,17 @@ The list of (type, depth) pairs controls the type of specializations and the num
     @Const @ByRef Tensor self,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 @Namespace("torch::fft") public static native @ByVal Tensor rfftn(
     @Const @ByRef Tensor self);
 @Namespace("torch::fft") public static native @ByVal Tensor rfftn(
     @Const @ByRef Tensor self,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 
 /** Computes the inverse of torch.fft.rfftn.
- *  See https://pytorch.org/docs/master/fft.html#torch.fft.irfftn.
+ *  See https://pytorch.org/docs/main/fft.html#torch.fft.irfftn.
  * 
  *  Example:
  *  
{@code
@@ -66800,20 +66862,20 @@ The list of (type, depth) pairs controls the type of specializations and the num
     @Const @ByRef Tensor self,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 @Namespace("torch::fft") public static native @ByVal Tensor irfftn(
     @Const @ByRef Tensor self);
 @Namespace("torch::fft") public static native @ByVal Tensor irfftn(
     @Const @ByRef Tensor self,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 
 /** Computes the 1 dimensional FFT of a onesided Hermitian signal
  * 
  *  The input represents a Hermitian symmetric time domain signal. The returned
  *  Fourier domain representation of such a signal is a real-valued. See
- *  https://pytorch.org/docs/master/fft.html#torch.fft.hfft
+ *  https://pytorch.org/docs/main/fft.html#torch.fft.hfft
  * 
  *  Example:
  *  
{@code
@@ -66826,16 +66888,16 @@ The list of (type, depth) pairs controls the type of specializations and the num
 ///
 @Namespace("torch::fft") public static native @ByVal Tensor hfft(
     @Const @ByRef Tensor self,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional n,
+    @ByVal(nullValue = "std::optional(c10::nullopt)") SymIntOptional n,
     @Cast("int64_t") long dim/*=-1*/,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 @Namespace("torch::fft") public static native @ByVal Tensor hfft(
     @Const @ByRef Tensor self);
 
 /** Computes the inverse FFT of a real-valued Fourier domain signal.
  * 
  *  The output is a onesided representation of the Hermitian symmetric time
- *  domain signal. See https://pytorch.org/docs/master/fft.html#torch.fft.ihfft.
+ *  domain signal. See https://pytorch.org/docs/main/fft.html#torch.fft.ihfft.
  * 
  *  Example:
  *  
{@code
@@ -66848,16 +66910,16 @@ The list of (type, depth) pairs controls the type of specializations and the num
 ///
 @Namespace("torch::fft") public static native @ByVal Tensor ihfft(
     @Const @ByRef Tensor self,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") SymIntOptional n,
+    @ByVal(nullValue = "std::optional(c10::nullopt)") SymIntOptional n,
     @Cast("int64_t") long dim/*=-1*/,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 @Namespace("torch::fft") public static native @ByVal Tensor ihfft(
     @Const @ByRef Tensor self);
 
 /** Computes the 2-dimensional FFT of a Hermitian symmetric input signal.
  * 
  *  The input is a onesided representation of the Hermitian symmetric time
- *  domain signal. See https://pytorch.org/docs/master/fft.html#torch.fft.hfft2.
+ *  domain signal. See https://pytorch.org/docs/main/fft.html#torch.fft.hfft2.
  * 
  *  Example:
  *  
{@code
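
And a sketch for the Hermitian pair above: ihfft produces the onesided time-domain representation, and hfft maps such a representation back to a real-valued spectrum.

import org.bytedeco.pytorch.Tensor;
import static org.bytedeco.pytorch.global.torch.*;

public class HfftExample {
    public static void main(String[] args) {
        Tensor realSpectrum = randn(32);        // treated as a real Fourier-domain signal
        Tensor onesided = ihfft(realSpectrum);  // onesided Hermitian time-domain representation
        Tensor spectrum = hfft(onesided);       // real-valued spectrum of that Hermitian signal
    }
}
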
@@ -66872,30 +66934,30 @@ The list of (type, depth) pairs controls the type of specializations and the num
     @Const @ByRef Tensor self,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s,
     @ByVal(nullValue = "torch::IntArrayRef({-2, -1})") LongArrayRef dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 @Namespace("torch::fft") public static native @ByVal Tensor hfft2(
     @Const @ByRef Tensor self);
 @Namespace("torch::fft") public static native @ByVal Tensor hfft2(
     @Const @ByRef Tensor self,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s,
     @ByVal(nullValue = "torch::IntArrayRef({-2, -1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 @Namespace("torch::fft") public static native @ByVal Tensor hfft2(
     @Const @ByRef Tensor self,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s,
     @ByVal(nullValue = "torch::IntArrayRef({-2, -1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 @Namespace("torch::fft") public static native @ByVal Tensor hfft2(
     @Const @ByRef Tensor self,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s,
     @ByVal(nullValue = "torch::IntArrayRef({-2, -1})") LongArrayRef dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 
 /** Computes the 2-dimensional IFFT of a real input signal.
  * 
  *  The output is a onesided representation of the Hermitian symmetric time
  *  domain signal. See
- *  https://pytorch.org/docs/master/fft.html#torch.fft.ihfft2.
+ *  https://pytorch.org/docs/main/fft.html#torch.fft.ihfft2.
  * 
  *  Example:
  *  
{@code
@@ -66910,29 +66972,29 @@ The list of (type, depth) pairs controls the type of specializations and the num
     @Const @ByRef Tensor self,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s,
     @ByVal(nullValue = "torch::IntArrayRef({-2, -1})") LongArrayRef dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 @Namespace("torch::fft") public static native @ByVal Tensor ihfft2(
     @Const @ByRef Tensor self);
 @Namespace("torch::fft") public static native @ByVal Tensor ihfft2(
     @Const @ByRef Tensor self,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s,
     @ByVal(nullValue = "torch::IntArrayRef({-2, -1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 @Namespace("torch::fft") public static native @ByVal Tensor ihfft2(
     @Const @ByRef Tensor self,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s,
     @ByVal(nullValue = "torch::IntArrayRef({-2, -1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 @Namespace("torch::fft") public static native @ByVal Tensor ihfft2(
     @Const @ByRef Tensor self,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s,
     @ByVal(nullValue = "torch::IntArrayRef({-2, -1})") LongArrayRef dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 
 /** Computes the N-dimensional FFT of a Hermitian symmetric input signal.
  * 
  *  The input is a onesided representation of the Hermitian symmetric time
- *  domain signal. See https://pytorch.org/docs/master/fft.html#torch.fft.hfftn.
+ *  domain signal. See https://pytorch.org/docs/main/fft.html#torch.fft.hfftn.
  * 
  *  Example:
  *  
{@code
@@ -66947,30 +67009,30 @@ The list of (type, depth) pairs controls the type of specializations and the num
     @Const @ByRef Tensor self,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s,
     @ByVal(nullValue = "torch::IntArrayRef({-2, -1})") LongArrayRef dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 @Namespace("torch::fft") public static native @ByVal Tensor hfftn(
     @Const @ByRef Tensor self);
 @Namespace("torch::fft") public static native @ByVal Tensor hfftn(
     @Const @ByRef Tensor self,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s,
     @ByVal(nullValue = "torch::IntArrayRef({-2, -1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 @Namespace("torch::fft") public static native @ByVal Tensor hfftn(
     @Const @ByRef Tensor self,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s,
     @ByVal(nullValue = "torch::IntArrayRef({-2, -1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 @Namespace("torch::fft") public static native @ByVal Tensor hfftn(
     @Const @ByRef Tensor self,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s,
     @ByVal(nullValue = "torch::IntArrayRef({-2, -1})") LongArrayRef dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 
 /** Computes the N-dimensional IFFT of a real input signal.
  * 
  *  The output is a onesided representation of the Hermitian symmetric time
  *  domain signal. See
- *  https://pytorch.org/docs/master/fft.html#torch.fft.ihfftn.
+ *  https://pytorch.org/docs/main/fft.html#torch.fft.ihfftn.
  * 
  *  Example:
  *  
{@code
@@ -66985,29 +67047,29 @@ The list of (type, depth) pairs controls the type of specializations and the num
     @Const @ByRef Tensor self,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s,
     @ByVal(nullValue = "torch::IntArrayRef({-2, -1})") LongArrayRef dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 @Namespace("torch::fft") public static native @ByVal Tensor ihfftn(
     @Const @ByRef Tensor self);
 @Namespace("torch::fft") public static native @ByVal Tensor ihfftn(
     @Const @ByRef Tensor self,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s,
     @ByVal(nullValue = "torch::IntArrayRef({-2, -1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 @Namespace("torch::fft") public static native @ByVal Tensor ihfftn(
     @Const @ByRef Tensor self,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") LongArrayRefOptional s,
     @ByVal(nullValue = "torch::IntArrayRef({-2, -1})") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 @Namespace("torch::fft") public static native @ByVal Tensor ihfftn(
     @Const @ByRef Tensor self,
     @ByVal(nullValue = "at::OptionalIntArrayRef(c10::nullopt)") @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector long[] s,
     @ByVal(nullValue = "torch::IntArrayRef({-2, -1})") LongArrayRef dim,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") StringViewOptional norm);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") StringViewOptional norm);
 
 /** Computes the discrete Fourier Transform sample frequencies for a signal of
  *  size n.
  * 
- *  See https://pytorch.org/docs/master/fft.html#torch.fft.fftfreq
+ *  See https://pytorch.org/docs/main/fft.html#torch.fft.fftfreq
  * 
  *  Example:
  *  
{@code
@@ -67025,7 +67087,7 @@ The list of (type, depth) pairs controls the type of specializations and the num
 /** Computes the sample frequencies for torch.fft.rfft with a signal of size n.
  * 
  *  Like torch.fft.rfft, only the positive frequencies are included.
- *  See https://pytorch.org/docs/master/fft.html#torch.fft.rfftfreq
+ *  See https://pytorch.org/docs/main/fft.html#torch.fft.rfftfreq
  * 
  *  Example:
  *  
{@code
@@ -67041,7 +67103,7 @@ The list of (type, depth) pairs controls the type of specializations and the num
 /** Reorders n-dimensional FFT output to have negative frequency terms first, by
  *  a torch.roll operation.
  * 
- *  See https://pytorch.org/docs/master/fft.html#torch.fft.fftshift
+ *  See https://pytorch.org/docs/main/fft.html#torch.fft.fftshift
  * 
  *  Example:
  *  
{@code
@@ -67062,7 +67124,7 @@ The list of (type, depth) pairs controls the type of specializations and the num
 
 /** Inverse of torch.fft.fftshift
  * 
- *  See https://pytorch.org/docs/master/fft.html#torch.fft.ifftshift
+ *  See https://pytorch.org/docs/main/fft.html#torch.fft.ifftshift
  * 
  *  Example:
  *  
{@code
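
The frequency helper documented above is also reachable without the torch::fft wrappers through the @Name("fft_fftfreq") binding declared earlier in this file; a small sketch, with the sample spacing left at its default of 1.0.

import org.bytedeco.pytorch.Tensor;
import static org.bytedeco.pytorch.global.torch.*;

public class FftFreqExample {
    public static void main(String[] args) {
        Tensor freqs = torch_fft_fftfreq(8);  // sample frequencies for an 8-point FFT
    }
}
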
@@ -67115,8 +67177,8 @@ The list of (type, depth) pairs controls the type of specializations and the num
  *    )JIT");
  *    IValue output = module->run_method("relu_script", a, b);
  *  \endrst */
-@Namespace("torch::jit") public static native @SharedPtr CompilationUnit compile(@StdString BytePointer source);
-@Namespace("torch::jit") public static native @SharedPtr CompilationUnit compile(@StdString String source);
+@Namespace("torch::jit") public static native @SharedPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit compile(@StdString BytePointer source);
+@Namespace("torch::jit") public static native @SharedPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit compile(@StdString String source);
 
  // namespace jit
  // namespace torch
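
With the shared_ptr adapter above, the compiled unit is returned by value on the Java side. A hedged sketch that only compiles the snippet from the documentation and leaves running it to the CompilationUnit/JIT APIs.

import org.bytedeco.pytorch.CompilationUnit;
import static org.bytedeco.pytorch.global.torch.*;

public class CompileExample {
    public static void main(String[] args) {
        CompilationUnit cu = compile(
            "def relu_script(a, b):\n" +
            "    return torch.relu(a + b)\n");  // functions defined here live on the returned unit
    }
}
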
@@ -67133,7 +67195,7 @@ The list of (type, depth) pairs controls the type of specializations and the num
 
 /** Cholesky decomposition
 /**
-/** See https://pytorch.org/docs/master/linalg.html#torch.linalg.cholesky
+/** See https://pytorch.org/docs/main/linalg.html#torch.linalg.cholesky
 /**
 /** Example:
 /** 
{@code
@@ -67149,12 +67211,12 @@ The list of (type, depth) pairs controls the type of specializations and the num
 
 /** Computes the sign and (natural) logarithm of the determinant
  * 
- *  See https://pytorch.org/docs/master/linalg.html#torch.linalg.slogdet */
+ *  See https://pytorch.org/docs/main/linalg.html#torch.linalg.slogdet */
 
 /** Computes eigenvalues and eigenvectors of non-symmetric/non-hermitian
  *  matrices
  * 
- *  See https://pytorch.org/docs/master/linalg.html#torch.linalg.eig */
+ *  See https://pytorch.org/docs/main/linalg.html#torch.linalg.eig */
 @Namespace("torch::linalg") public static native @ByVal T_TensorTensor_T eig(@Const @ByRef Tensor self);
 
 
@@ -67166,7 +67228,7 @@ The list of (type, depth) pairs controls the type of specializations and the num
 
 /** Computes eigenvalues of non-symmetric/non-hermitian matrices
  * 
- *  See https://pytorch.org/docs/master/linalg.html#torch.linalg.eigvals */
+ *  See https://pytorch.org/docs/main/linalg.html#torch.linalg.eigvals */
 @Namespace("torch::linalg") public static native @ByVal Tensor eigvals(@Const @ByRef Tensor self);
 
 
@@ -67175,7 +67237,7 @@ The list of (type, depth) pairs controls the type of specializations and the num
 
 /** Computes eigenvalues and eigenvectors
  * 
- *  See https://pytorch.org/docs/master/linalg.html#torch.linalg.eigh */
+ *  See https://pytorch.org/docs/main/linalg.html#torch.linalg.eigh */
 @Namespace("torch::linalg") public static native @ByVal T_TensorTensor_T eigh(
     @Const @ByRef Tensor self,
     @StringView BytePointer uplo);
@@ -67198,7 +67260,7 @@ The list of (type, depth) pairs controls the type of specializations and the num
 
 /** Computes eigenvalues
  * 
- *  See https://pytorch.org/docs/master/linalg.html#torch.linalg.eigvalsh */
+ *  See https://pytorch.org/docs/main/linalg.html#torch.linalg.eigvalsh */
 @Namespace("torch::linalg") public static native @ByVal Tensor eigvalsh(@Const @ByRef Tensor self, @StringView BytePointer uplo);
 @Namespace("torch::linalg") public static native @ByVal Tensor eigvalsh(@Const @ByRef Tensor self, @StringView String uplo);
 
@@ -67216,7 +67278,7 @@ The list of (type, depth) pairs controls the type of specializations and the num
 /** Computes the product of Householder matrices
  * 
  *  See
- *  https://pytorch.org/docs/master/linalg.html#torch.linalg.householder_product */
+ *  https://pytorch.org/docs/main/linalg.html#torch.linalg.householder_product */
 @Namespace("torch::linalg") public static native @ByVal Tensor householder_product(@Const @ByRef Tensor input, @Const @ByRef Tensor tau);
 
 @Namespace("torch::linalg") public static native @ByRef Tensor householder_product_out(
@@ -67234,7 +67296,7 @@ The list of (type, depth) pairs controls the type of specializations and the num
 
 /** Computes the matrix exponential
  * 
- *  See https://pytorch.org/docs/master/linalg.html#torch.linalg.matrix_exp */
+ *  See https://pytorch.org/docs/main/linalg.html#torch.linalg.matrix_exp */
 
 // C10_DEPRECATED_MESSAGE("linalg_norm is deprecated, use norm instead.")
 
@@ -67248,7 +67310,7 @@ The list of (type, depth) pairs controls the type of specializations and the num
 
 /** Computes the LU factorization with partial pivoting
  * 
- *  See https://pytorch.org/docs/master/linalg.html#torch.linalg.lu_factor */
+ *  See https://pytorch.org/docs/main/linalg.html#torch.linalg.lu_factor */
 @Namespace("torch::linalg") public static native @ByVal T_TensorTensor_T lu_factor(
     @Const @ByRef Tensor input,
     @Cast("const bool") boolean pivot/*=true*/);
@@ -67269,7 +67331,7 @@ The list of (type, depth) pairs controls the type of specializations and the num
 
 /** Computes the LU factorization with partial pivoting
  * 
- *  See https://pytorch.org/docs/master/linalg.html#torch.linalg.lu */
+ *  See https://pytorch.org/docs/main/linalg.html#torch.linalg.lu */
 @Namespace("torch::linalg") public static native @ByVal T_TensorTensorTensor_T lu(
     @Const @ByRef Tensor input,
     @Cast("const bool") boolean pivot/*=true*/);
@@ -67344,7 +67406,7 @@ The list of (type, depth) pairs controls the type of specializations and the num
     @Cast("bool") boolean keepdim,
     @ByVal ScalarTypeOptional opt_dtype);
 
-/** See https://pytorch.org/docs/master/linalg.html#torch.linalg.vector_norm */
+/** See https://pytorch.org/docs/main/linalg.html#torch.linalg.vector_norm */
 @Namespace("torch::linalg") public static native @ByVal Tensor vector_norm(
     @Const @ByRef Tensor self,
     @ByVal Scalar ord,
@@ -67373,7 +67435,7 @@ The list of (type, depth) pairs controls the type of specializations and the num
     @Cast("bool") boolean keepdim,
     @ByVal ScalarTypeOptional opt_dtype);
 
-/** See https://pytorch.org/docs/master/linalg.html#torch.linalg.matrix_norm */
+/** See https://pytorch.org/docs/main/linalg.html#torch.linalg.matrix_norm */
 @Namespace("torch::linalg") public static native @ByVal Tensor matrix_norm(
     @Const @ByRef Tensor self,
     @Const @ByRef Scalar ord,
@@ -67456,11 +67518,11 @@ The list of (type, depth) pairs controls the type of specializations and the num
     @ByVal ScalarTypeOptional dtype,
     @ByRef Tensor result);
 
-/** See https://pytorch.org/docs/master/linalg.html#torch.linalg.matrix_power */
+/** See https://pytorch.org/docs/main/linalg.html#torch.linalg.matrix_power */
 
 @Namespace("torch::linalg") public static native @ByRef Tensor matrix_power_out(@Const @ByRef Tensor self, @Cast("int64_t") long n, @ByRef Tensor result);
 
-/** See https://pytorch.org/docs/master/linalg.html#torch.linalg.matrix_rank */
+/** See https://pytorch.org/docs/main/linalg.html#torch.linalg.matrix_rank */
 @Namespace("torch::linalg") public static native @ByVal Tensor matrix_rank(@Const @ByRef Tensor input, double tol, @Cast("bool") boolean hermitian);
 
 @Namespace("torch::linalg") public static native @ByVal Tensor matrix_rank(
@@ -67506,7 +67568,7 @@ The list of (type, depth) pairs controls the type of specializations and the num
     @Const @ByRef TensorOptional rtol,
     @Cast("bool") boolean hermitian);
 
-/** See https://pytorch.org/docs/master/linalg.html#torch.linalg.multi_dot */
+/** See https://pytorch.org/docs/main/linalg.html#torch.linalg.multi_dot */
 @Namespace("torch::linalg") public static native @ByVal Tensor multi_dot(@ByVal TensorArrayRef tensors);
 @Namespace("torch::linalg") public static native @ByVal Tensor multi_dot(@ByVal TensorVector tensors);
 
@@ -67517,7 +67579,7 @@ The list of (type, depth) pairs controls the type of specializations and the num
 
 /** Computes the pseudo-inverse
  * 
- *  See https://pytorch.org/docs/master/linalg.html#torch.linalg.pinv */
+ *  See https://pytorch.org/docs/main/linalg.html#torch.linalg.pinv */
 @Namespace("torch::linalg") public static native @ByVal Tensor pinv(
     @Const @ByRef Tensor input,
     double rcond/*=1e-15*/,
@@ -67538,7 +67600,7 @@ The list of (type, depth) pairs controls the type of specializations and the num
 
 /** Computes the QR decomposition
  * 
- *  See https://pytorch.org/docs/master/linalg.html#torch.linalg.qr */
+ *  See https://pytorch.org/docs/main/linalg.html#torch.linalg.qr */
 @Namespace("torch::linalg") public static native @ByVal T_TensorTensor_T qr(
     @Const @ByRef Tensor input,
     @StringView BytePointer mode/*="reduced"*/);
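
The mode argument above is a string view, so a BytePointer can be passed directly from Java; a sketch using the "reduced" mode named by the default value.

import org.bytedeco.javacpp.BytePointer;
import org.bytedeco.pytorch.T_TensorTensor_T;
import org.bytedeco.pytorch.Tensor;
import static org.bytedeco.pytorch.global.torch.*;

public class QrExample {
    public static void main(String[] args) {
        Tensor a = randn(5, 3);
        T_TensorTensor_T qrPair = qr(a, new BytePointer("reduced"));
        Tensor q = qrPair.get0();  // 5x3 with orthonormal columns
        Tensor r = qrPair.get1();  // 3x3 upper triangular
    }
}
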
@@ -67561,7 +67623,7 @@ The list of (type, depth) pairs controls the type of specializations and the num
 
 /** Computes the LDL decomposition
  * 
- *  See https://pytorch.org/docs/master/linalg.html#torch.linalg.ldl_factor_ex */
+ *  See https://pytorch.org/docs/main/linalg.html#torch.linalg.ldl_factor_ex */
 @Namespace("torch::linalg") public static native @ByVal T_TensorTensorTensor_T ldl_factor_ex(
     @Const @ByRef Tensor input,
     @Cast("bool") boolean hermitian,
@@ -67579,7 +67641,7 @@ The list of (type, depth) pairs controls the type of specializations and the num
 
 /** Solve a system of linear equations using the LDL decomposition
  * 
- *  See https://pytorch.org/docs/master/linalg.html#torch.linalg.ldl_solve */
+ *  See https://pytorch.org/docs/main/linalg.html#torch.linalg.ldl_solve */
 @Namespace("torch::linalg") public static native @ByVal Tensor ldl_solve(
     @Const @ByRef Tensor LD,
     @Const @ByRef Tensor pivots,
@@ -67597,7 +67659,7 @@ The list of (type, depth) pairs controls the type of specializations and the num
 
 /** Solves a system linear system AX = B
  * 
- *  See https://pytorch.org/docs/master/linalg.html#torch.linalg.solve_ex */
+ *  See https://pytorch.org/docs/main/linalg.html#torch.linalg.solve_ex */
 @Namespace("torch::linalg") public static native @ByVal T_TensorTensor_T solve_ex(
     @Const @ByRef Tensor input,
     @Const @ByRef Tensor other,
@@ -67616,7 +67678,7 @@ The list of (type, depth) pairs controls the type of specializations and the num
 
 /** Computes a tensor {@code x} such that {@code matmul(input, x) = other}.
  * 
- *  See https://pytorch.org/docs/master/linalg.html#torch.linalg.solve */
+ *  See https://pytorch.org/docs/main/linalg.html#torch.linalg.solve */
 @Namespace("torch::linalg") public static native @ByVal Tensor solve(@Const @ByRef Tensor input, @Const @ByRef Tensor other, @Cast("bool") boolean left);
 
 
@@ -67632,7 +67694,7 @@ The list of (type, depth) pairs controls the type of specializations and the num
  *  the diagonal
  * 
  *  See
- *  https://pytorch.org/docs/master/linalg.html#torch.linalg.solve_triangular */
+ *  https://pytorch.org/docs/main/linalg.html#torch.linalg.solve_triangular */
 @Namespace("torch::linalg") public static native @ByVal Tensor solve_triangular(
     @Const @ByRef Tensor input,
     @Const @ByRef Tensor other,
@@ -67652,7 +67714,7 @@ The list of (type, depth) pairs controls the type of specializations and the num
 
 /** Computes the singular values and singular vectors
  * 
- *  See https://pytorch.org/docs/master/linalg.html#torch.linalg.svd */
+ *  See https://pytorch.org/docs/main/linalg.html#torch.linalg.svd */
 @Namespace("torch::linalg") public static native @ByVal T_TensorTensorTensor_T svd(
     @Const @ByRef Tensor input,
     @Cast("bool") boolean full_matrices,
@@ -67670,7 +67732,7 @@ The list of (type, depth) pairs controls the type of specializations and the num
 
 /** Computes the singular values
  * 
- *  See https://pytorch.org/docs/master/linalg.html#torch.linalg.svdvals */
+ *  See https://pytorch.org/docs/main/linalg.html#torch.linalg.svdvals */
 @Namespace("torch::linalg") public static native @ByVal Tensor svdvals(
     @Const @ByRef Tensor input,
     @ByVal StringViewOptional driver);
@@ -67685,7 +67747,7 @@ The list of (type, depth) pairs controls the type of specializations and the num
 
 /** Computes the inverse of a tensor
  * 
- *  See https://pytorch.org/docs/master/linalg.html#torch.linalg.tensorinv
+ *  See https://pytorch.org/docs/main/linalg.html#torch.linalg.tensorinv
  * 
  *  Example:
  *  
{@code
@@ -67702,7 +67764,7 @@ The list of (type, depth) pairs controls the type of specializations and the num
 
 /** Computes a tensor {@code x} such that {@code tensordot(input, x, dims=x.dim()) = other}.
  * 
- *  See https://pytorch.org/docs/master/linalg.html#torch.linalg.tensorsolve
+ *  See https://pytorch.org/docs/main/linalg.html#torch.linalg.tensorsolve
  * 
  *  Example:
  *  
{@code
@@ -67735,7 +67797,7 @@ The list of (type, depth) pairs controls the type of specializations and the num
 /** Computes a tensor {@code inverse_input} such that {@code dot(input, inverse_input) =
  *  eye(input.size(0))}.
  * 
- *  See https://pytorch.org/docs/master/linalg.html#torch.linalg.inv */
+ *  See https://pytorch.org/docs/main/linalg.html#torch.linalg.inv */
 @Namespace("torch::linalg") public static native @ByVal Tensor inv(@Const @ByRef Tensor input);
 
 @Namespace("torch::linalg") public static native @ByRef Tensor inv_out(@ByRef Tensor result, @Const @ByRef Tensor input);
@@ -67794,7 +67856,7 @@ The list of (type, depth) pairs controls the type of specializations and the num
 /** Nested tensor
  * 
  *  See
- *  https://pytorch.org/docs/master/nested.html#torch.nested.nested_tensor
+ *  https://pytorch.org/docs/main/nested.html#torch.nested.nested_tensor
  * 
  *  
{@code */
 // implemented on python object to allow torch.nested.nested_tensor to be
@@ -67826,7 +67888,7 @@ The list of (type, depth) pairs controls the type of specializations and the num
 /** As Nested Tensor
  * 
  *  See
- *  https://pytorch.org/docs/master/nested.html#torch.nested.as_nested_tensor
+ *  https://pytorch.org/docs/main/nested.html#torch.nested.as_nested_tensor
  * 
  *  
{@code */
 
@@ -67834,21 +67896,21 @@ The list of (type, depth) pairs controls the type of specializations and the num
 ///
 @Namespace("torch::nested") public static native @ByVal Tensor as_nested_tensor(
     @ByVal TensorArrayRef list,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") ScalarTypeOptional dtype,
+    @ByVal(nullValue = "std::optional(c10::nullopt)") DeviceOptional device);
 @Namespace("torch::nested") public static native @ByVal Tensor as_nested_tensor(
     @ByVal TensorArrayRef list);
 @Namespace("torch::nested") public static native @ByVal Tensor as_nested_tensor(
     @ByVal TensorVector list,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") ScalarTypeOptional dtype,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") ScalarTypeOptional dtype,
+    @ByVal(nullValue = "std::optional(c10::nullopt)") DeviceOptional device);
 @Namespace("torch::nested") public static native @ByVal Tensor as_nested_tensor(
     @ByVal TensorVector list);
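
Of the overloads above, the TensorVector one is the simplest to reach from Java; a minimal sketch, assuming the generated std::vector<Tensor> helper keeps its usual varargs constructor:

    import org.bytedeco.pytorch.Tensor;
    import org.bytedeco.pytorch.TensorVector;
    import static org.bytedeco.pytorch.global.torch.as_nested_tensor;

    // Builds a nested tensor from two tensors that may differ in shape.
    static Tensor nestedFrom(Tensor a, Tensor b) {
        return as_nested_tensor(new TensorVector(a, b));   // varargs constructor assumed
    }
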
 
 /** Nested to padded tensor
  * 
  *  See
- *  https://pytorch.org/docs/master/nested.html#torch.nested.to_padded_tensor
+ *  https://pytorch.org/docs/main/nested.html#torch.nested.to_padded_tensor
  * 
  *  
{@code */
 @Namespace("torch::nested") public static native @ByVal Tensor to_padded_tensor(
@@ -68107,7 +68169,6 @@ The list of (type, depth) pairs controls the type of specializations and the num
 // #define AT_BUILD_WITH_LAPACK() 1
 public static final int AT_PARALLEL_OPENMP = 1;
 public static final int AT_PARALLEL_NATIVE = 0;
-public static final int AT_PARALLEL_NATIVE_TBB = 0;
 // #define AT_BLAS_F2C() 0
 // #define AT_BLAS_USE_CBLAS_DOT() 1
 
@@ -68241,8 +68302,6 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #include  // IWYU pragma: keep
 // #elif AT_PARALLEL_NATIVE
 // #include  // IWYU pragma: keep
-// #elif AT_PARALLEL_NATIVE_TBB
-// #include  // IWYU pragma: keep
 // #endif
 
 // #include  // IWYU pragma: keep
@@ -68265,7 +68324,8 @@ scalar_t sf(scalar_t x, scalar_t y)
   XPU(1), // XPU kernels, runtime
   CUDA(2), // CUDA kernels, runtime
   MTIA(3), // MTIA kernels, runtime
-  NUM_KINETO_ACTIVITIES(4);// must be the last one
+  PrivateUse1(4), // PrivateUse1 kernels, runtime
+  NUM_KINETO_ACTIVITIES(5);// must be the last one
 
     public final int value;
     private ActivityType(int v) { this.value = v; }
@@ -68280,11 +68340,12 @@ scalar_t sf(scalar_t x, scalar_t y)
   CUDA(2), // CPU + CUDA events
   NVTX(3), // only emit NVTX markers
   ITT(4), // only emit ITT markers
-  KINETO(5), // use libkineto
-  KINETO_GPU_FALLBACK(6), // use CUDA events when CUPTI is not available
-  KINETO_PRIVATEUSE1_FALLBACK(7), // use PrivateUse1 events
-  KINETO_ONDEMAND(8), // run the profiler in on-demand mode
-  NUM_PROFILER_STATES(9);// must be the last one
+  PRIVATEUSE1(5), // only emit PRIVATEUSE1 markers
+  KINETO(6), // use libkineto
+  KINETO_GPU_FALLBACK(7), // use CUDA events when CUPTI is not available
+  KINETO_PRIVATEUSE1_FALLBACK(8), // use PrivateUse1 events
+  KINETO_ONDEMAND(9), // run the profiler in on-demand mode
+  NUM_PROFILER_STATES(10);// must be the last one
 
     public final int value;
     private ProfilerState(int v) { this.value = v; }
@@ -68298,7 +68359,8 @@ scalar_t sf(scalar_t x, scalar_t y)
   LEGACY(1),
   KINETO(2),
   NVTX(3),
-  ITT(4);
+  ITT(4),
+  PRIVATEUSE1(5);
 
     public final int value;
     private ActiveProfilerType(int v) { this.value = v; }
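
Because the PrivateUse1/PRIVATEUSE1 entries are inserted in the middle, the ordinals of every constant after them shift between 2.3.0 and 2.4.0 (KINETO, for example, moves from 5 to 6 in ProfilerState). Code that stored or compared the raw integers should resolve them through the enum constants instead; a small sketch, assuming these enums are still generated as nested types of org.bytedeco.pytorch.global.torch:

    import static org.bytedeco.pytorch.global.torch.ProfilerState;

    // Always read the integer the native side expects from the enum, not from a literal.
    static int kinetoStateValue() {
        return ProfilerState.KINETO.value;   // 6 in 2.4.0, was 5 in 2.3.0
    }
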
@@ -68335,9 +68397,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 
 // There are some components which use these symbols. Until we migrate them
 // we have to mirror them in the old autograd namespace.
- // namespace profiler
- // namespace autograd
- // namespace torch
+ // namespace torch::autograd::profiler
 
 
 // Parsed from torch/csrc/profiler/events.h
@@ -68353,8 +68413,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 
 /* Standard list of performance events independent of hardware or backend */
 @Namespace("torch::profiler") @MemberGetter public static native @Const @ByRef PointerPointer ProfilerPerfEvents();
- // namespace profiler
- // namespace torch
+ // namespace torch::profiler
 
 
 // Parsed from torch/csrc/profiler/stubs/base.h
@@ -68408,7 +68467,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 //       if (torch::profiler::impl::softAssertRaises()) {
 //         TORCH_INTERNAL_ASSERT(cond, __VA_ARGS__);
 //       } else {
-//         TORCH_WARN(__VA_ARGS__);
+//         TORCH_WARN_ONCE(__VA_ARGS__);
 //       }
 //       return false;
 //     }
@@ -68472,6 +68531,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 @Namespace("torch::profiler::impl") public static native @ByVal StringVector inputTypes(@Const @ByRef RecordFunction fn);
 
 @Namespace("torch::profiler::impl") public static native @ByVal StringIValueMap saveExtraArgs(@Const @ByRef RecordFunction fn);
+@Namespace("torch::profiler::impl") public static native @ByVal ExtraFilesMap saveNcclMeta(@Const @ByRef RecordFunction fn, @Cast("bool") boolean truncate/*=true*/);
 @Namespace("torch::profiler::impl") public static native @ByVal ExtraFilesMap saveNcclMeta(@Const @ByRef RecordFunction fn);
 
 @Namespace("torch::profiler::impl") public static native @Cast("uint64_t") long computeFlops(
@@ -68483,9 +68543,10 @@ scalar_t sf(scalar_t x, scalar_t y)
 
 
 
- // namespace impl
- // namespace profiler
- // namespace torch
+// #ifdef USE_DISTRIBUTED
+// #endif // USE_DISTRIBUTED
+
+ // namespace torch::profiler::impl
 
 
 // Parsed from torch/csrc/autograd/profiler_kineto.h
@@ -69157,7 +69218,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.batch_norm
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.batch_norm
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::BatchNormFuncOptions}
@@ -69376,7 +69437,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.conv1d
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.conv1d
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::Conv1dFuncOptions} class
@@ -69399,7 +69460,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.conv2d
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.conv2d
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::Conv2dFuncOptions} class
@@ -69422,7 +69483,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.conv3d
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.conv3d
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::Conv3dFuncOptions} class
@@ -69447,7 +69508,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.conv_transpose1d
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.conv_transpose1d
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for
@@ -69471,7 +69532,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.conv_transpose2d
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.conv_transpose2d
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for
@@ -69495,7 +69556,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.conv_transpose3d
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.conv_transpose3d
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for
@@ -69568,7 +69629,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.cosine_similarity
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.cosine_similarity
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for
@@ -69595,7 +69656,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.pairwise_distance
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.pairwise_distance
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for
@@ -69714,7 +69775,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.dropout
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.dropout
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::DropoutFuncOptions} class
@@ -69737,7 +69798,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.dropout2d
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.dropout2d
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::Dropout2dFuncOptions}
@@ -69763,7 +69824,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.dropout3d
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.dropout3d
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::Dropout3dFuncOptions}
@@ -69789,7 +69850,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.alpha_dropout
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.alpha_dropout
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::AlphaDropoutFuncOptions}
@@ -69816,7 +69877,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.feature_alpha_dropout
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.feature_alpha_dropout
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for
@@ -69894,7 +69955,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.embedding
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.embedding
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::EmbeddingFuncOptions}
@@ -69918,7 +69979,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.embedding_bag
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.embedding_bag
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::EmbeddingBagFuncOptions}
@@ -69994,7 +70055,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.fold
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.fold
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::FoldFuncOptions} class to
@@ -70016,7 +70077,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.unfold
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.unfold
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::UnfoldFuncOptions} class
@@ -70093,7 +70154,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.instance_norm
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.instance_norm
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::InstanceNormFuncOptions}
@@ -70439,7 +70500,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.elu
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.elu
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::ELUFuncOptions} class to
@@ -70461,7 +70522,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.selu
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.selu
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::SELUFuncOptions} class to
@@ -70483,7 +70544,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.hardshrink
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.hardshrink
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::HardshrinkFuncOptions}
@@ -70507,7 +70568,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.hardtanh
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.hardtanh
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::HardtanhFuncOptions} class
@@ -70530,7 +70591,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.leaky_relu
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.leaky_relu
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::LeakyReLUFuncOptions}
@@ -70559,7 +70620,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.gumbel_softmax
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.gumbel_softmax
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::GumbelSoftmaxFuncOptions}
@@ -70585,7 +70646,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.softmax
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.softmax
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::SoftmaxFuncOptions} class
@@ -70607,7 +70668,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.softmin
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.softmin
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::SoftminFuncOptions} class
@@ -70629,7 +70690,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.log_softmax
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.log_softmax
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::LogSoftmaxFuncOptions}
@@ -70653,7 +70714,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.glu
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.glu
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::GLUFuncOptions} class to
@@ -70685,7 +70746,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.relu
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.relu
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::ReLUFuncOptions} class to
@@ -70707,7 +70768,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.relu6
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.relu6
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::ReLU6FuncOptions} class to
@@ -70729,7 +70790,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.rrelu
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.rrelu
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::RReLUFuncOptions} class to
@@ -70751,7 +70812,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.celu
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.celu
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::CELUFuncOptions} class to
@@ -70773,7 +70834,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.softplus
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.softplus
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::SoftplusFuncOptions} class
@@ -70797,7 +70858,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.softshrink
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.softshrink
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::SoftshrinkFuncOptions}
@@ -70829,7 +70890,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.threshold
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.threshold
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::ThresholdFuncOptions}
@@ -71179,7 +71240,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.l1_loss
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.l1_loss
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::L1LossFuncOptions} class
@@ -71204,7 +71265,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.kl_div
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.kl_div
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::KLDivFuncOptions} class to
@@ -71230,7 +71291,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.mse_loss
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.mse_loss
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::MSELossFuncOptions} class
@@ -71255,7 +71316,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.binary_cross_entropy
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.binary_cross_entropy
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for
@@ -71282,7 +71343,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.hinge_embedding_loss
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.hinge_embedding_loss
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for
@@ -71309,7 +71370,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.multi_margin_loss
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.multi_margin_loss
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for
@@ -71336,7 +71397,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.cosine_embedding_loss
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.cosine_embedding_loss
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for
@@ -71372,7 +71433,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.smooth_l1_loss
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.smooth_l1_loss
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::SmoothL1LossFuncOptions}
@@ -71391,7 +71452,7 @@ scalar_t sf(scalar_t x, scalar_t y)
     @Cast("const torch::nn::functional::SmoothL1LossFuncOptions*") @ByRef(nullValue = "torch::nn::functional::SmoothL1LossFuncOptions{}") SmoothL1LossOptions options);
 
 /** See
- *  https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.smooth_l1_loss
+ *  https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.smooth_l1_loss
  *  about the exact behavior of this functional.
  * 
  *  Example:
@@ -71414,7 +71475,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.huber_loss
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.huber_loss
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::HuberLossFuncOptions}
@@ -71440,7 +71501,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.multilabel_margin_loss
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.multilabel_margin_loss
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for
@@ -71467,7 +71528,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.soft_margin_loss
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.soft_margin_loss
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::SoftMarginLossFuncOptions}
@@ -71493,7 +71554,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.multilabel_soft_margin_loss
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.multilabel_soft_margin_loss
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for
@@ -71523,7 +71584,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.triplet_margin_loss
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.triplet_margin_loss
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for
@@ -71551,7 +71612,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.triplet_margin_with_distance_loss
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.triplet_margin_with_distance_loss
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for
@@ -71583,7 +71644,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.ctc_loss
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.ctc_loss
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::CTCLossFuncOptions} class
@@ -71611,7 +71672,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.poisson_nll_loss
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.poisson_nll_loss
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::PoissonNLLLossFuncOptions}
@@ -71640,7 +71701,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.margin_ranking_loss
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.margin_ranking_loss
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for
@@ -71668,7 +71729,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.nll_loss
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.nll_loss
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::NLLLossFuncOptions} class
@@ -71694,7 +71755,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.cross_entropy
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.cross_entropy
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::CrossEntropyFuncOptions}
@@ -71723,7 +71784,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.binary_cross_entropy_with_logits
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.binary_cross_entropy_with_logits
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for
@@ -71939,7 +72000,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.pad
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.pad
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::PadFuncOptions} class to
@@ -72469,7 +72530,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.avg_pool1d
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.avg_pool1d
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::AvgPool1dFuncOptions}
@@ -72491,7 +72552,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.avg_pool2d
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.avg_pool2d
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::AvgPool2dFuncOptions}
@@ -72513,7 +72574,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.avg_pool3d
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.avg_pool3d
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::AvgPool3dFuncOptions}
@@ -72537,7 +72598,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.max_pool1d
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.max_pool1d
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::MaxPool1dFuncOptions}
@@ -72576,7 +72637,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.max_pool2d
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.max_pool2d
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::MaxPool2dFuncOptions}
@@ -72615,7 +72676,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.max_pool3d
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.max_pool3d
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::MaxPool3dFuncOptions}
@@ -72656,7 +72717,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.adaptive_max_pool1d
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.adaptive_max_pool1d
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for
@@ -72697,7 +72758,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.adaptive_max_pool2d
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.adaptive_max_pool2d
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for
@@ -72738,7 +72799,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.adaptive_max_pool3d
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.adaptive_max_pool3d
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for
@@ -72763,7 +72824,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.adaptive_avg_pool1d
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.adaptive_avg_pool1d
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for
@@ -72786,7 +72847,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.adaptive_avg_pool2d
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.adaptive_avg_pool2d
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for
@@ -72809,7 +72870,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.adaptive_avg_pool3d
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.adaptive_avg_pool3d
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for
@@ -72847,7 +72908,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.max_unpool1d
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.max_unpool1d
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::MaxUnpool1dFuncOptions}
@@ -72871,7 +72932,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.max_unpool2d
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.max_unpool2d
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::MaxUnpool2dFuncOptions}
@@ -72895,7 +72956,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.max_unpool3d
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.max_unpool3d
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::MaxUnpool3dFuncOptions}
@@ -72998,7 +73059,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.lp_pool1d
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.lp_pool1d
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::LPPool1dFuncOptions} class
@@ -73020,7 +73081,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.lp_pool2d
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.lp_pool2d
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::LPPool2dFuncOptions} class
@@ -73042,7 +73103,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.lp_pool3d
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.lp_pool3d
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::LPPool3dFuncOptions} class
@@ -73132,7 +73193,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.normalize
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.normalize
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::NormalizeFuncOptions}
@@ -73158,7 +73219,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.layer_norm
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.layer_norm
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::LayerNormFuncOptions}
@@ -73182,7 +73243,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.local_response_norm
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.local_response_norm
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for
@@ -73207,7 +73268,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.group_norm
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.group_norm
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::GroupNormFuncOptions}
@@ -73280,7 +73341,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.pixel_shuffle
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.pixel_shuffle
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::PixelShuffleFuncOptions}
@@ -73344,13 +73405,13 @@ scalar_t sf(scalar_t x, scalar_t y)
 ///
 @Namespace("torch::nn::functional") public static native @ByVal @Cast("std::vector*") LongVector _interp_output_size(
     @Cast("int64_t") long dim,
-    @ByVal @Cast("std::tuple >,c10::optional >,c10::optional >*") Pointer closed_over_args);
+    @ByVal @Cast("std::tuple >,std::optional >,std::optional >*") Pointer closed_over_args);
 
 // #ifndef DOXYGEN_SHOULD_SKIP_THIS
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.interpolate
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.interpolate
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::InterpolateFuncOptions}
@@ -73421,7 +73482,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif /* DOXYGEN_SHOULD_SKIP_THIS */
 
 /** See
-/** https://pytorch.org/docs/master/nn.functional.html#torch.nn.functional.grid_sample
+/** https://pytorch.org/docs/main/nn.functional.html#torch.nn.functional.grid_sample
 /** about the exact behavior of this functional.
 /**
 /** See the documentation for {@code torch::nn::functional::GridSampleFuncOptions}
@@ -75535,7 +75596,7 @@ scalar_t sf(scalar_t x, scalar_t y)
     @ByVal PackedSequence sequence,
     @Cast("bool") boolean batch_first/*=false*/,
     double padding_value/*=0.0*/,
-    @ByVal(nullValue = "c10::optional(torch::nullopt)") LongOptional total_length);
+    @ByVal(nullValue = "std::optional(torch::nullopt)") LongOptional total_length);
 @Namespace("torch::nn::utils::rnn") public static native @ByVal T_TensorTensor_T pad_packed_sequence(
     @ByVal PackedSequence sequence);
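
The single-argument overload above applies the defaults (batch_first=false, padding_value=0.0, no total_length); a hedged sketch, assuming the tuple wrapper again exposes get0()/get1():

    import org.bytedeco.pytorch.PackedSequence;
    import org.bytedeco.pytorch.Tensor;
    import org.bytedeco.pytorch.T_TensorTensor_T;
    import static org.bytedeco.pytorch.global.torch.pad_packed_sequence;

    // Unpacks a PackedSequence into its padded data tensor and the per-sequence lengths.
    static Tensor[] unpack(PackedSequence packed) {
        T_TensorTensor_T r = pad_packed_sequence(packed);
        return new Tensor[] { r.get0(), r.get1() };
    }
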
 
@@ -76466,6 +76527,27 @@ scalar_t sf(scalar_t x, scalar_t y)
  // namespace torch
 
 
+// Parsed from torch/csrc/api/include/torch/optim/schedulers/reduce_on_plateau_scheduler.h
+
+// #pragma once
+
+// #include 
+// #include 
+
+// #include 
+
+// #include 
+
+// #include 
+
+// #include 
+// Targeting ../ReduceLROnPlateauScheduler.java
+
+
+ // namespace optim
+ // namespace torch
+
+
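
This newly parsed header is what brings the ReduceLROnPlateauScheduler wrapper into the presets. A rough sketch of driving it from Java; the single-argument constructor (C++ defaults: mode=min, factor=0.1, patience=10) and step(float) are assumptions carried over from the C++ API, so check the generated ReduceLROnPlateauScheduler.java for the exact overloads:

    import org.bytedeco.pytorch.Optimizer;
    import org.bytedeco.pytorch.ReduceLROnPlateauScheduler;

    class PlateauSketch {
        // One scheduler per optimizer, constructed with the library defaults.
        static ReduceLROnPlateauScheduler forOptimizer(Optimizer optimizer) {
            return new ReduceLROnPlateauScheduler(optimizer);
        }

        // Call once per epoch with the monitored metric, e.g. the validation loss.
        static void afterEpoch(ReduceLROnPlateauScheduler scheduler, float validationLoss) {
            scheduler.step(validationLoss);
        }
    }
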
 // Parsed from torch/csrc/api/include/torch/optim/schedulers/step_lr.h
 
 // #pragma once
@@ -76491,6 +76573,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #include 
 
 // #include 
+// #include 
 // #include 
 
 
@@ -76511,7 +76594,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #include 
 
 /** Computes the natural logarithm of the absolute value of the gamma function
- *  See https://pytorch.org/docs/master/special.html#torch.special.gammaln.
+ *  See https://pytorch.org/docs/main/special.html#torch.special.gammaln.
  * 
  *  Example:
  *  
{@code
@@ -76525,7 +76608,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 @Namespace("torch::special") public static native @ByRef Tensor gammaln_out(@ByRef Tensor result, @Const @ByRef Tensor self);
 
 /** Computes the regularized lower incomplete gamma function
- *  See https://pytorch.org/docs/master/special.html#torch.special.gammainc.
+ *  See https://pytorch.org/docs/main/special.html#torch.special.gammainc.
  * 
  *  Example:
  *  
{@code
@@ -76543,7 +76626,7 @@ scalar_t sf(scalar_t x, scalar_t y)
     @Const @ByRef Tensor other);
 
 /** Computes the regularized upper incomplete gamma function
- *  See https://pytorch.org/docs/master/special.html#torch.special.gammainc.
+ *  See https://pytorch.org/docs/main/special.html#torch.special.gammainc.
  * 
  *  Example:
  *  
{@code
@@ -76561,7 +76644,7 @@ scalar_t sf(scalar_t x, scalar_t y)
     @Const @ByRef Tensor other);
 
 /** Computes the multivariate log-gamma function with dimension {@code p}, elementwise
- *  See https://pytorch.org/docs/master/special.html#torch.special.multigammaln.
+ *  See https://pytorch.org/docs/main/special.html#torch.special.multigammaln.
  * 
  *  Example:
  *  
{@code
@@ -76575,7 +76658,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 @Namespace("torch::special") public static native @ByRef Tensor multigammaln_out(@ByRef Tensor result, @Const @ByRef Tensor self, @Cast("int64_t") long p);
 
 /** Computes the nth derivative of the digamma function on the input.
- *  See https:://pytorch.org/docs/master/special.html#torch.special.polygamma.
+ *  See https://pytorch.org/docs/main/special.html#torch.special.polygamma.
  * 
  *  Example:
  *  
{@code
@@ -76584,7 +76667,7 @@ scalar_t sf(scalar_t x, scalar_t y)
  *  }
 */
 
 /** Computes the logarithmic derivative of the gamma function on input
- *  See https://pytorch.org/docs/master/special.html#torch.special.psi
+ *  See https://pytorch.org/docs/main/special.html#torch.special.psi
  * 
  *  Example:
  *  
{@code
@@ -76598,7 +76681,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 @Namespace("torch::special") public static native @ByRef Tensor psi_out(@ByRef Tensor result, @Const @ByRef Tensor self);
 
 /** Computes the logarithmic derivative of the gamma function on input
- *  See https://pytorch.org/docs/master/special.html#torch.special.digamma
+ *  See https://pytorch.org/docs/main/special.html#torch.special.digamma
  * 
  *  Example:
  *  
{@code
@@ -76607,7 +76690,7 @@ scalar_t sf(scalar_t x, scalar_t y)
  *  }
 */
 
 /** Computes entropy of input, elementwise
- *  See https://pytorch.org/docs/master/special.html#torch.special.entr.
+ *  See https://pytorch.org/docs/main/special.html#torch.special.entr.
  * 
  *  Example:
  *  
{@code
@@ -76621,7 +76704,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 @Namespace("torch::special") public static native @ByRef Tensor entr_out(@ByRef Tensor result, @Const @ByRef Tensor self);
 
 /** Computes the error function
- *  See https://pytorch.org/docs/master/special.html#torch.special.erf.
+ *  See https://pytorch.org/docs/main/special.html#torch.special.erf.
  * 
  *  Example:
  *  
{@code
@@ -76630,7 +76713,7 @@ scalar_t sf(scalar_t x, scalar_t y)
  *  }
 */
 
 /** Computes the complementary error function
- *  See https://pytorch.org/docs/master/special.html#torch.special.erfc.
+ *  See https://pytorch.org/docs/main/special.html#torch.special.erfc.
  * 
  *  Example:
  *  
{@code
@@ -76639,7 +76722,7 @@ scalar_t sf(scalar_t x, scalar_t y)
  *  }
 */
 
 /** Computes the scaled complementary error function
- *  See https://pytorch.org/docs/master/special.html#torch.special.erfcx.
+ *  See https://pytorch.org/docs/main/special.html#torch.special.erfcx.
  * 
  *  Example:
  *  
{@code
@@ -76653,7 +76736,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 @Namespace("torch::special") public static native @ByRef Tensor erfcx_out(@ByRef Tensor result, @Const @ByRef Tensor self);
 
 /** Computes the inverse error function
- *  See https://pytorch.org/docs/master/special.html#torch.special.erfinv.
+ *  See https://pytorch.org/docs/main/special.html#torch.special.erfinv.
  * 
  *  Example:
  *  
{@code
@@ -76663,7 +76746,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 
 /** Computes the log of summed exponentials of each row of input in the given
  *  dimension dim See
- *  https://pytorch.org/docs/master/special.html#torch.special.logsumexp.
+ *  https://pytorch.org/docs/main/special.html#torch.special.logsumexp.
  * 
  *  Example:
  *  
{@code
@@ -76674,7 +76757,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 /** Computes the argument, x, for which the area under the Gaussian probability
  *  density function (integrated from minus infinity to x) is equal to input,
  *  elementwise. See
- *  https://pytorch.org/docs/master/special.html#torch.special.ndtri
+ *  https://pytorch.org/docs/main/special.html#torch.special.ndtri
  * 
  *  Example:
  *  
{@code
@@ -76689,7 +76772,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 
 /** Computes the log of area under the standard Gaussian probability density
  *  function, integrated from minus infinity to :attr:{@code input}, elementwise See
- *  https://pytorch.org/docs/master/special.html#torch.special.log_ndtr
+ *  https://pytorch.org/docs/main/special.html#torch.special.log_ndtr
  * 
  *  Example:
  *  
{@code
@@ -76703,7 +76786,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 @Namespace("torch::special") public static native @ByRef Tensor log_ndtr_out(@ByRef Tensor result, @Const @ByRef Tensor self);
 
 /** Computes the logit of input, elementwise.
- *  See https://pytorch.org/docs/master/special.html#torch.special.logit.
+ *  See https://pytorch.org/docs/main/special.html#torch.special.logit.
  * 
  *  Example:
  *  
{@code
@@ -76713,7 +76796,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 
 /** Computes the expit (also known as the logistic sigmoid function) of input,
  *  elementwise See
- *  https://pytorch.org/docs/master/special.html#torch.special.expit.
+ *  https://pytorch.org/docs/main/special.html#torch.special.expit.
  * 
  *  Example:
  *  
{@code
@@ -76727,7 +76810,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 @Namespace("torch::special") public static native @ByRef Tensor expit_out(@ByRef Tensor result, @Const @ByRef Tensor self);
 
 /** Computes the base two exponential function of :attr:{@code input}, elementwise
- *  See https://pytorch.org/docs/master/special.html#torch.special.exp2.
+ *  See https://pytorch.org/docs/main/special.html#torch.special.exp2.
  * 
  *  Example:
  *  
{@code
@@ -76736,7 +76819,7 @@ scalar_t sf(scalar_t x, scalar_t y)
  *  }
 */
 
 /** Computes the exponential of the elements minus 1, elementwise
- *  See https://pytorch.org/docs/master/special.html#torch.special.expm1.
+ *  See https://pytorch.org/docs/main/special.html#torch.special.expm1.
  * 
  *  Example:
  *  
{@code
@@ -76745,7 +76828,7 @@ scalar_t sf(scalar_t x, scalar_t y)
  *  }
 */
 
 /** Computes x * log(y) for inputs, elementwise
- *  See https://pytorch.org/docs/master/special.html#torch.special.xlogy.
+ *  See https://pytorch.org/docs/main/special.html#torch.special.xlogy.
  * 
  *  Example:
  *  
{@code
@@ -76755,7 +76838,7 @@ scalar_t sf(scalar_t x, scalar_t y)
  *  }
 */
 
 /** Computes x * log1p(y) for inputs, elementwise
- *  See https://pytorch.org/docs/master/special.html#torch.special.xlog1py.
+ *  See https://pytorch.org/docs/main/special.html#torch.special.xlog1py.
  * 
  *  Example:
  *  
{@code
@@ -76787,7 +76870,7 @@ scalar_t sf(scalar_t x, scalar_t y)
     @Const @ByRef Scalar other);
 
 /** Computes Hurwitz Zeta function for inputs, elementwise
- *  See https://pytorch.org/docs/master/special.html#torch.special.zeta.
+ *  See https://pytorch.org/docs/main/special.html#torch.special.zeta.
  * 
  *  Example:
  *  
{@code
@@ -76820,7 +76903,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 
 /** Computes the zeroth order modified Bessel function of the first kind of
  *  input, elementwise See
- *  https://pytorch.org/docs/master/special.html#torch.special.i0
+ *  https://pytorch.org/docs/main/special.html#torch.special.i0
  * 
  *  Example:
  *  
{@code
@@ -76830,7 +76913,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 
 /** Computes the area under the standard Gaussian probability density function,
  *  integrated from minus infinity to :attr:{@code input}, elementwise
- *  See https://pytorch.org/docs/master/special.html#torch.special.ndtr
+ *  See https://pytorch.org/docs/main/special.html#torch.special.ndtr
  * 
  *  Example:
  *  
{@code
@@ -76845,7 +76928,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 
 /** Computes the exponentially scaled zeroth order modified Bessel function of
  *  the first kind See
- *  https://pytorch.org/docs/master/special.html#torch.special.i0e.
+ *  https://pytorch.org/docs/main/special.html#torch.special.i0e.
  * 
  *  Example:
  *  
{@code
@@ -76859,7 +76942,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 @Namespace("torch::special") public static native @ByRef Tensor i0e_out(@ByRef Tensor result, @Const @ByRef Tensor self);
 
 /** Computes the first order modified Bessel function of the first kind
- *  See https://pytorch.org/docs/master/special.html#torch.special.i1.
+ *  See https://pytorch.org/docs/main/special.html#torch.special.i1.
  * 
  *  Example:
  *  
{@code
@@ -76874,7 +76957,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 
 /** Computes the exponentially scaled first order modified Bessel function of
  *  the first kind See
- *  https://pytorch.org/docs/master/special.html#torch.special.i1e.
+ *  https://pytorch.org/docs/main/special.html#torch.special.i1e.
  * 
  *  Example:
  *  
{@code
@@ -76888,7 +76971,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 @Namespace("torch::special") public static native @ByRef Tensor i1e_out(@ByRef Tensor result, @Const @ByRef Tensor self);
 
 /** Computes the sinc of input, elementwise
- *  See https://pytorch.org/docs/master/special.html#torch.special.sinc.
+ *  See https://pytorch.org/docs/main/special.html#torch.special.sinc.
  * 
  *  Example:
  *  
{@code
@@ -76897,7 +76980,7 @@ scalar_t sf(scalar_t x, scalar_t y)
  *  }
 */
 
 /** Rounds the elements of the input
- *  See https://pytorch.org/docs/master/special.html#torch.special.round.
+ *  See https://pytorch.org/docs/main/special.html#torch.special.round.
  * 
  *  Example:
  *  
{@code
@@ -76906,7 +76989,7 @@ scalar_t sf(scalar_t x, scalar_t y)
  *  }
 */
 
 /** Computes log(1 + x) of the input, elementwise
- *  See https://pytorch.org/docs/master/special.html#torch.special.log1p.
+ *  See https://pytorch.org/docs/main/special.html#torch.special.log1p.
  * 
  *  Example:
  *  
{@code
@@ -76916,7 +76999,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 
 
 /** Computes log followed by softmax(x) of the input
- *  See https://pytorch.org/docs/master/special.html#torch.special.log_softmax.
+ *  See https://pytorch.org/docs/main/special.html#torch.special.log_softmax.
  * 
  *  Example:
  *  
{@code
@@ -76925,7 +77008,7 @@ scalar_t sf(scalar_t x, scalar_t y)
  *  }
 */
 
 /** Computes softmax of the input along a given dimension
- *  See https://pytorch.org/docs/master/special.html#torch.special.softmax.
+ *  See https://pytorch.org/docs/main/special.html#torch.special.softmax.
  * 
  *  Example:
  *  
{@code
@@ -76935,7 +77018,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 
 /** Airy function Ai.
  * 
- *  See https://pytorch.org/docs/master/special.html#torch.special.airy_ai.
+ *  See https://pytorch.org/docs/main/special.html#torch.special.airy_ai.
  * 
  *  Example:
  * 
@@ -76955,7 +77038,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 
 /** Bessel function of the first kind of order 0.
  * 
- *  See https://pytorch.org/docs/master/special.html#torch.special.bessel_j0.
+ *  See https://pytorch.org/docs/main/special.html#torch.special.bessel_j0.
  * 
  *  Example:
  * 
@@ -76975,7 +77058,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 
 /** Bessel function of the first kind of order 1.
  * 
- *  See https://pytorch.org/docs/master/special.html#torch.special.bessel_j1.
+ *  See https://pytorch.org/docs/main/special.html#torch.special.bessel_j1.
  * 
  *  Example:
  * 
@@ -76995,7 +77078,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 
 /** Bessel function of the second kind of order 0.
  * 
- *  See https://pytorch.org/docs/master/special.html#torch.special.bessel_y0.
+ *  See https://pytorch.org/docs/main/special.html#torch.special.bessel_y0.
  * 
  *  Example:
  * 
@@ -77015,7 +77098,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 
 /** Bessel function of the second kind of order 1.
  * 
- *  See https://pytorch.org/docs/master/special.html#torch.special.bessel_y1.
+ *  See https://pytorch.org/docs/main/special.html#torch.special.bessel_y1.
  * 
  *  Example:
  * 
@@ -77036,7 +77119,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 /** Chebyshev polynomial of the first kind.
  * 
  *  See
- *  https://pytorch.org/docs/master/special.html#torch.special.chebyshev_polynomial_t.
+ *  https://pytorch.org/docs/main/special.html#torch.special.chebyshev_polynomial_t.
  * 
  *  Example:
  * 
@@ -77075,7 +77158,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 /** Chebyshev polynomial of the second kind.
  * 
  *  See
- *  https://pytorch.org/docs/master/special.html#torch.special.chebyshev_polynomial_u.
+ *  https://pytorch.org/docs/main/special.html#torch.special.chebyshev_polynomial_u.
  * 
  *  Example:
  * 
@@ -77114,7 +77197,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 /** Chebyshev polynomial of the third kind.
  * 
  *  See
- *  https://pytorch.org/docs/master/special.html#torch.special.chebyshev_polynomial_v.
+ *  https://pytorch.org/docs/main/special.html#torch.special.chebyshev_polynomial_v.
  * 
  *  Example:
  * 
@@ -77153,7 +77236,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 /** Chebyshev polynomial of the fourth kind.
  * 
  *  See
- *  https://pytorch.org/docs/master/special.html#torch.special.chebyshev_polynomial_w.
+ *  https://pytorch.org/docs/main/special.html#torch.special.chebyshev_polynomial_w.
  * 
  *  Example:
  * 
@@ -77192,7 +77275,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 /** Physicist’s Hermite polynomial.
  * 
  *  See
- *  https://pytorch.org/docs/master/special.html#torch.special.hermite_polynomial_h.
+ *  https://pytorch.org/docs/main/special.html#torch.special.hermite_polynomial_h.
  * 
  *  Example:
  * 
@@ -77231,7 +77314,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 /** Probabilist’s Hermite polynomial.
  * 
  *  See
- *  https://pytorch.org/docs/master/special.html#torch.special.hermite_polynomial_he.
+ *  https://pytorch.org/docs/main/special.html#torch.special.hermite_polynomial_he.
  * 
  *  Example:
  * 
@@ -77270,7 +77353,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 /** Laguerre polynomial.
  * 
  *  See
- *  https://pytorch.org/docs/master/special.html#torch.special.laguerre_polynomial_l.
+ *  https://pytorch.org/docs/main/special.html#torch.special.laguerre_polynomial_l.
  * 
  *  Example:
  * 
@@ -77309,7 +77392,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 /** Legendre polynomial.
  * 
  *  See
- *  https://pytorch.org/docs/master/special.html#torch.special.legendre_polynomial_p.
+ *  https://pytorch.org/docs/main/special.html#torch.special.legendre_polynomial_p.
  * 
  *  Example:
  * 
@@ -77348,7 +77431,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 /** Modified Bessel function of the first kind of order 0.
  * 
  *  See
- *  https://pytorch.org/docs/master/special.html#torch.special.modified_bessel_i0.
+ *  https://pytorch.org/docs/main/special.html#torch.special.modified_bessel_i0.
  * 
  *  Example:
  * 
@@ -77369,7 +77452,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 /** Modified Bessel function of the first kind of order 1.
  * 
  *  See
- *  https://pytorch.org/docs/master/special.html#torch.special.modified_bessel_i1.
+ *  https://pytorch.org/docs/main/special.html#torch.special.modified_bessel_i1.
  * 
  *  Example:
  * 
@@ -77390,7 +77473,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 /** Modified Bessel function of the second kind of order 0.
  * 
  *  See
- *  https://pytorch.org/docs/master/special.html#torch.special.modified_bessel_k0.
+ *  https://pytorch.org/docs/main/special.html#torch.special.modified_bessel_k0.
  * 
  *  Example:
  * 
@@ -77411,7 +77494,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 /** Modified Bessel function of the second kind of order 1.
  * 
  *  See
- *  https://pytorch.org/docs/master/special.html#torch.special.modified_bessel_k1.
+ *  https://pytorch.org/docs/main/special.html#torch.special.modified_bessel_k1.
  * 
  *  Example:
  * 
@@ -77432,7 +77515,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 /** Scaled modified Bessel function of the second kind of order 0.
  * 
  *  See
- *  https://pytorch.org/docs/master/special.html#torch.special.scaled_modified_bessel_k0.
+ *  https://pytorch.org/docs/main/special.html#torch.special.scaled_modified_bessel_k0.
  * 
  *  Example:
  * 
@@ -77453,7 +77536,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 /** Scaled modified Bessel function of the second kind of order 1.
  * 
  *  See
- *  https://pytorch.org/docs/master/special.html#torch.special.scaled_modified_bessel_k1.
+ *  https://pytorch.org/docs/main/special.html#torch.special.scaled_modified_bessel_k1.
  * 
  *  Example:
  * 
@@ -77474,7 +77557,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 /** Shifted Chebyshev polynomial of the first kind.
  * 
  *  See
- *  https://pytorch.org/docs/master/special.html#torch.special.shifted_chebyshev_polynomial_t.
+ *  https://pytorch.org/docs/main/special.html#torch.special.shifted_chebyshev_polynomial_t.
  * 
  *  Example:
  * 
@@ -77513,7 +77596,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 /** Shifted Chebyshev polynomial of the second kind.
  * 
  *  See
- *  https://pytorch.org/docs/master/special.html#torch.special.shifted_chebyshev_polynomial_u.
+ *  https://pytorch.org/docs/main/special.html#torch.special.shifted_chebyshev_polynomial_u.
  * 
  *  Example:
  * 
@@ -77552,7 +77635,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 /** Shifted Chebyshev polynomial of the third kind.
  * 
  *  See
- *  https://pytorch.org/docs/master/special.html#torch.special.shifted_chebyshev_polynomial_v.
+ *  https://pytorch.org/docs/main/special.html#torch.special.shifted_chebyshev_polynomial_v.
  * 
  *  Example:
  * 
@@ -77591,7 +77674,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 /** Shifted Chebyshev polynomial of the fourth kind.
  * 
  *  See
- *  https://pytorch.org/docs/master/special.html#torch.special.shifted_chebyshev_polynomial_w.
+ *  https://pytorch.org/docs/main/special.html#torch.special.shifted_chebyshev_polynomial_w.
  * 
  *  Example:
  * 
@@ -77630,7 +77713,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 /** Spherical Bessel function of the first kind of order 0.
  * 
  *  See
- *  https://pytorch.org/docs/master/special.html#torch.special.spherical_bessel_j0.
+ *  https://pytorch.org/docs/main/special.html#torch.special.spherical_bessel_j0.
  * 
  *  Example:
  * 
@@ -77654,14 +77737,14 @@ scalar_t sf(scalar_t x, scalar_t y)
 public static final int TORCH_VERSION_MAJOR = 2;
 
 /** Indicates the minor version of LibTorch. */
-public static final int TORCH_VERSION_MINOR = 3;
+public static final int TORCH_VERSION_MINOR = 4;
 
 /** Indicates the patch version of LibTorch. */
 public static final int TORCH_VERSION_PATCH = 0;
 
 /** Indicates the version of LibTorch. */
 public static final String TORCH_VERSION = 
-  "2.3.0";
+  "2.4.0";
 
 
 // Parsed from torch/csrc/api/include/torch/xpu.h
@@ -78036,9 +78119,6 @@ scalar_t sf(scalar_t x, scalar_t y)
 // Targeting ../Call.java
 
 
-// Targeting ../ErrorReport.java
-
-
 
  // namespace jit
  // namespace torch
@@ -78086,7 +78166,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 
 @Namespace("torch::jit") public static native @Cast("std::ostream*") @ByRef @Name("operator <<") Pointer shiftLeft(@Cast("std::ostream*") @ByRef Pointer out, @ByVal pretty_tree t_);
 
-@Namespace("torch::jit") public static native @Cast("std::ostream*") @ByRef @Name("operator <<") Pointer shiftLeft(@Cast("std::ostream*") @ByRef Pointer out, @Const @ByRef TreeRef t);
+@Namespace("torch::jit") public static native @Cast("std::ostream*") @ByRef @Name("operator <<") Pointer shiftLeft(@Cast("std::ostream*") @ByRef Pointer out, @IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree t);
 
  // namespace jit
  // namespace torch
@@ -78468,100 +78548,100 @@ scalar_t sf(scalar_t x, scalar_t y)
  // namespace caffe2
 
 @Namespace("torch::jit") public static native @ByVal JitModule import_ir_module(
-    @SharedPtr CompilationUnit cu,
+    @SharedPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit cu,
     @StdString BytePointer filename,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device,
+    @ByVal(nullValue = "std::optional(c10::nullopt)") DeviceOptional device,
     @Cast("bool") boolean load_debug_files/*=true*/);
 @Namespace("torch::jit") public static native @ByVal JitModule import_ir_module(
-    @SharedPtr CompilationUnit cu,
+    @SharedPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit cu,
     @StdString BytePointer filename);
 @Namespace("torch::jit") public static native @ByVal JitModule import_ir_module(
-    @SharedPtr CompilationUnit cu,
+    @SharedPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit cu,
     @StdString String filename,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device,
+    @ByVal(nullValue = "std::optional(c10::nullopt)") DeviceOptional device,
     @Cast("bool") boolean load_debug_files/*=true*/);
 @Namespace("torch::jit") public static native @ByVal JitModule import_ir_module(
-    @SharedPtr CompilationUnit cu,
+    @SharedPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit cu,
     @StdString String filename);
 
 @Namespace("torch::jit") public static native @ByVal JitModule import_ir_module(
-    @SharedPtr CompilationUnit cu,
+    @SharedPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit cu,
     @Cast("std::istream*") @ByRef Pointer in,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device,
+    @ByVal(nullValue = "std::optional(c10::nullopt)") DeviceOptional device,
     @Cast("bool") boolean load_debug_files/*=true*/);
 @Namespace("torch::jit") public static native @ByVal JitModule import_ir_module(
-    @SharedPtr CompilationUnit cu,
+    @SharedPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit cu,
     @Cast("std::istream*") @ByRef Pointer in);
 
 @Namespace("torch::jit") public static native @ByVal JitModule import_ir_module(
-    @SharedPtr CompilationUnit cu,
+    @SharedPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit cu,
     @UniquePtr ReadAdapterInterface rai,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device,
+    @ByVal(nullValue = "std::optional(c10::nullopt)") DeviceOptional device,
     @Cast("bool") boolean load_debug_files/*=true*/);
 @Namespace("torch::jit") public static native @ByVal JitModule import_ir_module(
-    @SharedPtr CompilationUnit cu,
+    @SharedPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit cu,
     @UniquePtr ReadAdapterInterface rai);
 
 @Namespace("torch::jit") public static native @ByVal JitModule import_ir_module(
-    @SharedPtr CompilationUnit cu,
+    @SharedPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit cu,
     @StdString BytePointer filename,
     @ByVal DeviceOptional device,
     @ByRef ExtraFilesMap extra_files,
     @Cast("bool") boolean load_debug_files/*=true*/,
     @Cast("bool") boolean restore_shapes/*=false*/);
 @Namespace("torch::jit") public static native @ByVal JitModule import_ir_module(
-    @SharedPtr CompilationUnit cu,
+    @SharedPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit cu,
     @StdString BytePointer filename,
     @ByVal DeviceOptional device,
     @ByRef ExtraFilesMap extra_files);
 @Namespace("torch::jit") public static native @ByVal JitModule import_ir_module(
-    @SharedPtr CompilationUnit cu,
+    @SharedPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit cu,
     @StdString String filename,
     @ByVal DeviceOptional device,
     @ByRef ExtraFilesMap extra_files,
     @Cast("bool") boolean load_debug_files/*=true*/,
     @Cast("bool") boolean restore_shapes/*=false*/);
 @Namespace("torch::jit") public static native @ByVal JitModule import_ir_module(
-    @SharedPtr CompilationUnit cu,
+    @SharedPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit cu,
     @StdString String filename,
     @ByVal DeviceOptional device,
     @ByRef ExtraFilesMap extra_files);
 
 // For reading unified serialization format from torch.Package
 @Namespace("torch::jit") public static native @ByVal JitModule import_ir_module(
-    @SharedPtr CompilationUnit cu,
+    @SharedPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit cu,
     @ByVal @Cast("std::shared_ptr*") Pointer reader,
     @SharedPtr DeserializationStorageContext storage_context,
     @ByVal DeviceOptional device,
     @StdString BytePointer ts_id);
 @Namespace("torch::jit") public static native @ByVal JitModule import_ir_module(
-    @SharedPtr CompilationUnit cu,
+    @SharedPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit cu,
     @ByVal @Cast("std::shared_ptr*") Pointer reader,
     @SharedPtr DeserializationStorageContext storage_context,
     @ByVal DeviceOptional device,
     @StdString String ts_id);
 
 @Namespace("torch::jit") public static native @ByVal JitModule import_ir_module(
-    @SharedPtr CompilationUnit cu,
+    @SharedPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit cu,
     @Cast("std::istream*") @ByRef Pointer in,
     @ByVal DeviceOptional device,
     @ByRef ExtraFilesMap extra_files,
     @Cast("bool") boolean load_debug_files/*=true*/,
     @Cast("bool") boolean restore_shapes/*=false*/);
 @Namespace("torch::jit") public static native @ByVal JitModule import_ir_module(
-    @SharedPtr CompilationUnit cu,
+    @SharedPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit cu,
     @Cast("std::istream*") @ByRef Pointer in,
     @ByVal DeviceOptional device,
     @ByRef ExtraFilesMap extra_files);
 
 @Namespace("torch::jit") public static native @ByVal JitModule import_ir_module(
-    @SharedPtr CompilationUnit cu,
+    @SharedPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit cu,
     @UniquePtr ReadAdapterInterface rai,
     @ByVal DeviceOptional device,
     @ByRef ExtraFilesMap extra_files,
     @Cast("bool") boolean load_debug_files/*=true*/);
 @Namespace("torch::jit") public static native @ByVal JitModule import_ir_module(
-    @SharedPtr CompilationUnit cu,
+    @SharedPtr("torch::jit::CompilationUnit") @ByVal CompilationUnit cu,
     @UniquePtr ReadAdapterInterface rai,
     @ByVal DeviceOptional device,
     @ByRef ExtraFilesMap extra_files);
@@ -78572,7 +78652,7 @@ scalar_t sf(scalar_t x, scalar_t y)
  *  {@code torch::jit::ExportModule} in C++. */
 @Namespace("torch::jit") public static native @ByVal JitModule load(
     @Cast("std::istream*") @ByRef Pointer in,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device,
+    @ByVal(nullValue = "std::optional(c10::nullopt)") DeviceOptional device,
     @Cast("bool") boolean load_debug_files/*=true*/);
 @Namespace("torch::jit") public static native @ByVal JitModule load(
     @Cast("std::istream*") @ByRef Pointer in);
@@ -78596,13 +78676,13 @@ scalar_t sf(scalar_t x, scalar_t y)
  *  Python or {@code torch::jit::ExportModule} in C++. */
 @Namespace("torch::jit") public static native @ByVal JitModule load(
     @StdString BytePointer filename,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device,
+    @ByVal(nullValue = "std::optional(c10::nullopt)") DeviceOptional device,
     @Cast("bool") boolean load_debug_files/*=true*/);
 @Namespace("torch::jit") public static native @ByVal JitModule load(
     @StdString BytePointer filename);
 @Namespace("torch::jit") public static native @ByVal JitModule load(
     @StdString String filename,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device,
+    @ByVal(nullValue = "std::optional(c10::nullopt)") DeviceOptional device,
     @Cast("bool") boolean load_debug_files/*=true*/);
 @Namespace("torch::jit") public static native @ByVal JitModule load(
     @StdString String filename);
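The `load()` overloads above are the Java entry point for deserializing TorchScript archives, mirroring `torch::jit::load` in C++. A hedged sketch of the usual load-and-run flow, assuming the generated `torch.ones(long...)` factory overload and an archive exported with `torch.jit.save()` in Python; the file name and input shape are placeholders, not taken from this diff:

    import org.bytedeco.pytorch.*;
    import org.bytedeco.pytorch.global.torch;

    public class LoadAndRun {
        public static void main(String[] args) {
            // "model.pt" is a hypothetical TorchScript archive
            JitModule module = torch.load("model.pt");
            module.eval();

            IValueVector inputs = new IValueVector();
            inputs.push_back(new IValue(torch.ones(1, 3, 224, 224)));

            Tensor output = module.forward(inputs).toTensor();
            output.print();  // prints the tensor contents via at::Tensor::print()
        }
    }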
@@ -78635,7 +78715,7 @@ scalar_t sf(scalar_t x, scalar_t y)
  *  Python or {@code torch::jit::ExportModule} in C++. */
 @Namespace("torch::jit") public static native @ByVal JitModule load(
     @SharedPtr("caffe2::serialize::ReadAdapterInterface") @ByVal ReadAdapterInterface rai,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device,
+    @ByVal(nullValue = "std::optional(c10::nullopt)") DeviceOptional device,
     @Cast("bool") boolean load_debug_files/*=true*/);
 @Namespace("torch::jit") public static native @ByVal JitModule load(
     @SharedPtr("caffe2::serialize::ReadAdapterInterface") @ByVal ReadAdapterInterface rai);
@@ -78660,7 +78740,7 @@ scalar_t sf(scalar_t x, scalar_t y)
     @Cast("char*") @SharedPtr BytePointer data,
     @Cast("size_t") long size,
     @ByRef ExtraFilesMap extra_files,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") DeviceOptional device);
 @Namespace("torch::jit") public static native @ByVal JitModule parse_and_initialize_jit_module(
     @Cast("char*") @SharedPtr BytePointer data,
     @Cast("size_t") long size,
@@ -78669,7 +78749,7 @@ scalar_t sf(scalar_t x, scalar_t y)
     @Cast("char*") @SharedPtr ByteBuffer data,
     @Cast("size_t") long size,
     @ByRef ExtraFilesMap extra_files,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") DeviceOptional device);
 @Namespace("torch::jit") public static native @ByVal JitModule parse_and_initialize_jit_module(
     @Cast("char*") @SharedPtr ByteBuffer data,
     @Cast("size_t") long size,
@@ -78678,7 +78758,7 @@ scalar_t sf(scalar_t x, scalar_t y)
     @Cast("char*") @SharedPtr byte[] data,
     @Cast("size_t") long size,
     @ByRef ExtraFilesMap extra_files,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") DeviceOptional device);
 @Namespace("torch::jit") public static native @ByVal JitModule parse_and_initialize_jit_module(
     @Cast("char*") @SharedPtr byte[] data,
     @Cast("size_t") long size,
@@ -78687,14 +78767,14 @@ scalar_t sf(scalar_t x, scalar_t y)
 @Namespace("torch::jit") public static native @ByVal JitModule load_jit_module_from_file(
     @StdString BytePointer filename,
     @ByRef ExtraFilesMap extra_files,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") DeviceOptional device);
 @Namespace("torch::jit") public static native @ByVal JitModule load_jit_module_from_file(
     @StdString BytePointer filename,
     @ByRef ExtraFilesMap extra_files);
 @Namespace("torch::jit") public static native @ByVal JitModule load_jit_module_from_file(
     @StdString String filename,
     @ByRef ExtraFilesMap extra_files,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") DeviceOptional device);
 @Namespace("torch::jit") public static native @ByVal JitModule load_jit_module_from_file(
     @StdString String filename,
     @ByRef ExtraFilesMap extra_files);
@@ -78702,12 +78782,12 @@ scalar_t sf(scalar_t x, scalar_t y)
 @Namespace("torch::jit") public static native @ByVal JitModule load_jit_module_from_stream(
     @Cast("std::istream*") @ByRef Pointer in,
     @ByRef ExtraFilesMap extra_files,
-    @ByVal(nullValue = "c10::optional(c10::nullopt)") DeviceOptional device);
+    @ByVal(nullValue = "std::optional(c10::nullopt)") DeviceOptional device);
 @Namespace("torch::jit") public static native @ByVal JitModule load_jit_module_from_stream(
     @Cast("std::istream*") @ByRef Pointer in,
     @ByRef ExtraFilesMap extra_files);
 
-@Namespace("torch::jit") public static native @ByVal ObjPtr ObjLoaderFunc(
+@Namespace("torch::jit") public static native @IntrusivePtr("c10::ivalue::Object") @Cast({"", "c10::intrusive_ptr&"}) Obj ObjLoaderFunc(
     @Const @ByRef StrongTypePtr type,
     @ByVal IValue input);
 
@@ -78867,7 +78947,7 @@ scalar_t sf(scalar_t x, scalar_t y)
 
 // Dynamically obtain serialization function pairs
 // that require the corresponding backend.
-@Namespace("torch::jit") public static native @Cast("std::array >,at::COMPILE_TIME_MAX_DEVICE_TYPES>*") @ByRef PointerPairOptional GetBackendMetaSerialization();
+@Namespace("torch::jit") public static native @Cast("std::array >,at::COMPILE_TIME_MAX_DEVICE_TYPES>*") @ByRef PointerPairOptional GetBackendMetaSerialization();
 
 // Register function pointer of Tensor BackendMetadata for serialization.
 @Namespace("torch::jit") public static native void TensorBackendMetaRegistry(
@@ -78953,7 +79033,7 @@ scalar_t sf(scalar_t x, scalar_t y)
     @ByRef Graph graph,
     @ByVal NamedValueArrayRef args,
     @ByVal NamedValueArrayRef kwargs,
-    @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") NamedValueOptional self);
+    @Const @ByRef(nullValue = "std::optional(c10::nullopt)") NamedValueOptional self);
 @Namespace("torch::jit") public static native @ByVal MatchedSchema matchSchema(
     @Const @ByRef FunctionSchema schema,
     @Const @ByRef SourceRange loc,
@@ -78967,7 +79047,7 @@ scalar_t sf(scalar_t x, scalar_t y)
     @ByRef Graph graph,
     @ByVal NamedValueArrayRef args,
     @ByVal NamedValueArrayRef kwargs,
-    @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") NamedValueOptional self,
+    @Const @ByRef(nullValue = "std::optional(c10::nullopt)") NamedValueOptional self,
     @Cast("bool") boolean render_errors/*=false*/);
 @Namespace("torch::jit") public static native @ByVal SizeTMatchedSchemaPair matchSchemas(
     @Const @ByRef FunctionSchemaVector schemas,
@@ -78988,7 +79068,7 @@ scalar_t sf(scalar_t x, scalar_t y)
     @ByVal Symbol name,
     @ByVal NamedValueArrayRef args,
     @ByVal NamedValueArrayRef kwargs,
-    @Const @ByRef(nullValue = "c10::optional(c10::nullopt)") NamedValueOptional self);
+    @Const @ByRef(nullValue = "std::optional(c10::nullopt)") NamedValueOptional self);
 @Namespace("torch::jit") public static native Value emitBuiltinCall(
     @Const @ByRef SourceRange loc,
     @ByRef Graph graph,
@@ -79351,10 +79431,10 @@ scalar_t sf(scalar_t x, scalar_t y)
  *    print(values)
  * 
  *  \endrst */
-@Namespace("torch::jit") public static native @Cast("char*") @StdVector BytePointer pickle(
+@Namespace("torch::jit") public static native @ByVal @Cast("std::vector*") ByteVector pickle(
     @Const @ByRef IValue ivalue,
     TensorVector tensor_table/*=nullptr*/);
-@Namespace("torch::jit") public static native @Cast("char*") @StdVector BytePointer pickle(
+@Namespace("torch::jit") public static native @ByVal @Cast("std::vector*") ByteVector pickle(
     @Const @ByRef IValue ivalue);
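With 2.4.0 the `pickle()` overloads return a proper `ByteVector` instead of a raw `BytePointer`, which makes round-tripping the serialized bytes from Java more natural. A minimal sketch, assuming the usual `IValue` value constructors:

    import org.bytedeco.pytorch.ByteVector;
    import org.bytedeco.pytorch.IValue;
    import org.bytedeco.pytorch.global.torch;

    public class PickleDemo {
        public static void main(String[] args) {
            IValue value = new IValue(42L);           // scalar IValue
            ByteVector bytes = torch.pickle(value);   // wraps the returned std::vector of bytes
            System.out.println("pickled " + bytes.size() + " bytes");
        }
    }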
 
 /** Save a {@code torch::IValue} in a format that can be loaded by both
@@ -79514,6 +79594,796 @@ scalar_t sf(scalar_t x, scalar_t y)
 // #endif
 
 
+// Parsed from torch/csrc/distributed/c10d/Store.hpp
+
+// #pragma once
+
+// #include 
+// #include 
+// #include 
+// #include 
+
+// #include 
+// #include 
+
+// callback function will be given arguments (optional oldValue,
+// optional newValue)
+// Targeting ../Store.java
+
+
+// Targeting ../StoreTimeoutGuard.java
+
+
+
+ // namespace c10d
+
+
+// Parsed from torch/csrc/distributed/c10d/Types.hpp
+
+// #pragma once
+
+// #include 
+
+// #include 
+// #include 
+
+// #include 
+// #include 
+
+// #include 
+// #include 
+// Targeting ../_SupplementBase.java
+
+
+// Targeting ../NCCLPreMulSumSupplement.java
+
+
+// Targeting ../ReduceOp.java
+
+
+// Targeting ../BroadcastOptions.java
+
+
+// Targeting ../AllreduceOptions.java
+
+
+// Targeting ../AllreduceCoalescedOptions.java
+
+
+// Targeting ../ReduceOptions.java
+
+
+// Targeting ../AllgatherOptions.java
+
+
+// Targeting ../GatherOptions.java
+
+
+// Targeting ../ScatterOptions.java
+
+
+// Targeting ../ReduceScatterOptions.java
+
+
+// Targeting ../AllToAllOptions.java
+
+
+// Targeting ../BarrierOptions.java
+
+
+// Targeting ../DistributedBackendOptions.java
+
+
+
+ // namespace c10d
+
+
+// Parsed from torch/csrc/distributed/c10d/Utils.hpp
+
+// #pragma once
+
+// #include 
+// #include 
+// #include 
+// #include 
+// #include 
+
+// #ifdef _WIN32
+// #else
+// #include 
+// #include 
+// #include 
+// #include 
+// #include 
+// #endif
+
+// #include 
+
+// #include 
+// #include 
+// #include 
+// #include 
+// #include 
+
+@Namespace("c10d") public static native @Cast("size_t") long getTensorsNumel(@Const @ByRef TensorVector tensors);
+
+// Retrieve tensor shapes from a given tensor.
+@Namespace("c10d") public static native @ByVal TensorVector getTensorShapes(
+    @Const @ByRef TensorVector tensors);
+
+// Use -2 to represent unset state of env vars
+public static final int C10D_ENV_NOT_SET = -2;
+
+// #define WARN_ENV_VAR_ONCE(deprecated_env, new_env)
+//   TORCH_WARN_ONCE(
+//       "Environment variable " + deprecated_env + " is deprecated; use " +
+//       new_env + " instead");
+
+// Turns at::IntArrayRef into "(1, 2, 3, 4)".
+@Namespace("c10d") public static native @StdString BytePointer toString(@ByVal LongArrayRef l);
+@Namespace("c10d") public static native @StdString String toString(@ByVal @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... l);
+
+@Namespace("c10d") public static native @StdString BytePointer toString(Layout layout);
+
+@Namespace("c10d") public static native @ByVal StringVector split(
+    @Cast("char") byte separator,
+    @StdString BytePointer string);
+@Namespace("c10d") public static native @ByVal StringVector split(
+    @Cast("char") byte separator,
+    @StdString String string);
+
+@Namespace("c10d") public static native @StdString BytePointer getCvarString(
+    @Const @ByRef StringVector env,
+    @Cast("const char*") BytePointer def);
+@Namespace("c10d") public static native @StdString String getCvarString(
+    @Const @ByRef StringVector env,
+    String def);
+
+@Namespace("c10d") public static native int getCvarInt(@Const @ByRef StringVector env, int def);
+
+@Namespace("c10d") public static native @Cast("bool") boolean getCvarBool(@Const @ByRef StringVector env, @Cast("bool") boolean def);
+
+@Namespace("c10d") public static native void assertSameSizes(
+    @Const @ByRef LongArrayRef sizes,
+    @Const @ByRef TensorVector tensors);
+@Namespace("c10d") public static native void assertSameSizes(
+    @ByRef @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] sizes,
+    @Const @ByRef TensorVector tensors);
+
+@Namespace("c10d") public static native void assertSameSizeAndType(@Const @ByRef TensorVector tensors);
+
+@Namespace("c10d") public static native void assertTypeMatch(
+    @Const @ByRef StringConsumer fn,
+    @Const @ByRef TensorOptions options,
+    @Const @ByVal TensorArrayRef tensors,
+    @Cast("size_t") long index);
+@Namespace("c10d") public static native void assertTypeMatch(
+    @Const @ByRef StringConsumer fn,
+    @Const @ByRef TensorOptions options,
+    @Const @ByVal TensorVector tensors,
+    @Cast("size_t") long index);
+
+@Namespace("c10d") public static native void assertSizesMatch(
+    @Const @ByRef StringConsumer fn,
+    @Const @ByRef LongArrayRef sizes,
+    @Const @ByVal TensorArrayRef tensors,
+    @Cast("size_t") long index);
+@Namespace("c10d") public static native void assertSizesMatch(
+    @Const @ByRef StringConsumer fn,
+    @ByRef @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long[] sizes,
+    @Const @ByVal TensorVector tensors,
+    @Cast("size_t") long index);
+
+@Namespace("c10d") public static native void assertLayoutMatch(
+    @Const @ByRef StringConsumer fn,
+    Layout expected,
+    @Const @ByVal TensorArrayRef tensors,
+    @Cast("size_t") long index);
+@Namespace("c10d") public static native void assertLayoutMatch(
+    @Const @ByRef StringConsumer fn,
+    @Cast("c10::Layout") byte expected,
+    @Const @ByVal TensorVector tensors,
+    @Cast("size_t") long index);
+@Namespace("c10d") public static native void assertLayoutMatch(
+    @Const @ByRef StringConsumer fn,
+    Layout expected,
+    @Const @ByVal TensorVector tensors,
+    @Cast("size_t") long index);
+@Namespace("c10d") public static native void assertLayoutMatch(
+    @Const @ByRef StringConsumer fn,
+    @Cast("c10::Layout") byte expected,
+    @Const @ByVal TensorArrayRef tensors,
+    @Cast("size_t") long index);
+
+@Namespace("c10d") public static native void assertLayoutMatch(
+    @Const @ByRef StringConsumer fn,
+    @Const @ByVal TensorArrayRef tensors);
+@Namespace("c10d") public static native void assertLayoutMatch(
+    @Const @ByRef StringConsumer fn,
+    @Const @ByVal TensorVector tensors);
+
+@Namespace("c10d") public static native void assertNonEmpty(
+    @Const @ByRef StringConsumer fn,
+    @Const @ByVal TensorArrayRef tensors);
+@Namespace("c10d") public static native void assertNonEmpty(
+    @Const @ByRef StringConsumer fn,
+    @Const @ByVal TensorVector tensors);
+
+@Namespace("c10d") public static native void assertSingleElement(
+    @Const @ByRef StringConsumer fn,
+    @Const @ByVal TensorArrayRef tensors);
+@Namespace("c10d") public static native void assertSingleElement(
+    @Const @ByRef StringConsumer fn,
+    @Const @ByVal TensorVector tensors);
+
+@Namespace("c10d") public static native void assertSingleElementInput(
+    @Const @ByRef StringConsumer fn,
+    @Const @ByVal TensorArrayRef tensors);
+@Namespace("c10d") public static native void assertSingleElementInput(
+    @Const @ByRef StringConsumer fn,
+    @Const @ByVal TensorVector tensors);
+
+@Namespace("c10d") public static native void assertSingleElementOutput(
+    @Const @ByRef StringConsumer fn,
+    @Const @ByVal TensorArrayRef tensors);
+@Namespace("c10d") public static native void assertSingleElementOutput(
+    @Const @ByRef StringConsumer fn,
+    @Const @ByVal TensorVector tensors);
+
+@Namespace("c10d") public static native void assertRootRank(
+    @Const @ByRef StringConsumer fn,
+    @Cast("int64_t") long rank,
+    @Cast("int64_t") long size);
+
+@Namespace("c10d") public static native void assertRootTensor(
+    @Const @ByRef StringConsumer fn,
+    @Cast("int64_t") long rank,
+    @Cast("int64_t") long size);
+
+@Namespace("c10d") public static native void assertDense(
+    @Const @ByRef StringConsumer fn,
+    @Const @ByVal TensorArrayRef tensors);
+@Namespace("c10d") public static native void assertDense(
+    @Const @ByRef StringConsumer fn,
+    @Const @ByVal TensorVector tensors);
+
+@Namespace("c10d") public static native void assertCPU(
+    @Const @ByRef StringConsumer fn,
+    @Const @ByVal TensorArrayRef tensors);
+@Namespace("c10d") public static native void assertCPU(
+    @Const @ByRef StringConsumer fn,
+    @Const @ByVal TensorVector tensors);
+
+@Namespace("c10d") public static native void assertSameDevice(
+    @Const @ByRef StringConsumer fn,
+    @Const @ByVal TensorArrayRef tensors);
+@Namespace("c10d") public static native void assertSameDevice(
+    @Const @ByRef StringConsumer fn,
+    @Const @ByVal TensorVector tensors);
+
+@Namespace("c10d") public static native void assertTypeAndSizesMatch(
+    @Const @ByRef StringConsumer fn,
+    @Const @ByVal TensorArrayRef tensors,
+    @Const @ByRef TensorOptions options,
+    @Const @ByRef LongArrayRef sizes);
+@Namespace("c10d") public static native void assertTypeAndSizesMatch(
+    @Const @ByRef StringConsumer fn,
+    @Const @ByVal TensorVector tensors,
+    @Const @ByRef TensorOptions options,
+    @ByRef @Cast({"int64_t*", "c10::ArrayRef", "std::vector&"}) @StdVector("int64_t") long... sizes);
+
+@Namespace("c10d") public static native void assertTypeAndSizesMatch(
+    @Const @ByRef StringConsumer fn,
+    @Const @ByVal TensorArrayRef tensors);
+@Namespace("c10d") public static native void assertTypeAndSizesMatch(
+    @Const @ByRef StringConsumer fn,
+    @Const @ByVal TensorVector tensors);
+
+// Copied from ATen/core/functional.h.
+
+// Copied from torch/csrc/utils/tensor_flatten.h.
+@Namespace("c10d") public static native @ByVal Tensor flattenDenseTensors(@ByVal TensorArrayRef tensors);
+@Namespace("c10d") public static native @ByVal Tensor flattenDenseTensors(@ByVal TensorVector tensors);
+
+@Namespace("c10d") public static native @ByVal Tensor newLikeFlat(
+    @StdVector TensorVector tensors,
+    @Cast("size_t") long deviceIdx);
+
+@Namespace("c10d") public static native @ByVal Tensor newLikeFlat(@ByRef TensorVector tensors);
+
+@Namespace("c10d") public static native @Cast("std::vector*") @StdVector LongVector getSizes(
+    @Const @ByRef TensorVector tensors);
+
+@Namespace("c10d") public static native @StdVector IntPointer getDevices(@Const @ByRef TensorVector tensors);
+
+// For alltoall split size sanity check
+@Namespace("c10d") public static native void checkSplitSizes(
+    @Cast("const std::vector*") @ByRef LongVector split_sizes,
+    @Const @ByRef Tensor tensor,
+    int group_size);
+
+// Compute alltoall lengths and offsets, handling multi-dimension tensors
+
+// `errno` is only meaningful when it fails. E.g., a  successful `fork()` sets
+// `errno` to `EINVAL` in child process on some macos
+// (https://stackoverflow.com/a/20295079), and thus `errno` should really only
+// be inspected if an error occurred.
+//
+// `success_cond` is an expression used to check if an error has happend. So for
+// `fork()`, we can use `SYSCHECK(pid = fork(), pid != -1)`. The function output
+// is stored in variable `__output` and may be used in `success_cond`.
+// #ifdef _WIN32
+// #else
+// #define SYSCHECK(expr, success_cond)
+//   while (true) {
+//     auto __output = (expr);
+//     (void)__output;
+//     if (!(success_cond)) {
+//       if (errno == EINTR) {
+//         continue;
+//       } else if (errno == EAGAIN || errno == EWOULDBLOCK) {
+//         C10_THROW_ERROR(DistNetworkError, "Socket Timeout");
+//       } else {
+//         C10_THROW_ERROR(DistNetworkError, std::strerror(errno));
+//       }
+//     } else {
+//       break;
+//     }
+//   }
+// #endif
+
+// Most functions indicate error by returning `-1`. This is a helper macro for
+// this common case with `SYSCHECK`.
+// Since SOCKET_ERROR = -1 in MSVC, so also leverage SYSCHECK_ERR_RETURN_NEG1
+// #define SYSCHECK_ERR_RETURN_NEG1(expr) SYSCHECK(expr, __output != -1)
+
+
+
+// Send and receive
+
+// send a vector's length and data
+
+// receive a vector as sent in sendVector
+
+// this is only for convenience when sending rvalues
+
+// send a string's length and data
+@Namespace("c10d::tcputil") public static native void sendString(
+    int socket,
+    @StdString BytePointer str,
+    @Cast("bool") boolean moreData/*=false*/);
+@Namespace("c10d::tcputil") public static native void sendString(
+    int socket,
+    @StdString BytePointer str);
+@Namespace("c10d::tcputil") public static native void sendString(
+    int socket,
+    @StdString String str,
+    @Cast("bool") boolean moreData/*=false*/);
+@Namespace("c10d::tcputil") public static native void sendString(
+    int socket,
+    @StdString String str);
+
+// receive a string as sent in sendString
+@Namespace("c10d::tcputil") public static native @StdString BytePointer recvString(int socket);
+
+ // namespace tcputil
+ // namespace c10d
+
+
+// Parsed from torch/csrc/distributed/c10d/Work.hpp
+
+// #pragma once
+
+// #include 
+// #include 
+// #include 
+// #include 
+
+@Namespace("c10d") @MemberGetter public static native @Cast("const char*") BytePointer kSeqNumStoreKey();
+
+@Namespace("c10d") public enum OpType {
+  BROADCAST((byte)(0)),
+  ALLREDUCE((byte)(1)),
+  ALLREDUCE_COALESCED((byte)(2)),
+  REDUCE((byte)(3)),
+  ALLGATHER((byte)(4)),
+  _ALLGATHER_BASE((byte)(5)),
+  ALLGATHER_COALESCED((byte)(6)),
+  GATHER((byte)(7)),
+  SCATTER((byte)(8)),
+  REDUCE_SCATTER((byte)(9)),
+  ALLTOALL_BASE((byte)(10)),
+  ALLTOALL((byte)(11)),
+  SEND((byte)(12)),
+  RECV((byte)(13)),
+  RECVANYSOURCE((byte)(14)),
+  BARRIER((byte)(15)),
+  _REDUCE_SCATTER_BASE((byte)(16)),
+  COALESCED((byte)(17)),
+  _ALLREDUCE_SPARSE((byte)(18)),
+  UNKNOWN((byte)(100));
+
+    public final byte value;
+    private OpType(byte v) { this.value = v; }
+    private OpType(OpType e) { this.value = e.value; }
+    public OpType intern() { for (OpType e : values()) if (e.value == value) return e; return this; }
+    @Override public String toString() { return intern().name(); }
+}
+
+// Converts OpType to human readable string.
+@Namespace("c10d") public static native @StdString BytePointer opTypeToString(OpType opType);
+@Namespace("c10d") public static native @StdString String opTypeToString(@Cast("c10d::OpType") byte opType);
+
+// Whether or not an OP is an p2p op (SEND, RECV, RECVANYSOURCE)
+@Namespace("c10d") public static native @Cast("bool") boolean isP2POp(OpType opType, @Cast("bool") boolean batchP2P/*=false*/);
+@Namespace("c10d") public static native @Cast("bool") boolean isP2POp(OpType opType);
+@Namespace("c10d") public static native @Cast("bool") boolean isP2POp(@Cast("c10d::OpType") byte opType, @Cast("bool") boolean batchP2P/*=false*/);
+@Namespace("c10d") public static native @Cast("bool") boolean isP2POp(@Cast("c10d::OpType") byte opType);
+// Targeting ../Work.java
+
+
+// Targeting ../WorkInfo.java
+
+
+
+ // namespace c10d
+
+
+// Parsed from torch/csrc/distributed/c10d/debug.h
+
+// Copyright (c) Meta Platforms, Inc. and its affiliates.
+// All rights reserved.
+//
+// This source code is licensed under the BSD-style license found in the
+// LICENSE file in the root directory of this source tree.
+
+// #pragma once
+
+// #include 
+
+@Namespace("c10d") public enum DebugLevel { Off(0), Info(1), Detail(2);
+
+    public final int value;
+    private DebugLevel(int v) { this.value = v; }
+    private DebugLevel(DebugLevel e) { this.value = e.value; }
+    public DebugLevel intern() { for (DebugLevel e : values()) if (e.value == value) return e; return this; }
+    @Override public String toString() { return intern().name(); }
+}
+
+@Namespace("c10d") public static native void setDebugLevel(DebugLevel level);
+@Namespace("c10d") public static native void setDebugLevel(@Cast("c10d::DebugLevel") int level);
+
+// Sets the debug level based on the value of the `TORCH_DISTRIBUTED_DEBUG`
+// environment variable.
+@Namespace("c10d") public static native void setDebugLevelFromEnvironment();
+
+@Namespace("c10d") public static native @NoException(true) DebugLevel debug_level();
+
+ // namespace c10d
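A hedged sketch of toggling the distributed debug level through these bindings, equivalent to exporting `TORCH_DISTRIBUTED_DEBUG` before startup:

    import org.bytedeco.pytorch.global.torch;
    import org.bytedeco.pytorch.global.torch.DebugLevel;

    public class DebugLevelDemo {
        public static void main(String[] args) {
            torch.setDebugLevel(DebugLevel.Detail);      // set the level explicitly
            // torch.setDebugLevelFromEnvironment();     // or honor TORCH_DISTRIBUTED_DEBUG
            System.out.println("c10d debug level: " + torch.debug_level());
        }
    }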
+
+
+// Parsed from torch/csrc/distributed/c10d/Backend.hpp
+
+// #pragma once
+
+// #include 
+// #include 
+// #include 
+
+// #include 
+// #include 
+
+// #include 
+// #include 
+// #include 
+// #include 
+// Targeting ../DistributedBackend.java
+
+
+
+ // namespace c10d
+
+
+// Parsed from torch/csrc/distributed/c10d/ProcessGroup.hpp
+
+// #pragma once
+
+// #include 
+// #include 
+// #include 
+// #include 
+// #include 
+
+// #include 
+// #include 
+// #include 
+
+// #include 
+// *************************************************************************
+// PROCESS GROUP collective communication API IS BEING CHANGED BETWEEN
+// versions 1.7 and 1.8.
+// PLEASE DO NOT ADD ANY DEPENDENCIES.
+// SEE RFC: https://github.com/pytorch/pytorch/issues/39662
+// *************************************************************************
+// Targeting ../ProcessGroup.java
+
+
+
+ // namespace c10d
+
+
+// Parsed from torch/csrc/distributed/c10d/comm.hpp
+
+// #pragma once
+
+// #include 
+// #include 
+// #include 
+// #include 
+// #include 
+
+// Broadcast many tensors to all processes in the process group.
+@Namespace("c10d") public static native void broadcast_coalesced(
+    @IntrusivePtr("c10d::ProcessGroup") @Cast({"", "c10::intrusive_ptr&"}) ProcessGroup process_group,
+    @ByVal TensorArrayRef tensors,
+    @Cast("size_t") long buffer_size,
+    int rank/*=0*/);
+@Namespace("c10d") public static native void broadcast_coalesced(
+    @IntrusivePtr("c10d::ProcessGroup") @Cast({"", "c10::intrusive_ptr&"}) ProcessGroup process_group,
+    @ByVal TensorArrayRef tensors,
+    @Cast("size_t") long buffer_size);
+@Namespace("c10d") public static native void broadcast_coalesced(
+    @IntrusivePtr("c10d::ProcessGroup") @Cast({"", "c10::intrusive_ptr&"}) ProcessGroup process_group,
+    @ByVal TensorVector tensors,
+    @Cast("size_t") long buffer_size,
+    int rank/*=0*/);
+@Namespace("c10d") public static native void broadcast_coalesced(
+    @IntrusivePtr("c10d::ProcessGroup") @Cast({"", "c10::intrusive_ptr&"}) ProcessGroup process_group,
+    @ByVal TensorVector tensors,
+    @Cast("size_t") long buffer_size);
+// Targeting ../GradBucket.java
+
+
+// Targeting ../CommHookInterface.java
+
+
+// This helper function is called both by CppCommHookInterface below and inside
+// reducer.
+@Namespace("c10d::detail") public static native @ByVal Tensor parseCppCommHookResult(@Const @ByRef IValue result);
+
+// Targeting ../ProcessGroupCppCommHookInterface.java
+
+
+
+ // namespace c10d
+
+
+// Parsed from torch/csrc/distributed/c10d/default_comm_hooks.hpp
+
+// #pragma once
+
+// #include 
+// #include 
+
+@Namespace("c10d") public enum BuiltinCommHookType {
+  ALLREDUCE((byte)(1)),
+  FP16_COMPRESS((byte)(2));
+
+    public final byte value;
+    private BuiltinCommHookType(byte v) { this.value = v; }
+    private BuiltinCommHookType(BuiltinCommHookType e) { this.value = e.value; }
+    public BuiltinCommHookType intern() { for (BuiltinCommHookType e : values()) if (e.value == value) return e; return this; }
+    @Override public String toString() { return intern().name(); }
+}
+
+// Almost same as AllReduceCommHook, but without division inside the hook.
+// This enables the optimization of fusing copy and division and saves one scan
+// over all the input parameters, when no communication hook is provided by the
+// user. Only used internally and not released as a public built-in
+// communication hook.
+
+ // namespace c10d
+
+
+// Parsed from c10/util/ApproximateClock.h
+
+// Copyright 2023-present Facebook. All Rights Reserved.
+
+// #pragma once
+
+// #include 
+// #include 
+// #include 
+// #include 
+// #include 
+// #include 
+// #include 
+// #include 
+
+// #if defined(C10_IOS) && defined(C10_MOBILE)
+// #include  // for gettimeofday()
+// #endif
+
+// #if defined(__i386__) || defined(__x86_64__) || defined(__amd64__)
+// #define C10_RDTSC
+// #if defined(_MSC_VER)
+// #elif defined(__CUDACC__) || defined(__HIPCC__)
+// #elif defined(__clang__)
+// `__rdtsc` is available by default.
+// NB: This has to be first, because Clang will also define `__GNUC__`
+// #elif defined(__GNUC__)
+// #include 
+// #else
+// #undef C10_RDTSC
+// #endif
+// #endif
+
+@Namespace("c10") public static native @Cast("c10::time_t") long getTimeSinceEpoch();
+
+@Namespace("c10") public static native @Cast("c10::time_t") long getTime(@Cast("bool") boolean allow_monotonic/*=false*/);
+@Namespace("c10") public static native @Cast("c10::time_t") long getTime();
+
+// We often do not need to capture true wall times. If a fast mechanism such
+// as TSC is available we can use that instead and convert back to epoch time
+// during post processing. This greatly reduce the clock's contribution to
+// profiling.
+//   http://btorpey.github.io/blog/2014/02/18/clock-sources-in-linux/
+//   https://quick-bench.com/q/r8opkkGZSJMu9wM_XTbDouq-0Io
+// TODO: We should use
+// `https://github.com/google/benchmark/blob/main/src/cycleclock.h`
+// Targeting ../ApproximateClockToUnixTimeConverter.java
+
+
+
+ // namespace c10
+
+
+// Parsed from torch/csrc/distributed/c10d/reducer_timer.hpp
+
+// #pragma once
+// #include 
+// #include 
+@Namespace("c10d") @MemberGetter public static native int kUnsetTime();
+
+@Namespace("c10d") public static native @Cast("int64_t") long current_time_in_nanos();
+// Targeting ../Timer.java
+
+
+ // namespace c10d
+
+
+// Parsed from torch/csrc/distributed/c10d/reducer.hpp
+
+// #pragma once
+
+// #include 
+// #include 
+// #include 
+// #include 
+// #include 
+// #include 
+// #include 
+
+// #include 
+// #include 
+// #include 
+// #include 
+// #include 
+// #include 
+// #include 
+// #include 
+// #include 
+// #include 
+// #include 
+// #include 
+// #ifndef _WIN32
+// #include 
+// #endif
+
+@Namespace("c10d") @MemberGetter public static native int kDefaultFirstBucketBytes();
+@Namespace("c10d") @MemberGetter public static native int kDefaultBucketBytesCap();
+// Collect runtime stats once for every kDDPRuntimeLoggingSampleRate iterations.
+@Namespace("c10d") @MemberGetter public static native int kDDPRuntimeLoggingSampleRate();
+
+// Forward declaration
+// Targeting ../BucketAccumulator.java
+
+
+// Targeting ../Reducer.java
+
+
+
+// This is equivalent to take_tensors but returns indices into the
+// tensor list argument for bucket assignment. Also, it is aware
+// of device placement and will not allow buckets to span devices.
+// The index of tensors[i] assigned to bucket is tensor_indices[i],
+// when tensor_indices is empty, the index of tensors[i] assigned to
+// bucket is i.
+@Namespace("c10d") public static native @ByVal T_SizeTVectorVectorSizeTVector_T compute_bucket_assignment_by_size(
+    @Const @ByRef TensorVector tensors,
+    @Cast("const std::vector*") @ByRef SizeTVector bucket_size,
+    @Const @ByRef(nullValue = "std::vector{}") BoolVector expect_sparse_gradient,
+    @Cast("const std::vector*") @ByRef(nullValue = "std::vector{}") LongVector tensor_indices,
+    @Const @ByRef(nullValue = "std::optional >{}") LoggerOptional logger);
+@Namespace("c10d") public static native @ByVal T_SizeTVectorVectorSizeTVector_T compute_bucket_assignment_by_size(
+    @Const @ByRef TensorVector tensors,
+    @Cast("const std::vector*") @ByRef SizeTVector bucket_size);
+
+// Verify models across all processes are the same as model on rank 0 with
+// respect to no. of params and matching dtype/size/layout.
+@Namespace("c10d") public static native void verify_params_across_processes(
+    @IntrusivePtr("c10d::ProcessGroup") @Cast({"", "c10::intrusive_ptr&"}) ProcessGroup process_group,
+    @Const @ByRef TensorVector params,
+    @Const @ByRef LoggerOptional logger);
+ // namespace c10d
+
+
+// Parsed from torch/csrc/distributed/c10d/ProcessGroupGloo.hpp
+
+// #pragma once
+
+// #ifdef USE_C10D_GLOO
+
+// #include 
+// #include 
+// #include 
+// #include 
+// #include 
+
+// #include 
+// #include 
+// #include 
+// #include 
+// #include 
+
+// #include 
+
+// #include 
+// #include 
+// #include 
+// #include 
+
+@Namespace("c10d") @MemberGetter public static native @Cast("const char*") BytePointer GLOO_BACKEND_NAME();
+// Targeting ../ProcessGroupGloo.java
+
+
+
+ // namespace c10d
+
+// #endif // USE_C10D_GLOO
+
+
+// Parsed from torch/csrc/distributed/c10d/PrefixStore.hpp
+
+// #pragma once
+
+// #include 
+// Targeting ../PrefixStore.java
+
+
+
+ // namespace c10d
+
+
+// Parsed from torch/csrc/distributed/c10d/logger.hpp
+
+// #include 
+// #include 
+
+// #include 
+// Targeting ../Logger.java
+
+
+// Targeting ../C10dLoggingData.java
+
+
+// Targeting ../C10dLogger.java
+
+
+
+ // namespace c10d
+
+
 // Parsed from datasets.h
 
 /*
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/global/torch_cuda.java b/pytorch/src/gen/java/org/bytedeco/pytorch/global/torch_cuda.java
index d2b49d9ed6c..204a1b4fd91 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/global/torch_cuda.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/global/torch_cuda.java
@@ -4,12 +4,6 @@
 
 import org.bytedeco.pytorch.cuda.*;
 
-import org.bytedeco.pytorch.*;
-import org.bytedeco.pytorch.cuda.functions.*;
-import org.bytedeco.pytorch.Error;
-import org.bytedeco.pytorch.global.torch.DeviceType;
-import org.bytedeco.pytorch.global.torch.ScalarType;
-import org.bytedeco.pytorch.global.torch.MemoryFormat;
 import org.bytedeco.pytorch.Allocator;
 import java.nio.*;
 import org.bytedeco.javacpp.*;
@@ -18,8 +12,22 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 import org.bytedeco.pytorch.*;
 import static org.bytedeco.pytorch.global.torch.*;
+import org.bytedeco.cuda.cudart.*;
+import static org.bytedeco.cuda.global.cudart.*;
+import org.bytedeco.cuda.cublas.*;
+import static org.bytedeco.cuda.global.cublas.*;
+import org.bytedeco.cuda.cudnn.*;
+import static org.bytedeco.cuda.global.cudnn.*;
+import org.bytedeco.cuda.cusparse.*;
+import static org.bytedeco.cuda.global.cusparse.*;
+import org.bytedeco.cuda.cusolver.*;
+import static org.bytedeco.cuda.global.cusolver.*;
+import org.bytedeco.cuda.cupti.*;
+import static org.bytedeco.cuda.global.cupti.*;
 
 public class torch_cuda extends org.bytedeco.pytorch.presets.torch_cuda {
     static { Loader.load(); }
@@ -110,240 +118,79 @@ public class torch_cuda extends org.bytedeco.pytorch.presets.torch_cuda {
  // namespace c10
 
 
-// Parsed from ATen/cuda/CUDAContextLight.h
+// Parsed from ATen/cudnn/cudnn-wrapper.h
 
 // #pragma once
-// Light-weight version of CUDAContext.h with fewer transitive includes
-
-// #include 
-
-// #include 
-// #include 
-// #include 
-
-// cublasLT was introduced in CUDA 10.1 but we enable only for 11.1 that also
-// added bf16 support
-// #if (!defined(USE_ROCM) && !defined(_MSC_VER)) || (defined(USE_ROCM) && ROCM_VERSION >= 50700)
-// #include 
-// #endif
-
-// #ifdef CUDART_VERSION
-// #include 
-// #endif
-
-// #if defined(USE_ROCM) && ROCM_VERSION >= 50300
-// #include 
-// #endif
-
-// #include 
-// #include 
-
-
-/*
-A common CUDA interface for ATen.
-
-This interface is distinct from CUDAHooks, which defines an interface that links
-to both CPU-only and CUDA builds. That interface is intended for runtime
-dispatch and should be used from files that are included in both CPU-only and
-CUDA builds.
-
-CUDAContext, on the other hand, should be preferred by files only included in
-CUDA builds. It is intended to expose CUDA functionality in a consistent
-manner.
-
-This means there is some overlap between the CUDAContext and CUDAHooks, but
-the choice of which to use is simple: use CUDAContext when in a CUDA-only file,
-use CUDAHooks otherwise.
-
-Note that CUDAContext simply defines an interface with no associated class.
-It is expected that the modules whose functions compose this interface will
-manage their own state. There is only a single CUDA context/state.
-*/
-
-/**
- * DEPRECATED: use device_count() instead
- */
-@Namespace("at::cuda") public static native @Cast("int64_t") long getNumGPUs();
-
-/**
- * CUDA is available if we compiled with CUDA, and there are one or more
- * devices.  If we compiled with CUDA but there is a driver problem, etc.,
- * this function will report CUDA is not available (rather than raise an error.)
- */
-@Namespace("at::cuda") public static native @Cast("bool") boolean is_available();
-
-@Namespace("at::cuda") public static native Pointer getCurrentDeviceProperties();
-
-@Namespace("at::cuda") public static native int warp_size();
-
-@Namespace("at::cuda") public static native Pointer getDeviceProperties(byte device);
-
-@Namespace("at::cuda") public static native @Cast("bool") boolean canDeviceAccessPeer(
-    byte device,
-    byte peer_device);
-
-@Namespace("at::cuda") public static native Allocator getCUDADeviceAllocator();
-
-/* Handles */
-@Namespace("at::cuda") public static native @Cast("cusparseHandle_t") Pointer getCurrentCUDASparseHandle();
-@Namespace("at::cuda") public static native @Cast("cublasHandle_t") Pointer getCurrentCUDABlasHandle();
-// #if (!defined(USE_ROCM) && !defined(_MSC_VER)) || (defined(USE_ROCM) && ROCM_VERSION >= 50700)
 
-// #endif
+// #include 
 
-@Namespace("at::cuda") public static native void clearCublasWorkspaces();
+// #define STRINGIFY(x) #x
+// #define STRING(x) STRINGIFY(x)
 
-// #if defined(CUDART_VERSION) || defined(USE_ROCM) && ROCM_VERSION >= 50300
-@Namespace("at::cuda") public static native @Cast("cusolverDnHandle_t") Pointer getCurrentCUDASolverDnHandle();
+// #if CUDNN_MAJOR < 6
+// #pragma message ("CuDNN v" STRING(CUDNN_MAJOR) " found, but need at least CuDNN v6. You can get the latest version of CuDNN from https://developer.nvidia.com/cudnn or disable CuDNN with USE_CUDNN=0")
+// #pragma message "We strongly encourage you to move to 6.0 and above."
+// #pragma message "This message is intended to annoy you enough to update."
 // #endif
 
- // namespace at::cuda
+// #undef STRINGIFY
+// #undef STRING
 
 
-// Parsed from c10/cuda/CUDAStream.h
+// Parsed from c10/core/impl/GPUTrace.h
 
 // #pragma once
 
-// #include 
-// #include 
-
-// #include 
-
-// #include 
-// #include 
-// #include 
-// #include 
-
-/*
- * Stream pool note.
- *
- * A CUDAStream is an abstraction of an actual cuStream on the GPU. CUDAStreams
- * are backed by cuStreams, but they use several pools to minimize the costs
- * associated with creating, retaining, and destroying cuStreams.
- *
- * There are three pools per device, and a device's pools are lazily created.
- *
- * The first pool contains only the default stream. When the default stream
- * is requested it's returned.
- *
- * The second pool is the "low priority" or "default priority" streams. In
- * HIP builds there is no distinction between streams in this pool and streams
- * in the third pool (below). There are 32 of these streams per device, and
- * when a stream is requested one of these streams is returned round-robin.
- * That is, the first stream requested is at index 0, the second at index 1...
- * to index 31, then index 0 again.
- *
- * This means that if 33 low priority streams are requested, the first and
- * last streams requested are actually the same stream (under the covers)
- * and kernels enqueued on them cannot run concurrently.
- *
- * The third pool is the "high priority" streams. The third pool acts like
- * the second pool except the streams are created with a higher priority.
- *
- * These pools suggest that stream users should prefer many short-lived streams,
- * as the cost of acquiring and releasing streams is effectively zero. If
- * many longer-lived streams are required in performance critical scenarios
- * then the functionality here may need to be extended to allow, for example,
- * "reserving" a subset of the pool so that other streams do not accidentally
- * overlap the performance critical streams.
- *
- * Note: although the notion of "current stream for device" is thread local
- * (every OS thread has a separate current stream, as one might expect),
- * the stream pool is global across all threads; stream 0 is always stream 0
- * no matter which thread you use it on.  Multiple threads can synchronize
- * on the same stream.  Although the CUDA documentation is not very clear
- * on the matter, streams are thread safe; e.g., it is safe to enqueue
- * a kernel on the same stream from two different threads.
- */
-
-@Namespace("c10::cuda") @MemberGetter public static native int max_compile_time_stream_priorities();
-public static final int max_compile_time_stream_priorities = max_compile_time_stream_priorities();
-// Targeting ../cuda/CUDAStream.java
-
-
-
-/**
- * Get a new stream from the CUDA stream pool.  You can think of this
- * as "creating" a new stream, but no such creation actually happens;
- * instead, streams are preallocated from the pool and returned in a
- * round-robin fashion.
- *
- * You can request a stream from the high priority pool by setting
- * isHighPriority to true, or a stream for a specific device by setting device
- * (defaulting to the current CUDA stream.)
- */
-@Namespace("c10::cuda") public static native @ByVal CUDAStream getStreamFromPool(@Cast("const bool") boolean isHighPriority/*=false*/, byte device/*=-1*/);
-@Namespace("c10::cuda") public static native @ByVal CUDAStream getStreamFromPool();
-// no default priority to disambiguate overloads
-@Namespace("c10::cuda") public static native @ByVal CUDAStream getStreamFromPool(int priority, byte device/*=-1*/);
-@Namespace("c10::cuda") public static native @ByVal CUDAStream getStreamFromPool(int priority);
-
-/**
- * Get a CUDAStream from a externally allocated one.
- *
- * This is mainly for interoperability with different libraries where we
- * want to operate on a non-torch allocated stream for data exchange or similar
- * purposes
- */
-@Namespace("c10::cuda") public static native @ByVal CUDAStream getStreamFromExternal(@Cast("cudaStream_t") Pointer ext_stream, byte device_index);
-
-/**
- * Get the default CUDA stream, for the passed CUDA device, or for the
- * current device if no device index is passed.  The default stream is
- * where most computation occurs when you aren't explicitly using
- * streams.
- */
-@Namespace("c10::cuda") public static native @ByVal CUDAStream getDefaultCUDAStream(byte device_index/*=-1*/);
-@Namespace("c10::cuda") public static native @ByVal CUDAStream getDefaultCUDAStream();
-
-/**
- * Get the current CUDA stream, for the passed CUDA device, or for the
- * current device if no device index is passed.  The current CUDA stream
- * will usually be the default CUDA stream for the device, but it may
- * be different if someone called 'setCurrentCUDAStream' or used 'StreamGuard'
- * or 'CUDAStreamGuard'.
- */
-@Namespace("c10::cuda") public static native @ByVal CUDAStream getCurrentCUDAStream(byte device_index/*=-1*/);
-@Namespace("c10::cuda") public static native @ByVal CUDAStream getCurrentCUDAStream();
-
-/**
- * Set the current stream on the device of the passed in stream to be
- * the passed in stream.  Yes, you read that right: this function
- * has *nothing* to do with the current device: it toggles the current
- * stream of the device of the passed stream.
- *
- * Confused?  Avoid using this function; prefer using 'CUDAStreamGuard' instead
- * (which will switch both your current device and current stream in the way you
- * expect, and reset it back to its original state afterwards).
- */
-@Namespace("c10::cuda") public static native void setCurrentCUDAStream(@ByVal CUDAStream stream);
-
-@Namespace("c10::cuda") public static native @Cast("std::ostream*") @ByRef @Name("operator <<") Pointer shiftLeft(@Cast("std::ostream*") @ByRef Pointer stream, @Const @ByRef CUDAStream s);
+// #include 
 
- // namespace c10::cuda
- // namespace std
+ // namespace c10::impl
 
 
-// Parsed from ATen/cuda/CUDAContext.h
+// Parsed from c10/cuda/CUDAMacros.h
 
 // #pragma once
 
-// #include 
+// #ifndef C10_USING_CUSTOM_GENERATED_MACROS
 
-// Preserved for BC, as many files depend on these includes
-// #include 
-// #include 
-// #include 
-// #include 
+// We have not yet modified the AMD HIP build to generate this file so
+// we add an extra option to specifically ignore it.
+// #ifndef C10_CUDA_NO_CMAKE_CONFIGURE_FILE
+// #include 
+// #endif // C10_CUDA_NO_CMAKE_CONFIGURE_FILE
 
+// #endif
 
-// Parsed from c10/core/impl/GPUTrace.h
+// See c10/macros/Export.h for a detailed explanation of what the function
+// of these macros is.  We need one set of macros for every separate library
+// we build.
 
-// #pragma once
+// #ifdef _WIN32
+// #else // _WIN32
+// #if defined(__GNUC__)
+// #define C10_CUDA_EXPORT __attribute__((__visibility__("default")))
+// #else // defined(__GNUC__)
+// #define C10_CUDA_EXPORT
+// #endif // defined(__GNUC__)
+// #define C10_CUDA_IMPORT C10_CUDA_EXPORT
+// #endif // _WIN32
 
-// #include 
+// This one is being used by libc10_cuda.so
+// #ifdef C10_CUDA_BUILD_MAIN_LIB
+// #define C10_CUDA_API C10_CUDA_EXPORT
+// #else
+// #define C10_CUDA_API C10_CUDA_IMPORT
+// #endif
 
- // namespace c10::impl
+/**
+ * The maximum number of GPUs that we recognize. Increasing this beyond the
+ * initial limit of 16 broke Caffe2 testing, hence the ifdef guards.
+ * This value cannot be more than 128 because our DeviceIndex is a uint8_t.
+ */
+// #ifdef FBCODE_CAFFE2
+// fbcode depends on this value being 16
+public static final int C10_COMPILE_TIME_MAX_GPUS = 16;
+// #else
+// #endif
 
 
 // Parsed from c10/cuda/CUDADeviceAssertionHost.h
@@ -396,63 +243,6 @@ public class torch_cuda extends org.bytedeco.pytorch.presets.torch_cuda {
 // #define TORCH_DSA_KERNEL_ARGS_PASS assertions_data, assertion_caller_id
 
 
-// Parsed from c10/cuda/CUDAMacros.h
-
-// #pragma once
-
-// #ifndef C10_USING_CUSTOM_GENERATED_MACROS
-
-// We have not yet modified the AMD HIP build to generate this file so
-// we add an extra option to specifically ignore it.
-// #ifndef C10_CUDA_NO_CMAKE_CONFIGURE_FILE
-// #include 
-// #endif // C10_CUDA_NO_CMAKE_CONFIGURE_FILE
-
-// #endif
-
-// See c10/macros/Export.h for a detailed explanation of what the function
-// of these macros are.  We need one set of macros for every separate library
-// we build.
-
-// #ifdef _WIN32
-// #else // _WIN32
-// #if defined(__GNUC__)
-// #define C10_CUDA_EXPORT __attribute__((__visibility__("default")))
-// #else // defined(__GNUC__)
-// #define C10_CUDA_EXPORT
-// #endif // defined(__GNUC__)
-// #define C10_CUDA_IMPORT C10_CUDA_EXPORT
-// #endif // _WIN32
-
-// This one is being used by libc10_cuda.so
-// #ifdef C10_CUDA_BUILD_MAIN_LIB
-// #define C10_CUDA_API C10_CUDA_EXPORT
-// #else
-// #define C10_CUDA_API C10_CUDA_IMPORT
-// #endif
-
-/**
- * The maximum number of GPUs that we recognizes. Increasing this beyond the
- * initial limit of 16 broke Caffe2 testing, hence the ifdef guards.
- * This value cannot be more than 128 because our DeviceIndex is a uint8_t.
-o */
-// #ifdef FBCODE_CAFFE2
-// fbcode depends on this value being 16
-public static final int C10_COMPILE_TIME_MAX_GPUS = 16;
-// #else
-// #endif
-
-
-// Parsed from c10/cuda/impl/cuda_cmake_macros.h
-
-// #pragma once
-
-// Automatically generated header file for the C10 CUDA library.  Do not
-// include this file directly.  Instead, include c10/cuda/CUDAMacros.h
-
-// #define C10_CUDA_BUILD_SHARED_LIBS
-
-
 // Parsed from c10/cuda/CUDAMiscFunctions.h
 
 // #pragma once
@@ -477,9 +267,17 @@ public class torch_cuda extends org.bytedeco.pytorch.presets.torch_cuda {
 // #include 
 // #include 
 // #include 
-// Targeting ../cuda/CUDAError.java
 
+// Note [CHECK macro]
+// ~~~~~~~~~~~~~~~~~~
+// This is a macro so that AT_ERROR can get accurate __LINE__
+// and __FILE__ information.  We could split this into a short
+// macro and a function implementation if we pass along __LINE__
+// and __FILE__, but no one has found this worth doing.
 
+// Used to denote errors from the CUDA framework.
+// This needs to be declared here instead of in util/Exception.h for proper
+// conversion during hipify.
  // namespace c10
 
 // #define C10_CUDA_CHECK(EXPR)
@@ -611,40 +409,245 @@ public class torch_cuda extends org.bytedeco.pytorch.presets.torch_cuda {
 
 @Namespace("c10::cuda") public static native @Cast("cudaError_t") int MaybeSetDevice(byte device);
 
-@Namespace("c10::cuda") public static native byte ExchangeDevice(byte device);
+@Namespace("c10::cuda") public static native byte ExchangeDevice(byte device);
+
+@Namespace("c10::cuda") public static native byte MaybeExchangeDevice(byte device);
+
+@Namespace("c10::cuda") public static native void SetTargetDevice();
+
+@Namespace("c10::cuda") public enum SyncDebugMode { L_DISABLED(0), L_WARN(1), L_ERROR(2);
+
+    public final int value;
+    private SyncDebugMode(int v) { this.value = v; }
+    private SyncDebugMode(SyncDebugMode e) { this.value = e.value; }
+    public SyncDebugMode intern() { for (SyncDebugMode e : values()) if (e.value == value) return e; return this; }
+    @Override public String toString() { return intern().name(); }
+}
+// Targeting ../cuda/WarningState.java
+
+
+
+@Namespace("c10::cuda") public static native @ByRef WarningState warning_state();
+// the subsequent functions are defined in the header because for performance
+// reasons we want them to be inline
+@Namespace("c10::cuda") public static native void memcpy_and_sync(
+    Pointer dst,
+    @Const Pointer src,
+    @Cast("int64_t") long nbytes,
+    @Cast("cudaMemcpyKind") int kind,
+    CUstream_st stream);
+
+@Namespace("c10::cuda") public static native void stream_synchronize(CUstream_st stream);
+
+@Namespace("c10::cuda") public static native @Cast("bool") boolean hasPrimaryContext(byte device_index);
+@Namespace("c10::cuda") public static native @ByVal ByteOptional getDeviceIndexWithPrimaryContext();
+
+ // namespace c10::cuda
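
For orientation, here is a minimal sketch (not part of the generated bindings) of calling the primary-context helpers above through the torch_cuda global class; the has_value()/get() accessors on ByteOptional are assumed to follow the usual optional-wrapper convention of these presets.

import org.bytedeco.pytorch.ByteOptional;
import static org.bytedeco.pytorch.global.torch_cuda.*;

public class PrimaryContextProbe {
    public static void main(String[] args) {
        // Check whether device 0 already has a primary CUDA context.
        boolean active = hasPrimaryContext((byte) 0);
        System.out.println("device 0 primary context: " + active);

        // Ask for any device index that currently holds a primary context.
        ByteOptional idx = getDeviceIndexWithPrimaryContext();
        if (idx.has_value()) {                 // assumed accessor names on the optional wrapper
            System.out.println("primary context on device " + idx.get());
        }
    }
}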
+
+
+// Parsed from ATen/cuda/CUDAContextLight.h
+
+// #pragma once
+// Light-weight version of CUDAContext.h with fewer transitive includes
+
+// #include 
+
+// #include 
+// #include 
+// #include 
+
+// cublasLT was introduced in CUDA 10.1, but we only enable it for CUDA 11.1+,
+// which also added bf16 support
+// #include 
+
+// #ifdef CUDART_VERSION
+// #include 
+// #endif
+
+// #if defined(USE_ROCM)
+// #endif
+
+// #include 
+// #include 
+
+
+/*
+A common CUDA interface for ATen.
+
+This interface is distinct from CUDAHooks, which defines an interface that links
+to both CPU-only and CUDA builds. That interface is intended for runtime
+dispatch and should be used from files that are included in both CPU-only and
+CUDA builds.
+
+CUDAContext, on the other hand, should be preferred by files only included in
+CUDA builds. It is intended to expose CUDA functionality in a consistent
+manner.
+
+This means there is some overlap between the CUDAContext and CUDAHooks, but
+the choice of which to use is simple: use CUDAContext when in a CUDA-only file,
+use CUDAHooks otherwise.
+
+Note that CUDAContext simply defines an interface with no associated class.
+It is expected that the modules whose functions compose this interface will
+manage their own state. There is only a single CUDA context/state.
+*/
+
+/**
+ * DEPRECATED: use device_count() instead
+ */
+@Namespace("at::cuda") public static native @Cast("int64_t") long getNumGPUs();
+
+/**
+ * CUDA is available if we compiled with CUDA, and there are one or more
+ * devices.  If we compiled with CUDA but there is a driver problem, etc.,
+ * this function will report that CUDA is not available (rather than raise an error).
+ */
+@Namespace("at::cuda") public static native @Cast("bool") boolean is_available();
+
+@Namespace("at::cuda") public static native cudaDeviceProp getCurrentDeviceProperties();
+
+@Namespace("at::cuda") public static native int warp_size();
+
+@Namespace("at::cuda") public static native cudaDeviceProp getDeviceProperties(byte device);
+
+@Namespace("at::cuda") public static native @Cast("bool") boolean canDeviceAccessPeer(
+    byte device,
+    byte peer_device);
+
+@Namespace("at::cuda") public static native Allocator getCUDADeviceAllocator();
+
+/* Handles */
+@Namespace("at::cuda") public static native cusparseContext getCurrentCUDASparseHandle();
+@Namespace("at::cuda") public static native cublasContext getCurrentCUDABlasHandle();
+
+
+@Namespace("at::cuda") public static native void clearCublasWorkspaces();
+
+// #if defined(CUDART_VERSION) || defined(USE_ROCM)
+@Namespace("at::cuda") public static native cusolverDnContext getCurrentCUDASolverDnHandle();
+// #endif
+
+ // namespace at::cuda
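
As a quick orientation for these at::cuda entry points, a hedged sketch of querying device state through the bindings follows; the torch_cuda global class and the cudaDeviceProp mapping from the CUDA presets are assumed, and only functions shown in this diff are called.

import org.bytedeco.cuda.cudart.cudaDeviceProp;   // assumed mapping from the CUDA presets
import static org.bytedeco.pytorch.global.torch_cuda.*;

public class CudaDeviceInfo {
    public static void main(String[] args) {
        if (!is_available()) {             // false when compiled with CUDA but no usable device/driver
            System.out.println("CUDA not available");
            return;
        }
        System.out.println("devices: " + getNumGPUs());   // deprecated, kept for BC
        System.out.println("warp size: " + warp_size());

        cudaDeviceProp prop = getCurrentDeviceProperties();  // properties of the current device
        System.out.println("got properties: " + !prop.isNull());

        // Peer access query between devices 0 and 1 (only meaningful with 2+ devices).
        boolean peer = canDeviceAccessPeer((byte) 0, (byte) 1);
        System.out.println("0 -> 1 peer access: " + peer);
    }
}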
+
+
+// Parsed from c10/cuda/CUDAStream.h
+
+// #pragma once
+
+// #include 
+
+// #include 
+// #include 
+// #include 
+// #include 
+
+/*
+ * Stream pool note.
+ *
+ * A CUDAStream is an abstraction of an actual cuStream on the GPU. CUDAStreams
+ * are backed by cuStreams, but they use several pools to minimize the costs
+ * associated with creating, retaining, and destroying cuStreams.
+ *
+ * There are three pools per device, and a device's pools are lazily created.
+ *
+ * The first pool contains only the default stream. When the default stream
+ * is requested it's returned.
+ *
+ * The second pool is the "low priority" or "default priority" streams. In
+ * HIP builds there is no distinction between streams in this pool and streams
+ * in the third pool (below). There are 32 of these streams per device, and
+ * when a stream is requested one of these streams is returned round-robin.
+ * That is, the first stream requested is at index 0, the second at index 1...
+ * to index 31, then index 0 again.
+ *
+ * This means that if 33 low priority streams are requested, the first and
+ * last streams requested are actually the same stream (under the covers)
+ * and kernels enqueued on them cannot run concurrently.
+ *
+ * The third pool is the "high priority" streams. The third pool acts like
+ * the second pool except the streams are created with a higher priority.
+ *
+ * These pools suggest that stream users should prefer many short-lived streams,
+ * as the cost of acquiring and releasing streams is effectively zero. If
+ * many longer-lived streams are required in performance critical scenarios
+ * then the functionality here may need to be extended to allow, for example,
+ * "reserving" a subset of the pool so that other streams do not accidentally
+ * overlap the performance critical streams.
+ *
+ * Note: although the notion of "current stream for device" is thread local
+ * (every OS thread has a separate current stream, as one might expect),
+ * the stream pool is global across all threads; stream 0 is always stream 0
+ * no matter which thread you use it on.  Multiple threads can synchronize
+ * on the same stream.  Although the CUDA documentation is not very clear
+ * on the matter, streams are thread safe; e.g., it is safe to enqueue
+ * a kernel on the same stream from two different threads.
+ */
+
 
-@Namespace("c10::cuda") public static native byte MaybeExchangeDevice(byte device);
+// Targeting ../cuda/CUDAStream.java
 
-@Namespace("c10::cuda") public static native void SetTargetDevice();
 
-@Namespace("c10::cuda") public enum SyncDebugMode { L_DISABLED(0), L_WARN(1), L_ERROR(2);
 
-    public final int value;
-    private SyncDebugMode(int v) { this.value = v; }
-    private SyncDebugMode(SyncDebugMode e) { this.value = e.value; }
-    public SyncDebugMode intern() { for (SyncDebugMode e : values()) if (e.value == value) return e; return this; }
-    @Override public String toString() { return intern().name(); }
-}
-// Targeting ../cuda/WarningState.java
+/**
+ * Get a new stream from the CUDA stream pool.  You can think of this
+ * as "creating" a new stream, but no such creation actually happens;
+ * instead, streams are preallocated from the pool and returned in a
+ * round-robin fashion.
+ *
+ * You can request a stream from the high priority pool by setting
+ * isHighPriority to true, or a stream for a specific device by setting device
+ * (defaulting to the current CUDA device).
+ */
+@Namespace("c10::cuda") public static native @ByVal CUDAStream getStreamFromPool(@Cast("const bool") boolean isHighPriority/*=false*/, byte device/*=-1*/);
+@Namespace("c10::cuda") public static native @ByVal CUDAStream getStreamFromPool();
+// no default priority to disambiguate overloads
+@Namespace("c10::cuda") public static native @ByVal CUDAStream getStreamFromPool(int priority, byte device/*=-1*/);
+@Namespace("c10::cuda") public static native @ByVal CUDAStream getStreamFromPool(int priority);
 
+/**
+ * Get a CUDAStream from an externally allocated one.
+ *
+ * This is mainly for interoperability with different libraries where we
+ * want to operate on a non-torch-allocated stream for data exchange or similar
+ * purposes.
+ */
+@Namespace("c10::cuda") public static native @ByVal CUDAStream getStreamFromExternal(CUstream_st ext_stream, byte device_index);
 
+/**
+ * Get the default CUDA stream, for the passed CUDA device, or for the
+ * current device if no device index is passed.  The default stream is
+ * where most computation occurs when you aren't explicitly using
+ * streams.
+ */
+@Namespace("c10::cuda") public static native @ByVal CUDAStream getDefaultCUDAStream(byte device_index/*=-1*/);
+@Namespace("c10::cuda") public static native @ByVal CUDAStream getDefaultCUDAStream();
 
-@Namespace("c10::cuda") public static native @ByRef WarningState warning_state();
-// the subsequent functions are defined in the header because for performance
-// reasons we want them to be inline
-@Namespace("c10::cuda") public static native void memcpy_and_sync(
-    Pointer dst,
-    @Const Pointer src,
-    @Cast("int64_t") long nbytes,
-    @Cast("cudaMemcpyKind") int kind,
-    @Cast("cudaStream_t") Pointer stream);
+/**
+ * Get the current CUDA stream, for the passed CUDA device, or for the
+ * current device if no device index is passed.  The current CUDA stream
+ * will usually be the default CUDA stream for the device, but it may
+ * be different if someone called 'setCurrentCUDAStream' or used 'StreamGuard'
+ * or 'CUDAStreamGuard'.
+ */
+@Namespace("c10::cuda") public static native @ByVal CUDAStream getCurrentCUDAStream(byte device_index/*=-1*/);
+@Namespace("c10::cuda") public static native @ByVal CUDAStream getCurrentCUDAStream();
 
-@Namespace("c10::cuda") public static native void stream_synchronize(@Cast("cudaStream_t") Pointer stream);
+/**
+ * Set the current stream on the device of the passed in stream to be
+ * the passed in stream.  Yes, you read that right: this function
+ * has *nothing* to do with the current device: it toggles the current
+ * stream of the device of the passed stream.
+ *
+ * Confused?  Avoid using this function; prefer using 'CUDAStreamGuard' instead
+ * (which will switch both your current device and current stream in the way you
+ * expect, and reset it back to its original state afterwards).
+ */
+@Namespace("c10::cuda") public static native void setCurrentCUDAStream(@ByVal CUDAStream stream);
 
-@Namespace("c10::cuda") public static native @Cast("bool") boolean hasPrimaryContext(byte device_index);
-@Namespace("c10::cuda") public static native @ByVal ByteOptional getDeviceIndexWithPrimaryContext();
+@Namespace("c10::cuda") public static native @Cast("std::ostream*") @ByRef @Name("operator <<") Pointer shiftLeft(@Cast("std::ostream*") @ByRef Pointer stream, @Const @ByRef CUDAStream s);
 
  // namespace c10::cuda
+ // namespace std
 
 
 // Parsed from ATen/cuda/Exceptions.h
@@ -662,9 +665,6 @@ public class torch_cuda extends org.bytedeco.pytorch.presets.torch_cuda {
 // #include 
 // #include 
 // #include 
-// Targeting ../cuda/CuDNNError.java
-
-
 
   // namespace c10
 
@@ -813,23 +813,17 @@ public class torch_cuda extends org.bytedeco.pytorch.presets.torch_cuda {
 //   } while (0)
 
 
-// Parsed from ATen/cudnn/cudnn-wrapper.h
+// Parsed from ATen/cuda/CUDAContext.h
 
 // #pragma once
 
-// #include 
-
-// #define STRINGIFY(x) #x
-// #define STRING(x) STRINGIFY(x)
-
-// #if CUDNN_MAJOR < 6
-// #pragma message ("CuDNN v" STRING(CUDNN_MAJOR) " found, but need at least CuDNN v6. You can get the latest version of CuDNN from https://developer.nvidia.com/cudnn or disable CuDNN with USE_CUDNN=0")
-// #pragma message "We strongly encourage you to move to 6.0 and above."
-// #pragma message "This message is intended to annoy you enough to update."
-// #endif
+// #include 
 
-// #undef STRINGIFY
-// #undef STRING
+// Preserved for BC, as many files depend on these includes
+// #include 
+// #include 
+// #include 
+// #include 
 
 
 // Parsed from ATen/cuda/ATenCUDAGeneral.h
@@ -845,6 +839,17 @@ public class torch_cuda extends org.bytedeco.pytorch.presets.torch_cuda {
 // Use TORCH_CUDA_CPP_API or TORCH_CUDA_CU_API for exports from this folder
 
 
+// Parsed from ATen/cudnn/Handle.h
+
+// #pragma once
+
+// #include 
+// #include 
+
+@Namespace("at::native") public static native cudnnContext getCudnnHandle();
+ // namespace at::native
+
+
 // Parsed from ATen/cudnn/Utils.h
 
 // #pragma once
@@ -863,17 +868,6 @@ public class torch_cuda extends org.bytedeco.pytorch.presets.torch_cuda {
 
 
 
-// Parsed from ATen/cudnn/Handle.h
-
-// #pragma once
-
-// #include 
-// #include 
-
-@Namespace("at::native") public static native @Cast("cudnnHandle_t") Pointer getCudnnHandle();
- // namespace at::native
-
-
 // Parsed from c10/cuda/CUDAGraphsC10Utils.h
 
 // #pragma once
@@ -890,12 +884,9 @@ public class torch_cuda extends org.bytedeco.pytorch.presets.torch_cuda {
 // Targeting ../cuda/CUDAStreamCaptureModeGuard.java
 
 
-// #endif
 
-// #if !defined(USE_ROCM) || ROCM_VERSION >= 50300
 // Protects against enum cudaStreamCaptureStatus implementation changes.
 // Some compilers seem not to like static_assert without the messages.
-// #endif
 
 
 
@@ -907,59 +898,6 @@ public class torch_cuda extends org.bytedeco.pytorch.presets.torch_cuda {
  // namespace c10::cuda
 
 
-// Parsed from c10/util/ApproximateClock.h
-
-// Copyright 2023-present Facebook. All Rights Reserved.
-
-// #pragma once
-
-// #include 
-// #include 
-// #include 
-// #include 
-// #include 
-// #include 
-// #include 
-// #include 
-
-// #if defined(C10_IOS) && defined(C10_MOBILE)
-// #include  // for gettimeofday()
-// #endif
-
-// #if defined(__i386__) || defined(__x86_64__) || defined(__amd64__)
-// #define C10_RDTSC
-// #if defined(_MSC_VER)
-// #elif defined(__CUDACC__) || defined(__HIPCC__)
-// #elif defined(__clang__)
-// `__rdtsc` is available by default.
-// NB: This has to be first, because Clang will also define `__GNUC__`
-// #elif defined(__GNUC__)
-// #include 
-// #else
-// #undef C10_RDTSC
-// #endif
-// #endif
-
-@Namespace("c10") public static native @Cast("c10::time_t") long getTimeSinceEpoch();
-
-@Namespace("c10") public static native @Cast("c10::time_t") long getTime(@Cast("bool") boolean allow_monotonic/*=false*/);
-@Namespace("c10") public static native @Cast("c10::time_t") long getTime();
-
-// We often do not need to capture true wall times. If a fast mechanism such
-// as TSC is available we can use that instead and convert back to epoch time
-// during post processing. This greatly reduce the clock's contribution to
-// profiling.
-//   http://btorpey.github.io/blog/2014/02/18/clock-sources-in-linux/
-//   https://quick-bench.com/q/r8opkkGZSJMu9wM_XTbDouq-0Io
-// TODO: We should use
-// `https://github.com/google/benchmark/blob/main/src/cycleclock.h`
-// Targeting ../cuda/ApproximateClockToUnixTimeConverter.java
-
-
-
- // namespace c10
-
-
 // Parsed from c10/cuda/CUDACachingAllocator.h
 
 // #pragma once
@@ -1163,6 +1101,49 @@ public class torch_cuda extends org.bytedeco.pytorch.presets.torch_cuda {
  // namespace c10::cuda::impl
 
 
+// Parsed from c10/cuda/CUDAGuard.h
+
+// #pragma once
+
+// #include 
+// #include 
+// #include 
+// #include 
+// #include 
+// Targeting ../cuda/CUDAGuard.java
+
+
+
+/** A variant of OptionalDeviceGuard that is specialized for CUDA.  See
+ *  CUDAGuard for when you can use this. */
+// Targeting ../cuda/CUDAStreamGuard.java
+
+
+
+/** A variant of OptionalStreamGuard that is specialized for CUDA.  See
+ *  CUDAGuard for when you can use this. */
+// Targeting ../cuda/CUDAMultiStreamGuard.java
+
+
+
+ // namespace c10::cuda
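
To illustrate the guard classes targeted above, a hypothetical sketch assuming CUDAGuard exposes a device-index constructor mirroring the C++ explicit CUDAGuard(DeviceIndex), which is not shown in this diff, and that closing the Pointer runs the guard's destructor.

import org.bytedeco.pytorch.cuda.CUDAGuard;

public class GuardExample {
    public static void main(String[] args) {
        // Assumed constructor: pin device 1 for the lifetime of the guard.
        try (CUDAGuard guard = new CUDAGuard((byte) 1)) {
            // ... run work on device 1 ...
        }   // assumed: deallocating the guard restores the previous device
    }
}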
+
+
+// Parsed from ATen/cudnn/Types.h
+
+// #pragma once
+
+// #include 
+// #include 
+
+@Namespace("at::native") public static native @Cast("cudnnDataType_t") int getCudnnDataTypeFromScalarType(ScalarType dtype);
+
+
+
+
+  // namespace at::cudnn
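
For completeness, a one-liner sketch of the cuDNN type mapping above; ScalarType is assumed to be the enum from the core torch bindings, and the returned int corresponds to a cudnnDataType_t value.

import org.bytedeco.pytorch.global.torch.ScalarType;
import static org.bytedeco.pytorch.global.torch_cuda.*;

public class CudnnTypeExample {
    public static void main(String[] args) {
        int dt = getCudnnDataTypeFromScalarType(ScalarType.Float);  // e.g. the value of CUDNN_DATA_FLOAT
        System.out.println("cudnnDataType_t value: " + dt);
    }
}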
+
+
 // Parsed from ATen/cudnn/Descriptors.h
 
 // #pragma once
@@ -1243,49 +1224,27 @@ public class torch_cuda extends org.bytedeco.pytorch.presets.torch_cuda {
   // namespace
 
 
-// Parsed from ATen/cudnn/Types.h
-
-// #pragma once
-
-// #include 
-// #include 
-
-@Namespace("at::native") public static native @Cast("cudnnDataType_t") int getCudnnDataTypeFromScalarType(ScalarType dtype);
-
-
-
-
-  // namespace at::cudnn
-
-
-// Parsed from c10/cuda/CUDAGuard.h
+// Parsed from ATen/cuda/CUDAEvent.h
 
 // #pragma once
 
-// #include 
-// #include 
-// #include 
-// #include 
-// #include 
-
-// #include 
-// Targeting ../cuda/CUDAGuard.java
-
-
-
-/** A variant of OptionalDeviceGuard that is specialized for CUDA.  See
- *  CUDAGuard for when you can use this. */
-// Targeting ../cuda/CUDAStreamGuard.java
-
+// #include 
+// #include 
+// #include 
+// #include 
+// #include 
+// #include 
+// #include 
 
+// #include 
 
-/** A variant of OptionalStreamGuard that is specialized for CUDA.  See
- *  CUDAGuard for when you can use this. */
-// Targeting ../cuda/CUDAMultiStreamGuard.java
+// #include 
+// #include 
+// Targeting ../cuda/CUDAEvent.java
 
 
 
- // namespace c10::cuda
+ // namespace at::cuda
 
 
 // Parsed from torch/csrc/inductor/aoti_runner/model_container_runner_cuda.h
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/gloo/Address.java b/pytorch/src/gen/java/org/bytedeco/pytorch/gloo/Address.java
new file mode 100644
index 00000000000..8199c3ef721
--- /dev/null
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/gloo/Address.java
@@ -0,0 +1,31 @@
+// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE
+
+package org.bytedeco.pytorch.gloo;
+
+import java.nio.*;
+import org.bytedeco.javacpp.*;
+import org.bytedeco.javacpp.annotation.*;
+
+import static org.bytedeco.javacpp.presets.javacpp.*;
+import static org.bytedeco.openblas.global.openblas_nolapack.*;
+import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
+import org.bytedeco.pytorch.*;
+import static org.bytedeco.pytorch.global.torch.*;
+
+import static org.bytedeco.pytorch.global.gloo.*;
+
+
+@Namespace("gloo::transport") @Properties(inherit = org.bytedeco.pytorch.presets.gloo.class)
+public class Address extends Pointer {
+    static { Loader.load(); }
+    /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */
+    public Address(Pointer p) { super(p); }
+
+  // Upper bound for an address' byte representation.
+
+  public native @StdString BytePointer str();
+
+  public native @ByVal @Cast("std::vector*") ByteVector bytes();
+}
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/OutOfMemoryError.java b/pytorch/src/gen/java/org/bytedeco/pytorch/gloo/Algorithm.java
similarity index 54%
rename from pytorch/src/gen/java/org/bytedeco/pytorch/OutOfMemoryError.java
rename to pytorch/src/gen/java/org/bytedeco/pytorch/gloo/Algorithm.java
index 12d5a6198a5..475ac38cc51 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/OutOfMemoryError.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/gloo/Algorithm.java
@@ -1,12 +1,7 @@
 // Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE
 
-package org.bytedeco.pytorch;
+package org.bytedeco.pytorch.gloo;
 
-import org.bytedeco.pytorch.Allocator;
-import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
-import org.bytedeco.pytorch.Module;
-import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
 import org.bytedeco.javacpp.*;
 import org.bytedeco.javacpp.annotation.*;
@@ -14,14 +9,20 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
-
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
+import org.bytedeco.pytorch.*;
 import static org.bytedeco.pytorch.global.torch.*;
 
+import static org.bytedeco.pytorch.global.gloo.*;
+
 
-@Namespace("c10") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
-public class OutOfMemoryError extends Error {
+@Namespace("gloo") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.gloo.class)
+public class Algorithm extends Pointer {
     static { Loader.load(); }
     /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */
-    public OutOfMemoryError(Pointer p) { super(p); }
+    public Algorithm(Pointer p) { super(p); }
+
 
+  public native void run();
 }
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/gloo/Buffer.java b/pytorch/src/gen/java/org/bytedeco/pytorch/gloo/Buffer.java
new file mode 100644
index 00000000000..45c3cb82f66
--- /dev/null
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/gloo/Buffer.java
@@ -0,0 +1,37 @@
+// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE
+
+package org.bytedeco.pytorch.gloo;
+
+import java.nio.*;
+import org.bytedeco.javacpp.*;
+import org.bytedeco.javacpp.annotation.*;
+
+import static org.bytedeco.javacpp.presets.javacpp.*;
+import static org.bytedeco.openblas.global.openblas_nolapack.*;
+import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
+import org.bytedeco.pytorch.*;
+import static org.bytedeco.pytorch.global.torch.*;
+
+import static org.bytedeco.pytorch.global.gloo.*;
+
+
+@Namespace("gloo::transport") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.gloo.class)
+public class Buffer extends Pointer {
+    static { Loader.load(); }
+    /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */
+    public Buffer(Pointer p) { super(p); }
+
+
+  public native void setDebug(@Cast("bool") boolean debug);
+
+  public native void send(@Cast("size_t") long offset, @Cast("size_t") long length, @Cast("size_t") long roffset/*=0*/);
+  public native void send(@Cast("size_t") long offset, @Cast("size_t") long length);
+
+  // Send entire buffer by default
+  public native void send();
+
+  public native void waitRecv();
+  public native void waitSend();
+}
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/gloo/Device.java b/pytorch/src/gen/java/org/bytedeco/pytorch/gloo/Device.java
new file mode 100644
index 00000000000..80bdee3301a
--- /dev/null
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/gloo/Device.java
@@ -0,0 +1,48 @@
+// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE
+
+package org.bytedeco.pytorch.gloo;
+
+import java.nio.*;
+import org.bytedeco.javacpp.*;
+import org.bytedeco.javacpp.annotation.*;
+
+import static org.bytedeco.javacpp.presets.javacpp.*;
+import static org.bytedeco.openblas.global.openblas_nolapack.*;
+import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
+import org.bytedeco.pytorch.*;
+import static org.bytedeco.pytorch.global.torch.*;
+
+import static org.bytedeco.pytorch.global.gloo.*;
+
+
+// The device abstraction can be considered as a factory for all
+// communication pairs. A communication pair can be associated with
+// send and receive buffers. Send buffers serve as the source for one
+// sided writes and receive buffers serve as the target of one sided
+// writes. Both ends of the pair can create either type of buffer, as
+// long as it is paired with the opposite type on the remote end of
+// the pair; every receive buffer must be paired with a corresponding
+// send buffer and vice versa. The device abstraction may start a
+// background thread to handle I/O multiplexing (not configurable).
+@Namespace("gloo::transport") @Properties(inherit = org.bytedeco.pytorch.presets.gloo.class)
+public class Device extends Pointer {
+    static { Loader.load(); }
+    /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */
+    public Device(Pointer p) { super(p); }
+
+
+  public native @StdString BytePointer str();
+
+  public native @StdString BytePointer getPCIBusID();
+
+  public native int getInterfaceSpeed();
+
+  public native @Cast("bool") boolean hasGPUDirect();
+
+  // Factory function to create a transport context. A single device may
+  // service multiple contexts, with no constraints on this process's
+  // rank or the context size.
+  public native @SharedPtr("gloo::transport::Context") @ByVal TransportContext createContext(int rank, int size);
+}
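
To sketch how the Device factory is meant to be used from Java: the concrete transport device (for example a TCP device) comes from transport-specific factory functions that are not part of this diff and are assumed to exist; only the methods shown above are called.

import org.bytedeco.pytorch.gloo.Device;
import org.bytedeco.pytorch.gloo.TransportContext;

public class DeviceContextSketch {
    // `device` is assumed to have been created elsewhere by a transport-specific factory.
    static TransportContext makeContext(Device device, int rank, int size) {
        System.out.println("transport: " + device.str().getString());
        // One device can service multiple contexts; rank/size describe this process group.
        return device.createContext(rank, size);
    }
}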
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/gloo/IStore.java b/pytorch/src/gen/java/org/bytedeco/pytorch/gloo/IStore.java
new file mode 100644
index 00000000000..1eac153f5f5
--- /dev/null
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/gloo/IStore.java
@@ -0,0 +1,36 @@
+// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE
+
+package org.bytedeco.pytorch.gloo;
+
+import java.nio.*;
+import org.bytedeco.javacpp.*;
+import org.bytedeco.javacpp.annotation.*;
+
+import static org.bytedeco.javacpp.presets.javacpp.*;
+import static org.bytedeco.openblas.global.openblas_nolapack.*;
+import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
+import org.bytedeco.pytorch.*;
+import static org.bytedeco.pytorch.global.torch.*;
+
+import static org.bytedeco.pytorch.global.gloo.*;
+
+
+@Namespace("gloo") @Properties(inherit = org.bytedeco.pytorch.presets.gloo.class)
+public class IStore extends Pointer {
+    static { Loader.load(); }
+    /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */
+    public IStore(Pointer p) { super(p); }
+
+
+  public native void set(@StdString BytePointer key, @Cast("const std::vector*") @ByRef ByteVector data);
+  public native void set(@StdString String key, @Cast("const std::vector*") @ByRef ByteVector data);
+
+  public native @ByVal @Cast("std::vector*") ByteVector get(@StdString BytePointer key);
+  public native @ByVal @Cast("std::vector*") ByteVector get(@StdString String key);
+
+  public native @Name("wait") void _wait(
+      @Const @ByRef StringVector keys,
+      @Const @ByRef Milliseconds timeout);
+}
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/gloo/Pair.java b/pytorch/src/gen/java/org/bytedeco/pytorch/gloo/Pair.java
new file mode 100644
index 00000000000..2cfa0b3668d
--- /dev/null
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/gloo/Pair.java
@@ -0,0 +1,66 @@
+// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE
+
+package org.bytedeco.pytorch.gloo;
+
+import java.nio.*;
+import org.bytedeco.javacpp.*;
+import org.bytedeco.javacpp.annotation.*;
+
+import static org.bytedeco.javacpp.presets.javacpp.*;
+import static org.bytedeco.openblas.global.openblas_nolapack.*;
+import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
+import org.bytedeco.pytorch.*;
+import static org.bytedeco.pytorch.global.torch.*;
+
+import static org.bytedeco.pytorch.global.gloo.*;
+
+
+@Namespace("gloo::transport") @Properties(inherit = org.bytedeco.pytorch.presets.gloo.class)
+public class Pair extends Pointer {
+    static { Loader.load(); }
+    /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */
+    public Pair(Pointer p) { super(p); }
+
+
+  public native @Const @ByRef @Name("address") Address _address();
+
+  public native void connect(@Cast("const std::vector*") @ByRef ByteVector bytes);
+
+  public native @Name("close") void _close();
+
+  public native void setSync(@Cast("bool") boolean enable, @Cast("bool") boolean busyPoll);
+
+  public native @UniquePtr Buffer createSendBuffer(int slot, Pointer ptr, @Cast("size_t") long size);
+
+  public native @UniquePtr Buffer createRecvBuffer(int slot, Pointer ptr, @Cast("size_t") long size);
+
+  // Send from the specified buffer to remote side of pair.
+  public native void send(
+        UnboundBuffer buf,
+        @Cast("uint64_t") long tag,
+        @Cast("size_t") long offset/*=0*/,
+        @Cast("size_t") long nbytes/*=0*/);
+  public native void send(
+        UnboundBuffer buf,
+        @Cast("uint64_t") long tag);
+
+  // Receive into the specified buffer from the remote side of pair.
+  public native void recv(
+        UnboundBuffer buf,
+        @Cast("uint64_t") long tag,
+        @Cast("size_t") long offset/*=0*/,
+        @Cast("size_t") long nbytes/*=0*/);
+  public native void recv(
+        UnboundBuffer buf,
+        @Cast("uint64_t") long tag);
+
+  // Sets the local rank of the process to be localRank
+  // (See below for description of local rank)
+  public native void setLocalRank(int localRank);
+
+  // Returns the local rank of the process
+  // (See below for description of local rank)
+  public native int getLocalRank();
+}
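
A hedged sketch of the bound-buffer flow on a connected Pair, using only the methods shown above; obtaining and connecting the Pair itself is transport-specific and omitted.

import org.bytedeco.javacpp.FloatPointer;
import org.bytedeco.pytorch.gloo.Buffer;
import org.bytedeco.pytorch.gloo.Pair;

public class PairBufferSketch {
    static void exchange(Pair pair) {
        int slot = 0;
        FloatPointer sendData = new FloatPointer(1024);   // native memory to send from
        FloatPointer recvData = new FloatPointer(1024);   // native memory to receive into

        Buffer send = pair.createSendBuffer(slot, sendData, 1024 * Float.BYTES);
        Buffer recv = pair.createRecvBuffer(slot, recvData, 1024 * Float.BYTES);

        send.send();       // send the entire buffer to the remote side
        send.waitSend();   // block until the send has completed
        recv.waitRecv();   // block until the remote side has written into recvData
    }
}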
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/gloo/ReductionFunctionFloat.java b/pytorch/src/gen/java/org/bytedeco/pytorch/gloo/ReductionFunctionFloat.java
new file mode 100644
index 00000000000..ea178dfe915
--- /dev/null
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/gloo/ReductionFunctionFloat.java
@@ -0,0 +1,50 @@
+// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE
+
+package org.bytedeco.pytorch.gloo;
+
+import java.nio.*;
+import org.bytedeco.javacpp.*;
+import org.bytedeco.javacpp.annotation.*;
+
+import static org.bytedeco.javacpp.presets.javacpp.*;
+import static org.bytedeco.openblas.global.openblas_nolapack.*;
+import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
+import org.bytedeco.pytorch.*;
+import static org.bytedeco.pytorch.global.torch.*;
+
+import static org.bytedeco.pytorch.global.gloo.*;
+
+
+@Name("gloo::ReductionFunction") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.gloo.class)
+public class ReductionFunctionFloat extends Pointer {
+    static { Loader.load(); }
+    /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */
+    public ReductionFunctionFloat(Pointer p) { super(p); }
+
+  public static class Function extends FunctionPointer {
+      static { Loader.load(); }
+      /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */
+      public    Function(Pointer p) { super(p); }
+      protected Function() { allocate(); }
+      private native void allocate();
+      public native void call(FloatPointer arg0, @Const FloatPointer arg1, @Cast("size_t") long n);
+  }
+
+  
+  
+  
+  
+
+  public ReductionFunctionFloat(ReductionType type, Function fn) { super((Pointer)null); allocate(type, fn); }
+  private native void allocate(ReductionType type, Function fn);
+  public ReductionFunctionFloat(@Cast("gloo::ReductionType") int type, Function fn) { super((Pointer)null); allocate(type, fn); }
+  private native void allocate(@Cast("gloo::ReductionType") int type, Function fn);
+
+  public native ReductionType type();
+
+  public native void call(FloatPointer x, @Const FloatPointer y, @Cast("size_t") long n);
+  public native void call(FloatBuffer x, @Const FloatBuffer y, @Cast("size_t") long n);
+  public native void call(float[] x, @Const float[] y, @Cast("size_t") long n);
+}
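
As an illustration of the callback mapping above, a sketch that defines a sum reduction in Java by subclassing the generated FunctionPointer, following the usual JavaCPP callback pattern; the ReductionType.SUM constant is assumed to exist in the generated enum, mirroring gloo's ReductionType.

import org.bytedeco.javacpp.FloatPointer;
import org.bytedeco.pytorch.gloo.*;                      // ReductionType location assumed
import static org.bytedeco.pytorch.global.gloo.*;

public class SumReduction {
    public static void main(String[] args) {
        ReductionFunctionFloat.Function fn = new ReductionFunctionFloat.Function() {
            @Override public void call(FloatPointer x, FloatPointer y, long n) {
                // x[i] += y[i] for i in [0, n)
                for (long i = 0; i < n; i++) {
                    x.put(i, x.get(i) + y.get(i));
                }
            }
        };
        ReductionFunctionFloat sum = new ReductionFunctionFloat(ReductionType.SUM, fn);  // SUM is an assumption
        System.out.println("reduction type: " + sum.type());
    }
}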
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/gloo/ReductionFunctionInt.java b/pytorch/src/gen/java/org/bytedeco/pytorch/gloo/ReductionFunctionInt.java
new file mode 100644
index 00000000000..dd7e0a48f27
--- /dev/null
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/gloo/ReductionFunctionInt.java
@@ -0,0 +1,50 @@
+// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE
+
+package org.bytedeco.pytorch.gloo;
+
+import java.nio.*;
+import org.bytedeco.javacpp.*;
+import org.bytedeco.javacpp.annotation.*;
+
+import static org.bytedeco.javacpp.presets.javacpp.*;
+import static org.bytedeco.openblas.global.openblas_nolapack.*;
+import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
+import org.bytedeco.pytorch.*;
+import static org.bytedeco.pytorch.global.torch.*;
+
+import static org.bytedeco.pytorch.global.gloo.*;
+
+
+@Name("gloo::ReductionFunction") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.gloo.class)
+public class ReductionFunctionInt extends Pointer {
+    static { Loader.load(); }
+    /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */
+    public ReductionFunctionInt(Pointer p) { super(p); }
+
+  public static class Function extends FunctionPointer {
+      static { Loader.load(); }
+      /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */
+      public    Function(Pointer p) { super(p); }
+      protected Function() { allocate(); }
+      private native void allocate();
+      public native void call(IntPointer arg0, @Const IntPointer arg1, @Cast("size_t") long n);
+  }
+
+  
+  
+  
+  
+
+  public ReductionFunctionInt(ReductionType type, Function fn) { super((Pointer)null); allocate(type, fn); }
+  private native void allocate(ReductionType type, Function fn);
+  public ReductionFunctionInt(@Cast("gloo::ReductionType") int type, Function fn) { super((Pointer)null); allocate(type, fn); }
+  private native void allocate(@Cast("gloo::ReductionType") int type, Function fn);
+
+  public native ReductionType type();
+
+  public native void call(IntPointer x, @Const IntPointer y, @Cast("size_t") long n);
+  public native void call(IntBuffer x, @Const IntBuffer y, @Cast("size_t") long n);
+  public native void call(int[] x, @Const int[] y, @Cast("size_t") long n);
+}
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/gloo/Store.java b/pytorch/src/gen/java/org/bytedeco/pytorch/gloo/Store.java
new file mode 100644
index 00000000000..65d3efe0234
--- /dev/null
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/gloo/Store.java
@@ -0,0 +1,53 @@
+// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE
+
+package org.bytedeco.pytorch.gloo;
+
+import java.nio.*;
+import org.bytedeco.javacpp.*;
+import org.bytedeco.javacpp.annotation.*;
+
+import static org.bytedeco.javacpp.presets.javacpp.*;
+import static org.bytedeco.openblas.global.openblas_nolapack.*;
+import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
+import org.bytedeco.pytorch.*;
+import static org.bytedeco.pytorch.global.torch.*;
+
+import static org.bytedeco.pytorch.global.gloo.*;
+
+
+@Namespace("gloo::rendezvous") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.gloo.class)
+public class Store extends IStore {
+    static { Loader.load(); }
+    /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */
+    public Store(Pointer p) { super(p); }
+
+  @MemberGetter public static native @Const @ByRef Milliseconds kDefaultTimeout();
+
+  public native void set(@StdString BytePointer key, @Cast("const std::vector*") @ByRef ByteVector data);
+  public native void set(@StdString String key, @Cast("const std::vector*") @ByRef ByteVector data);
+
+  public native @ByVal @Cast("std::vector*") ByteVector get(@StdString BytePointer key);
+  public native @ByVal @Cast("std::vector*") ByteVector get(@StdString String key);
+
+  public native @Name("wait") void _wait(
+        @Const @ByRef StringVector keys);
+
+  public native @Name("wait") void _wait(
+        @Const @ByRef StringVector keys,
+        @Const @ByRef Milliseconds arg1);
+
+  public native @Cast("bool") boolean has_v2_support();
+
+  public native @Cast("std::vector*") @StdVector ByteVector multi_get(@Const @ByRef StringVector arg0);
+
+  public native void multi_set(@Const @ByRef StringVector arg0, @Cast("std::vector*") @StdVector ByteVector arg1);
+
+  public native void append(@StdString BytePointer key, @Cast("const std::vector*") @ByRef ByteVector arg1);
+  public native void append(@StdString String key, @Cast("const std::vector*") @ByRef ByteVector arg1);
+
+  public native @Cast("int64_t") long add(@StdString BytePointer key, @Cast("int64_t") long value);
+  public native @Cast("int64_t") long add(@StdString String key, @Cast("int64_t") long value);
+
+}
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/gloo/TransportContext.java b/pytorch/src/gen/java/org/bytedeco/pytorch/gloo/TransportContext.java
new file mode 100644
index 00000000000..391f08edf8a
--- /dev/null
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/gloo/TransportContext.java
@@ -0,0 +1,57 @@
+// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE
+
+package org.bytedeco.pytorch.gloo;
+
+import java.nio.*;
+import org.bytedeco.javacpp.*;
+import org.bytedeco.javacpp.annotation.*;
+
+import static org.bytedeco.javacpp.presets.javacpp.*;
+import static org.bytedeco.openblas.global.openblas_nolapack.*;
+import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
+import org.bytedeco.pytorch.*;
+import static org.bytedeco.pytorch.global.torch.*;
+
+import static org.bytedeco.pytorch.global.gloo.*;
+
+
+// The context represents a set of pairs that belong to the same
+// group. It is roughly equivalent to the top level context class
+// with the exception that it captures transport specifics.
+//
+// While implementing the recv-from-any functionality we realized we
+// needed some transport-specific state shared between all pairs in a
+// group, to arbitrate between multiple pairs attempting to send to
+// the same buffer.
+//
+@Name("gloo::transport::Context") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.gloo.class)
+public class TransportContext extends Pointer {
+    static { Loader.load(); }
+    /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */
+    public TransportContext(Pointer p) { super(p); }
+
+
+  @MemberGetter public native int rank();
+  @MemberGetter public native int size();
+
+  public native @UniquePtr Pair getPair(int rank);
+
+  public native @UniquePtr Pair createPair(int rank);
+
+  public native void createAndConnectAllPairs(@ByRef IStore store);
+
+  // Creates unbound buffer to be used with the ranks in this context.
+  // It is not bound to a specific rank, but still bound to this
+  // context. This is needed to support recv-from-any semantics, where
+  // the context is used as shared arbiter between pairs that are
+  // ready to send and buffers that are ready to receive.
+  public native @UniquePtr UnboundBuffer createUnboundBuffer(
+        Pointer ptr,
+        @Cast("size_t") long size);
+
+  public native void setTimeout(@ByVal Milliseconds timeout);
+
+  public native @ByVal Milliseconds getTimeout();
+}
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/gloo/UnboundBuffer.java b/pytorch/src/gen/java/org/bytedeco/pytorch/gloo/UnboundBuffer.java
new file mode 100644
index 00000000000..6365af0c146
--- /dev/null
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/gloo/UnboundBuffer.java
@@ -0,0 +1,130 @@
+// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE
+
+package org.bytedeco.pytorch.gloo;
+
+import java.nio.*;
+import org.bytedeco.javacpp.*;
+import org.bytedeco.javacpp.annotation.*;
+
+import static org.bytedeco.javacpp.presets.javacpp.*;
+import static org.bytedeco.openblas.global.openblas_nolapack.*;
+import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
+import org.bytedeco.pytorch.*;
+import static org.bytedeco.pytorch.global.torch.*;
+
+import static org.bytedeco.pytorch.global.gloo.*;
+
+
+// The unbound buffer class represents a chunk of memory.
+// It can be used as a source for send operations, a destination for
+// receive operations, or both. There should be only a single pending
+// operation against an unbound buffer at any given time, or the
+// resulting behavior is undefined.
+//
+// It is called unbound to contrast with the bound buffers that have
+// been available since the inception of Gloo. It is unbound in that
+// it is not tied to a particular pair.
+//
+@Namespace("gloo::transport") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.gloo.class)
+public class UnboundBuffer extends Pointer {
+    static { Loader.load(); }
+    /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */
+    public UnboundBuffer(Pointer p) { super(p); }
+
+
+  @MemberGetter public native Pointer ptr();
+  @MemberGetter public native @Cast("const size_t") long size();
+
+  // If specified, the source of this recv is stored in the rank pointer.
+  // Returns true if it completed, false if it was aborted.
+  public native @Cast("bool") boolean waitRecv(IntPointer rank, @ByVal Milliseconds timeout);
+  public native @Cast("bool") boolean waitRecv(IntBuffer rank, @ByVal Milliseconds timeout);
+  public native @Cast("bool") boolean waitRecv(int[] rank, @ByVal Milliseconds timeout);
+
+  // If specified, the destination of this send is stored in the rank pointer.
+  // Returns true if it completed, false if it was aborted.
+  public native @Cast("bool") boolean waitSend(IntPointer rank, @ByVal Milliseconds timeout);
+  public native @Cast("bool") boolean waitSend(IntBuffer rank, @ByVal Milliseconds timeout);
+  public native @Cast("bool") boolean waitSend(int[] rank, @ByVal Milliseconds timeout);
+
+  // Aborts a pending waitRecv call.
+  public native void abortWaitRecv();
+
+  // Aborts a pending waitSend call.
+  public native void abortWaitSend();
+
+  // Default overload.
+  public native @Cast("bool") boolean waitRecv();
+
+  // Default overload.
+  public native @Cast("bool") boolean waitSend();
+
+  // Rank overload.
+  public native @Cast("bool") boolean waitRecv(IntPointer rank);
+  public native @Cast("bool") boolean waitRecv(IntBuffer rank);
+  public native @Cast("bool") boolean waitRecv(int[] rank);
+
+  // Rank overload.
+  public native @Cast("bool") boolean waitSend(IntPointer rank);
+  public native @Cast("bool") boolean waitSend(IntBuffer rank);
+  public native @Cast("bool") boolean waitSend(int[] rank);
+
+  // Timeout overload.
+  public native @Cast("bool") boolean waitRecv(@ByVal Milliseconds timeout);
+
+  // Timeout overload.
+  public native @Cast("bool") boolean waitSend(@ByVal Milliseconds timeout);
+
+  // Deadline overload.
+
+  // Deadline overload.
+
+  // If the byte count argument is not specified, the number of bytes
+  // defaults to the number of bytes remaining in the buffer relative
+  // to the offset.
+
+  public native void send(
+        int dstRank,
+        @Cast("uint64_t") long slot,
+        @Cast("size_t") long offset/*=0*/,
+        @Cast("size_t") long nbytes/*=gloo::transport::UnboundBuffer::kUnspecifiedByteCount*/);
+  public native void send(
+        int dstRank,
+        @Cast("uint64_t") long slot);
+
+  public native void recv(
+        int srcRank,
+        @Cast("uint64_t") long slot,
+        @Cast("size_t") long offset/*=0*/,
+        @Cast("size_t") long nbytes/*=gloo::transport::UnboundBuffer::kUnspecifiedByteCount*/);
+  public native void recv(
+        int srcRank,
+        @Cast("uint64_t") long slot);
+
+  public native void recv(
+        @StdVector IntPointer srcRanks,
+        @Cast("uint64_t") long slot,
+        @Cast("size_t") long offset/*=0*/,
+        @Cast("size_t") long nbytes/*=gloo::transport::UnboundBuffer::kUnspecifiedByteCount*/);
+  public native void recv(
+        @StdVector IntPointer srcRanks,
+        @Cast("uint64_t") long slot);
+  public native void recv(
+        @StdVector IntBuffer srcRanks,
+        @Cast("uint64_t") long slot,
+        @Cast("size_t") long offset/*=0*/,
+        @Cast("size_t") long nbytes/*=gloo::transport::UnboundBuffer::kUnspecifiedByteCount*/);
+  public native void recv(
+        @StdVector IntBuffer srcRanks,
+        @Cast("uint64_t") long slot);
+  public native void recv(
+        @StdVector int[] srcRanks,
+        @Cast("uint64_t") long slot,
+        @Cast("size_t") long offset/*=0*/,
+        @Cast("size_t") long nbytes/*=gloo::transport::UnboundBuffer::kUnspecifiedByteCount*/);
+  public native void recv(
+        @StdVector int[] srcRanks,
+        @Cast("uint64_t") long slot);
+}
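
A hedged sketch of the recv-from-any flow described above, combining TransportContext.createUnboundBuffer with the rank-reporting waitRecv overload; constructing and connecting the context is transport-specific and omitted.

import org.bytedeco.javacpp.FloatPointer;
import org.bytedeco.javacpp.IntPointer;
import org.bytedeco.pytorch.gloo.TransportContext;
import org.bytedeco.pytorch.gloo.UnboundBuffer;

public class RecvFromAnySketch {
    static void recvFromAny(TransportContext ctx, long slot) {
        FloatPointer data = new FloatPointer(256);
        UnboundBuffer buf = ctx.createUnboundBuffer(data, 256 * Float.BYTES);

        // Post a receive that any of ranks 0..2 may satisfy.
        buf.recv(new IntPointer(0, 1, 2), slot);

        IntPointer srcRank = new IntPointer(1);   // one-element buffer, filled with the sender's rank
        if (buf.waitRecv(srcRank)) {
            System.out.println("received from rank " + srcRank.get(0));
        }
    }
}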
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/gloo/float16.java b/pytorch/src/gen/java/org/bytedeco/pytorch/gloo/float16.java
new file mode 100644
index 00000000000..b0aa631d3fe
--- /dev/null
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/gloo/float16.java
@@ -0,0 +1,66 @@
+// Targeted by JavaCPP version 1.5.11-SNAPSHOT: DO NOT EDIT THIS FILE
+
+package org.bytedeco.pytorch.gloo;
+
+import java.nio.*;
+import org.bytedeco.javacpp.*;
+import org.bytedeco.javacpp.annotation.*;
+
+import static org.bytedeco.javacpp.presets.javacpp.*;
+import static org.bytedeco.openblas.global.openblas_nolapack.*;
+import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
+import org.bytedeco.pytorch.*;
+import static org.bytedeco.pytorch.global.torch.*;
+
+import static org.bytedeco.pytorch.global.gloo.*;
+
+
+@Namespace("gloo") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.gloo.class)
+public class float16 extends Pointer {
+    static { Loader.load(); }
+    /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */
+    public float16(Pointer p) { super(p); }
+
+  public native @Cast("uint16_t") short x(); public native float16 x(short setter);
+
+  public float16() { super((Pointer)null); allocate(); }
+  private native void allocate();
+
+  public float16(@Const @ByRef float16 arg0) { super((Pointer)null); allocate(arg0); }
+  private native void allocate(@Const @ByRef float16 arg0);
+
+  public float16(int val) { super((Pointer)null); allocate(val); }
+  private native void allocate(int val);
+
+  public float16(@Cast("unsigned long") long val) { super((Pointer)null); allocate(val); }
+  private native void allocate(@Cast("unsigned long") long val);
+
+  public float16(double val) { super((Pointer)null); allocate(val); }
+  private native void allocate(double val);
+
+  public native @ByRef @Name("operator =") float16 put(int rhs);
+
+  public native @ByRef @Name("operator =") float16 put(@Const @ByRef float16 rhs);
+
+  public native @Cast("bool") @Name("operator ==") boolean equals(@Const @ByRef float16 rhs);
+
+  public native @Cast("bool") @Name("operator !=") boolean notEquals(@Const @ByRef float16 rhs);
+
+  public native @Cast("bool") @Name("operator ==") boolean equals(int rhs);
+
+  public native @Cast("bool") @Name("operator ==") boolean equals(@Cast("const unsigned long") long rhs);
+
+  public native @Cast("bool") @Name("operator ==") boolean equals(double rhs);
+// #ifdef __CUDA_ARCH__
+// #endif // __CUDA_ARCH
+
+  public native @ByRef @Name("operator +=") float16 addPut(@Const @ByRef float16 rhs);
+
+  public native @ByRef @Name("operator -=") float16 subtractPut(@Const @ByRef float16 rhs);
+
+  public native @ByRef @Name("operator *=") float16 multiplyPut(@Const @ByRef float16 rhs);
+
+  public native @ByRef @Name("operator /=") float16 dividePut(@Const @ByRef float16 rhs);
+}
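
Finally, a tiny sketch of the float16 value wrapper mapped above, using only constructors and operators shown in this diff.

import org.bytedeco.pytorch.gloo.float16;

public class Float16Sketch {
    public static void main(String[] args) {
        float16 a = new float16(1.5);   // construct from double
        float16 b = new float16(2);     // construct from int
        a.addPut(b);                    // a += b
        System.out.println("raw bits: " + a.x() + ", a == 3.5: " + a.equals(3.5));
    }
}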
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/graph_node_list.java b/pytorch/src/gen/java/org/bytedeco/pytorch/graph_node_list.java
index 2b1a571e3aa..cf962b471c6 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/graph_node_list.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/graph_node_list.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/graph_node_list_iterator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/graph_node_list_iterator.java
index 2431db48807..7cc2ad90401 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/graph_node_list_iterator.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/graph_node_list_iterator.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kArea.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kArea.java
index 6bfc0a2a849..dbf84a62a0d 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kArea.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kArea.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kBatchMean.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kBatchMean.java
index 67225dd389e..271990706b8 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kBatchMean.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kBatchMean.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kBicubic.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kBicubic.java
index eca2f43d968..818686291f4 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kBicubic.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kBicubic.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kBilinear.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kBilinear.java
index 5e3b7c91a77..e3fdf3183a9 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kBilinear.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kBilinear.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kBorder.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kBorder.java
index 87773c1d919..0cb481a3614 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kBorder.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kBorder.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kCircular.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kCircular.java
index f0d077a959a..e392b5d1754 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kCircular.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kCircular.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kConstant.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kConstant.java
index 91eb3597bab..7533e57232a 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kConstant.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kConstant.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kConv1D.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kConv1D.java
index 268d847555a..a01438d1538 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kConv1D.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kConv1D.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kConv2D.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kConv2D.java
index 77c7837a3fe..12b64451003 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kConv2D.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kConv2D.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kConv3D.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kConv3D.java
index 96c9ee7c656..ec628207179 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kConv3D.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kConv3D.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kConvTranspose1D.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kConvTranspose1D.java
index 2de70e78eaa..c2f0175f430 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kConvTranspose1D.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kConvTranspose1D.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kConvTranspose2D.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kConvTranspose2D.java
index 766614b1980..04d073690ed 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kConvTranspose2D.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kConvTranspose2D.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kConvTranspose3D.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kConvTranspose3D.java
index d2668a7332b..a4816dbc80a 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kConvTranspose3D.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kConvTranspose3D.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kFanIn.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kFanIn.java
index 6f11f1020c0..c8be965dd06 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kFanIn.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kFanIn.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kFanOut.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kFanOut.java
index c8eed7e614a..c1a3d1e4554 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kFanOut.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kFanOut.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kGELU.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kGELU.java
index 592d634c9c6..b0591c7ba4d 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kGELU.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kGELU.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kGRU.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kGRU.java
index 136a4fd9acb..c57d03a75a1 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kGRU.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kGRU.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kLSTM.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kLSTM.java
index 1f33ef09ee3..13510aca8c8 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kLSTM.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kLSTM.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kLeakyReLU.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kLeakyReLU.java
index 0eb881b01a7..2f05fd224ef 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kLeakyReLU.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kLeakyReLU.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kLinear.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kLinear.java
index 3556d45734d..d24692dbdc8 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kLinear.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kLinear.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kMax.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kMax.java
index 0d0d7b310cd..0e6cf53c07e 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kMax.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kMax.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kMean.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kMean.java
index 2ee10f4d68d..73f26e04b85 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kMean.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kMean.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kMish.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kMish.java
index 60c3f22018b..b2a4ff8a5e8 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kMish.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kMish.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kNearest.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kNearest.java
index 573736b4309..cd5c42d4095 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kNearest.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kNearest.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kNearestExact.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kNearestExact.java
index cfe4671eaa9..e71d5751dd3 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kNearestExact.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kNearestExact.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kNone.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kNone.java
index cb51fb7e974..a76b0a15322 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kNone.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kNone.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kRNN_RELU.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kRNN_RELU.java
index dd86b8b7fc2..b6e0a52f0db 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kRNN_RELU.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kRNN_RELU.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kRNN_TANH.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kRNN_TANH.java
index fbb6d2c3923..0cf02afc532 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kRNN_TANH.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kRNN_TANH.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kReLU.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kReLU.java
index ce5741e47c9..614e35f36ef 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kReLU.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kReLU.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kReflect.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kReflect.java
index 9cad7f5a19c..5b7b80017b3 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kReflect.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kReflect.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kReflection.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kReflection.java
index 5c788f8ddf9..3ed0fbf7b3f 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kReflection.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kReflection.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kReplicate.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kReplicate.java
index efaffc9e3c2..4d48ead30e5 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kReplicate.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kReplicate.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kSame.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kSame.java
index 188534ff2e6..0bf4bf62419 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kSame.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kSame.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kSiLU.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kSiLU.java
index 8ca84a9e5a9..a0c30719145 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kSiLU.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kSiLU.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kSigmoid.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kSigmoid.java
index 0689e00947c..07562cb180c 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kSigmoid.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kSigmoid.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kSum.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kSum.java
index 3f98d19409e..b488f3ae4f1 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kSum.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kSum.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kTanh.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kTanh.java
index 89d9fce5bef..e8e2ac268a1 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kTanh.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kTanh.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kTrilinear.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kTrilinear.java
index 697baf3540c..8a6e94ea6bf 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kTrilinear.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kTrilinear.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kValid.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kValid.java
index 84e49c697bd..4106c99ae29 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kValid.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kValid.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/kZeros.java b/pytorch/src/gen/java/org/bytedeco/pytorch/kZeros.java
index 2a8bf2c2959..6939303b7fc 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/kZeros.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/kZeros.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/module_iterator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/module_iterator.java
index 6ff234c2286..01cce1c627b 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/module_iterator.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/module_iterator.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
  // namespace detail
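
Every regenerated class in this changeset picks up the same two-line import change seen above: the `org.bytedeco.pytorch.functions.*` import disappears (those hand-written callback classes now live directly in `org.bytedeco.pytorch`, see the renames later in this diff), and imports for JavaCPP's `chrono` module are added, presumably because the newly enabled distributed (c10d) bindings deal in `std::chrono` durations such as timeouts. A hypothetical sketch of what the new imports provide follows; the `Milliseconds` class name and its tick-count constructor are assumptions, not confirmed by this diff:

```java
import org.bytedeco.javacpp.chrono.Milliseconds;

public class ChronoSketch {
    public static void main(String[] args) {
        // Assumed mapping of std::chrono::milliseconds: construct a 30 s timeout value
        // of the kind the distributed bindings are expected to accept.
        Milliseconds timeout = new Milliseconds(30_000);
        System.out.println(timeout);
    }
}
```
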
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/module_list.java b/pytorch/src/gen/java/org/bytedeco/pytorch/module_list.java
index f3e02f7ac48..c505298e526 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/module_list.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/module_list.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/mt19937_data_pod.java b/pytorch/src/gen/java/org/bytedeco/pytorch/mt19937_data_pod.java
index 253ad1624a7..e709a868b58 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/mt19937_data_pod.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/mt19937_data_pod.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/mt19937_engine.java b/pytorch/src/gen/java/org/bytedeco/pytorch/mt19937_engine.java
index fe74487df95..8ef9b902805 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/mt19937_engine.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/mt19937_engine.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/named_attribute_iterator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/named_attribute_iterator.java
index 2d32c6b21fd..bd5786e9c40 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/named_attribute_iterator.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/named_attribute_iterator.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/named_attribute_list.java b/pytorch/src/gen/java/org/bytedeco/pytorch/named_attribute_list.java
index 37aaaf3b6ad..af42c64d5e0 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/named_attribute_list.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/named_attribute_list.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/named_buffer_iterator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/named_buffer_iterator.java
index 2f438ae0394..235fffab000 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/named_buffer_iterator.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/named_buffer_iterator.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/named_buffer_list.java b/pytorch/src/gen/java/org/bytedeco/pytorch/named_buffer_list.java
index f16a884a90e..a4e267198e6 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/named_buffer_list.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/named_buffer_list.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/named_module_iterator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/named_module_iterator.java
index 7ab6a91fa3e..6ca6829b50d 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/named_module_iterator.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/named_module_iterator.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/named_module_list.java b/pytorch/src/gen/java/org/bytedeco/pytorch/named_module_list.java
index 71d8fc5f54e..74db4c4c287 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/named_module_list.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/named_module_list.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/named_parameter_iterator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/named_parameter_iterator.java
index e5b5ece1427..b50565e8e7d 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/named_parameter_iterator.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/named_parameter_iterator.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/named_parameter_list.java b/pytorch/src/gen/java/org/bytedeco/pytorch/named_parameter_list.java
index 90a3749f13b..a8c2875924d 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/named_parameter_list.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/named_parameter_list.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/parameter_iterator.java b/pytorch/src/gen/java/org/bytedeco/pytorch/parameter_iterator.java
index 2f185a80acd..e3fd37d7988 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/parameter_iterator.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/parameter_iterator.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/parameter_list.java b/pytorch/src/gen/java/org/bytedeco/pytorch/parameter_list.java
index dbcf0933d30..adc34a79be9 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/parameter_list.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/parameter_list.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/pretty_tree.java b/pytorch/src/gen/java/org/bytedeco/pytorch/pretty_tree.java
index f727813364e..f6f17761125 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/pretty_tree.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/pretty_tree.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
@@ -25,13 +26,13 @@ public class pretty_tree extends Pointer {
     /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */
     public pretty_tree(Pointer p) { super(p); }
 
-  public pretty_tree(@Const @ByRef TreeRef tree, @Cast("size_t") long col/*=40*/) { super((Pointer)null); allocate(tree, col); }
-  private native void allocate(@Const @ByRef TreeRef tree, @Cast("size_t") long col/*=40*/);
-  public pretty_tree(@Const @ByRef TreeRef tree) { super((Pointer)null); allocate(tree); }
-  private native void allocate(@Const @ByRef TreeRef tree);
-  @MemberGetter public native @Const @ByRef TreeRef tree();
+  public pretty_tree(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree, @Cast("size_t") long col/*=40*/) { super((Pointer)null); allocate(tree, col); }
+  private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree, @Cast("size_t") long col/*=40*/);
+  public pretty_tree(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree) { super((Pointer)null); allocate(tree); }
+  private native void allocate(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree);
+  @MemberGetter public native @IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree tree();
   public native @Cast("size_t") long col(); public native pretty_tree col(long setter);
-  public native @ByRef TreeRefStringMap flat_strings(); public native pretty_tree flat_strings(TreeRefStringMap setter);
-  public native @StdString BytePointer get_flat(@Const @ByRef TreeRef t);
-  public native void print(@Cast("std::ostream*") @ByRef Pointer out, @Const @ByRef TreeRef t, int indent);
+  public native @ByRef TreeStringMap flat_strings(); public native pretty_tree flat_strings(TreeStringMap setter);
+  public native @StdString BytePointer get_flat(@IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree t);
+  public native void print(@Cast("std::ostream*") @ByRef Pointer out, @IntrusivePtr("torch::jit::Tree") @Cast({"", "c10::intrusive_ptr&"}) Tree t, int indent);
 }
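
The hunk above tracks the upstream move away from the `TreeRef` alias: `torch::jit::Tree` is now passed around through `c10::intrusive_ptr`, so the Java signatures take and return `Tree` directly and `TreeRefStringMap` becomes `TreeStringMap`. A minimal sketch of calling the updated API; how the `Tree` instance itself is obtained (e.g. from the TorchScript lexer/parser bindings) is assumed and not shown:

```java
import org.bytedeco.pytorch.Tree;
import org.bytedeco.pytorch.pretty_tree;

public class PrettyTreeSketch {
    // Renders a Tree using the new intrusive_ptr-based signatures shown above.
    static String render(Tree tree) {
        pretty_tree pt = new pretty_tree(tree, 40); // col defaults to 40 in the C++ constructor
        return pt.get_flat(tree).getString();       // get_flat returns a @StdString BytePointer
    }
}
```
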
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/qint32.java b/pytorch/src/gen/java/org/bytedeco/pytorch/qint32.java
index bdcc5e401e8..8fa3bb8d10e 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/qint32.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/qint32.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/qint8.java b/pytorch/src/gen/java/org/bytedeco/pytorch/qint8.java
index 937a3315412..0e75a853d4b 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/qint8.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/qint8.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/quint2x4.java b/pytorch/src/gen/java/org/bytedeco/pytorch/quint2x4.java
index 3e5e27db6f0..14459cd8940 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/quint2x4.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/quint2x4.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/quint4x2.java b/pytorch/src/gen/java/org/bytedeco/pytorch/quint4x2.java
index 4913083afba..8d9017636d7 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/quint4x2.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/quint4x2.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/quint8.java b/pytorch/src/gen/java/org/bytedeco/pytorch/quint8.java
index 24540eb2098..c9e6d5d8bb9 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/quint8.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/quint8.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/gen/java/org/bytedeco/pytorch/type_index.java b/pytorch/src/gen/java/org/bytedeco/pytorch/type_index.java
index 091ea09b4ed..cdc5039bac1 100644
--- a/pytorch/src/gen/java/org/bytedeco/pytorch/type_index.java
+++ b/pytorch/src/gen/java/org/bytedeco/pytorch/type_index.java
@@ -4,7 +4,6 @@
 
 import org.bytedeco.pytorch.Allocator;
 import org.bytedeco.pytorch.Function;
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.pytorch.Module;
 import org.bytedeco.javacpp.annotation.Cast;
 import java.nio.*;
@@ -14,6 +13,8 @@
 import static org.bytedeco.javacpp.presets.javacpp.*;
 import static org.bytedeco.openblas.global.openblas_nolapack.*;
 import static org.bytedeco.openblas.global.openblas.*;
+import org.bytedeco.javacpp.chrono.*;
+import static org.bytedeco.javacpp.global.chrono.*;
 
 import static org.bytedeco.pytorch.global.torch.*;
 
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/ArchiveWriter.java b/pytorch/src/main/java/org/bytedeco/pytorch/ArchiveWriter.java
similarity index 95%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/ArchiveWriter.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/ArchiveWriter.java
index e2e7e1f2d06..b183208852b 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/ArchiveWriter.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/ArchiveWriter.java
@@ -1,4 +1,4 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/BackendMetaPtr.java b/pytorch/src/main/java/org/bytedeco/pytorch/BackendMetaPtr.java
similarity index 86%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/BackendMetaPtr.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/BackendMetaPtr.java
index cb56ce261d5..7f3f4991444 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/BackendMetaPtr.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/BackendMetaPtr.java
@@ -1,11 +1,9 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
 import org.bytedeco.javacpp.Pointer;
 import org.bytedeco.javacpp.annotation.*;
-import org.bytedeco.pytorch.StringBoolMap;
-import org.bytedeco.pytorch.Tensor;
 
 @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
 public class BackendMetaPtr extends FunctionPointer {
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/DDPLogger.java b/pytorch/src/main/java/org/bytedeco/pytorch/DDPLogger.java
similarity index 89%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/DDPLogger.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/DDPLogger.java
index d48bbaad51e..147f253ac98 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/DDPLogger.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/DDPLogger.java
@@ -1,4 +1,4 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
@@ -6,7 +6,6 @@
 import org.bytedeco.javacpp.annotation.ByRef;
 import org.bytedeco.javacpp.annotation.Const;
 import org.bytedeco.javacpp.annotation.Properties;
-import org.bytedeco.pytorch.DDPLoggingData;
 
 @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
 public class DDPLogger extends FunctionPointer {
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/DistanceFunction.java b/pytorch/src/main/java/org/bytedeco/pytorch/DistanceFunction.java
similarity index 90%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/DistanceFunction.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/DistanceFunction.java
index 8e914a19e6e..f75967aa84e 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/DistanceFunction.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/DistanceFunction.java
@@ -1,4 +1,4 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
@@ -6,7 +6,6 @@
 import org.bytedeco.javacpp.annotation.ByRef;
 import org.bytedeco.javacpp.annotation.ByVal;
 import org.bytedeco.javacpp.annotation.Properties;
-import org.bytedeco.pytorch.Tensor;
 
 @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
 public class DistanceFunction extends FunctionPointer {
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/ExampleStack.java b/pytorch/src/main/java/org/bytedeco/pytorch/ExampleStack.java
index be2c90738c6..6c22d708968 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/ExampleStack.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/ExampleStack.java
@@ -14,7 +14,7 @@
 @Name("torch::data::transforms::Stack >")  @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
 public class ExampleStack extends ExampleCollation {
     /** Empty constructor. Calls {@code super((Pointer)null)}. */
-    public ExampleStack() { super((Pointer)null); allocate(); }
+    public ExampleStack() { super(null); allocate(); }
     private native void allocate();
     /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */
     public ExampleStack(Pointer p) { super(p); }
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/Func.java b/pytorch/src/main/java/org/bytedeco/pytorch/Func.java
similarity index 93%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/Func.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/Func.java
index 68783f47625..bdeef6b1c52 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/Func.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/Func.java
@@ -1,4 +1,4 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/GradCallback.java b/pytorch/src/main/java/org/bytedeco/pytorch/GradCallback.java
new file mode 100644
index 00000000000..d39bb147d7b
--- /dev/null
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/GradCallback.java
@@ -0,0 +1,31 @@
+package org.bytedeco.pytorch;
+
+import org.bytedeco.javacpp.FunctionPointer;
+import org.bytedeco.javacpp.Loader;
+import org.bytedeco.javacpp.Pointer;
+import org.bytedeco.javacpp.annotation.ByRef;
+import org.bytedeco.javacpp.annotation.ByVal;
+import org.bytedeco.javacpp.annotation.Properties;
+
+@Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
+public class GradCallback extends FunctionPointer {
+    static {
+        Loader.load();
+    }
+
+    /**
+     * Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}.
+     */
+    public GradCallback(Pointer p) {
+        super(p);
+    }
+
+    protected GradCallback() {
+        allocate();
+    }
+
+    private native void allocate();
+
+    //  std::function<bool(at::Tensor&)>
+    public native @ByVal boolean call(@ByRef Tensor a);
+}
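For readers unfamiliar with JavaCPP function pointers, a minimal usage sketch (not part of this patch): the standard pattern is to subclass the FunctionPointer and override call(), which allocates a native function that libtorch can invoke. The GradCallbackSketch class name and the printed field are illustrative assumptions only; the way the callback is consumed depends on the native API it is passed to.

    import org.bytedeco.pytorch.GradCallback;
    import org.bytedeco.pytorch.Tensor;

    public class GradCallbackSketch {
        public static void main(String[] args) {
            // Subclassing allocates a native function pointer; call() runs back in Java
            // whenever native code invokes the wrapped std::function.
            GradCallback cb = new GradCallback() {
                @Override public boolean call(Tensor grad) {
                    System.out.println("grad defined: " + grad.defined());
                    return true; // the return value's meaning depends on the API receiving the callback
                }
            };
            // cb can now be handed to any binding that accepts a GradCallback parameter.
        }
    }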
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/GraphFunctionCreator.java b/pytorch/src/main/java/org/bytedeco/pytorch/GraphFunctionCreator.java
similarity index 89%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/GraphFunctionCreator.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/GraphFunctionCreator.java
index 093c5e42825..3d22f4fd1c5 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/GraphFunctionCreator.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/GraphFunctionCreator.java
@@ -1,11 +1,10 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
 import org.bytedeco.javacpp.Pointer;
 import org.bytedeco.javacpp.annotation.ByRef;
 import org.bytedeco.javacpp.annotation.Properties;
-import org.bytedeco.pytorch.GraphFunction;
 
 @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
 public class GraphFunctionCreator extends FunctionPointer {
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/IValueSupplier.java b/pytorch/src/main/java/org/bytedeco/pytorch/IValueSupplier.java
similarity index 89%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/IValueSupplier.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/IValueSupplier.java
index 4e41a712d10..4029be2c588 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/IValueSupplier.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/IValueSupplier.java
@@ -1,11 +1,10 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
 import org.bytedeco.javacpp.Pointer;
 import org.bytedeco.javacpp.annotation.ByPtr;
 import org.bytedeco.javacpp.annotation.Properties;
-import org.bytedeco.pytorch.IValue;
 
 @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
 public class IValueSupplier extends FunctionPointer {
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/IValueVectorConsumer.java b/pytorch/src/main/java/org/bytedeco/pytorch/IValueVectorConsumer.java
similarity index 89%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/IValueVectorConsumer.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/IValueVectorConsumer.java
index 05855c1de81..10e5169540f 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/IValueVectorConsumer.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/IValueVectorConsumer.java
@@ -1,11 +1,10 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
 import org.bytedeco.javacpp.Pointer;
 import org.bytedeco.javacpp.annotation.ByRef;
 import org.bytedeco.javacpp.annotation.Properties;
-import org.bytedeco.pytorch.IValueVector;
 
 @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
 public class IValueVectorConsumer extends FunctionPointer {
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/IntrusivePtr.java b/pytorch/src/main/java/org/bytedeco/pytorch/IntrusivePtr.java
new file mode 100644
index 00000000000..92d19f45c0b
--- /dev/null
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/IntrusivePtr.java
@@ -0,0 +1,12 @@
+package org.bytedeco.pytorch;
+
+import org.bytedeco.javacpp.annotation.Adapter;
+
+import java.lang.annotation.*;
+
+@Documented @Retention(RetentionPolicy.RUNTIME)
+@Target({ElementType.METHOD, ElementType.PARAMETER})
+@Adapter("IntrusivePtrAdapter")
+public @interface IntrusivePtr {
+    String value() default "";
+}
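As with the other @Adapter-based annotations in these presets, @IntrusivePtr marks binding methods whose native counterpart returns or accepts a c10::intrusive_ptr, so that IntrusivePtrAdapter tracks ownership across the JNI boundary; the ObjLoader change further below shows a real occurrence combined with @Cast. A hypothetical declaration (Foo stands in for any intrusive-ptr-managed mapped class) would look like:

    // Hypothetical sketch, not taken from the presets: the adapter wraps the returned
    // pointer in a c10::intrusive_ptr so ownership is handled on the native side.
    public native @IntrusivePtr Foo makeFoo();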
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/JitModuleApplyFunction.java b/pytorch/src/main/java/org/bytedeco/pytorch/JitModuleApplyFunction.java
similarity index 89%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/JitModuleApplyFunction.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/JitModuleApplyFunction.java
index c7ed499ea45..2ccd9f2d616 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/JitModuleApplyFunction.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/JitModuleApplyFunction.java
@@ -1,11 +1,10 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
 import org.bytedeco.javacpp.Pointer;
 import org.bytedeco.javacpp.annotation.ByRef;
 import org.bytedeco.javacpp.annotation.Properties;
-import org.bytedeco.pytorch.JitModule;
 
 @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
 public class JitModuleApplyFunction extends FunctionPointer {
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/LossClosure.java b/pytorch/src/main/java/org/bytedeco/pytorch/LossClosure.java
similarity index 88%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/LossClosure.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/LossClosure.java
index 32877c79326..3dc3d86e724 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/LossClosure.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/LossClosure.java
@@ -1,10 +1,9 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
 import org.bytedeco.javacpp.Pointer;
 import org.bytedeco.javacpp.annotation.*;
-import org.bytedeco.pytorch.Tensor;
 
 @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
 public class LossClosure extends FunctionPointer {
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/MemCopyFunction.java b/pytorch/src/main/java/org/bytedeco/pytorch/MemCopyFunction.java
similarity index 95%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/MemCopyFunction.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/MemCopyFunction.java
index ea5e809b236..f355e40b3d1 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/MemCopyFunction.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/MemCopyFunction.java
@@ -1,4 +1,4 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/MetadataLogger.java b/pytorch/src/main/java/org/bytedeco/pytorch/MetadataLogger.java
similarity index 88%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/MetadataLogger.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/MetadataLogger.java
index 36a5026481b..61580572908 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/MetadataLogger.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/MetadataLogger.java
@@ -1,4 +1,4 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.BytePointer;
 import org.bytedeco.javacpp.FunctionPointer;
@@ -8,8 +8,6 @@
 import org.bytedeco.javacpp.annotation.Const;
 import org.bytedeco.javacpp.annotation.Properties;
 import org.bytedeco.javacpp.annotation.StdString;
-import org.bytedeco.pytorch.DDPLoggingData;
-import org.bytedeco.pytorch.StringStringMap;
 
 @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
 public class MetadataLogger extends FunctionPointer {
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/ModuleApplyFunction.java b/pytorch/src/main/java/org/bytedeco/pytorch/ModuleApplyFunction.java
similarity index 90%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/ModuleApplyFunction.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/ModuleApplyFunction.java
index c6dca07eb3c..bf8ff29687f 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/ModuleApplyFunction.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/ModuleApplyFunction.java
@@ -1,11 +1,10 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
 import org.bytedeco.javacpp.Pointer;
 import org.bytedeco.javacpp.annotation.ByRef;
 import org.bytedeco.javacpp.annotation.Properties;
-import org.bytedeco.pytorch.Module;
 
 @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
 public class ModuleApplyFunction extends FunctionPointer {
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/NamedModuleApplyFunction.java b/pytorch/src/main/java/org/bytedeco/pytorch/NamedModuleApplyFunction.java
similarity index 90%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/NamedModuleApplyFunction.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/NamedModuleApplyFunction.java
index 63558e95a89..4e88bd90e37 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/NamedModuleApplyFunction.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/NamedModuleApplyFunction.java
@@ -1,11 +1,10 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.BytePointer;
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
 import org.bytedeco.javacpp.Pointer;
 import org.bytedeco.javacpp.annotation.*;
-import org.bytedeco.pytorch.Module;
 
 @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
 public class NamedModuleApplyFunction extends FunctionPointer {
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/NamedSharedModuleApplyFunction.java b/pytorch/src/main/java/org/bytedeco/pytorch/NamedSharedModuleApplyFunction.java
similarity index 91%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/NamedSharedModuleApplyFunction.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/NamedSharedModuleApplyFunction.java
index 60d59d88319..f7ca355d7ab 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/NamedSharedModuleApplyFunction.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/NamedSharedModuleApplyFunction.java
@@ -1,11 +1,10 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.BytePointer;
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
 import org.bytedeco.javacpp.Pointer;
 import org.bytedeco.javacpp.annotation.*;
-import org.bytedeco.pytorch.Module;
 
 @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
 public class NamedSharedModuleApplyFunction extends FunctionPointer {
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/ObjLoader.java b/pytorch/src/main/java/org/bytedeco/pytorch/ObjLoader.java
similarity index 68%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/ObjLoader.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/ObjLoader.java
index d6b669218ce..b719ac22d12 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/ObjLoader.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/ObjLoader.java
@@ -1,14 +1,10 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
 import org.bytedeco.javacpp.Pointer;
 import org.bytedeco.javacpp.annotation.*;
 
-import org.bytedeco.pytorch.ObjPtr;
-import org.bytedeco.pytorch.StrongTypePtr;
-import org.bytedeco.pytorch.IValue;
-
 @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
 public class ObjLoader extends FunctionPointer {
     static {
@@ -29,5 +25,6 @@ protected ObjLoader() {
     private native void allocate();
 
     // std::function<c10::intrusive_ptr<c10::ivalue::Object>(const at::StrongTypePtr&, IValue)>
-    public native @ByVal ObjPtr call(@Const @ByRef StrongTypePtr stp, @ByVal IValue iv);
+    // Without @Cast, the generated JavaCPP_org_bytedeco_pytorch_ObjLoader::ptr would return an ivalue::Object
+    public native @ByVal @Cast({"", "c10::intrusive_ptr<c10::ivalue::Object>"}) @IntrusivePtr Obj call(@Const @ByRef StrongTypePtr stp, @ByVal IValue iv);
 }
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/OperationCreator.java b/pytorch/src/main/java/org/bytedeco/pytorch/OperationCreator.java
similarity index 86%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/OperationCreator.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/OperationCreator.java
index 8e627b27ca6..9bdea096b40 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/OperationCreator.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/OperationCreator.java
@@ -1,4 +1,4 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
@@ -6,8 +6,6 @@
 import org.bytedeco.javacpp.annotation.ByVal;
 import org.bytedeco.javacpp.annotation.Properties;
 import org.bytedeco.javacpp.annotation.Const;
-import org.bytedeco.pytorch.Operation;
-import org.bytedeco.pytorch.JitNode;
 
 
 @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/PickleReader.java b/pytorch/src/main/java/org/bytedeco/pytorch/PickleReader.java
similarity index 95%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/PickleReader.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/PickleReader.java
index 2906b6bc120..6a463178f22 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/PickleReader.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/PickleReader.java
@@ -1,4 +1,4 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.BytePointer;
 import org.bytedeco.javacpp.FunctionPointer;
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/PickleWriter.java b/pytorch/src/main/java/org/bytedeco/pytorch/PickleWriter.java
similarity index 89%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/PickleWriter.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/PickleWriter.java
index 016ef57108b..6a1ab3a4281 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/PickleWriter.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/PickleWriter.java
@@ -1,11 +1,10 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.BytePointer;
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
 import org.bytedeco.javacpp.Pointer;
 import org.bytedeco.javacpp.annotation.Cast;
-import org.bytedeco.javacpp.annotation.Const;
 import org.bytedeco.javacpp.annotation.Properties;
 
 @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/PlacementConsumer.java b/pytorch/src/main/java/org/bytedeco/pytorch/PlacementConsumer.java
similarity index 94%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/PlacementConsumer.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/PlacementConsumer.java
index 769b0d1c618..32ad8a67d5f 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/PlacementConsumer.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/PlacementConsumer.java
@@ -1,4 +1,4 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/PlacementCopier.java b/pytorch/src/main/java/org/bytedeco/pytorch/PlacementCopier.java
similarity index 95%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/PlacementCopier.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/PlacementCopier.java
index f6a2500543c..e323cd2ce8b 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/PlacementCopier.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/PlacementCopier.java
@@ -1,4 +1,4 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/PointerConsumer.java b/pytorch/src/main/java/org/bytedeco/pytorch/PointerConsumer.java
similarity index 94%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/PointerConsumer.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/PointerConsumer.java
index 6dccbb19ad2..41ca3d4830a 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/PointerConsumer.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/PointerConsumer.java
@@ -1,4 +1,4 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/PointerSupplier.java b/pytorch/src/main/java/org/bytedeco/pytorch/PointerSupplier.java
similarity index 93%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/PointerSupplier.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/PointerSupplier.java
index 86b40507c38..b05c245267f 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/PointerSupplier.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/PointerSupplier.java
@@ -1,4 +1,4 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/Reader.java b/pytorch/src/main/java/org/bytedeco/pytorch/Reader.java
similarity index 94%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/Reader.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/Reader.java
index 6017ad60e1e..d6494b6dd1e 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/Reader.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/Reader.java
@@ -1,4 +1,4 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/SharedModuleApplyFunction.java b/pytorch/src/main/java/org/bytedeco/pytorch/SharedModuleApplyFunction.java
similarity index 91%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/SharedModuleApplyFunction.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/SharedModuleApplyFunction.java
index 2408d082311..1665612e081 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/SharedModuleApplyFunction.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/SharedModuleApplyFunction.java
@@ -1,4 +1,4 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
@@ -7,7 +7,6 @@
 import org.bytedeco.javacpp.annotation.Cast;
 import org.bytedeco.javacpp.annotation.Properties;
 import org.bytedeco.javacpp.annotation.SharedPtr;
-import org.bytedeco.pytorch.Module;
 
 @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
 public class SharedModuleApplyFunction extends FunctionPointer {
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/SizeTSupplier.java b/pytorch/src/main/java/org/bytedeco/pytorch/SizeTSupplier.java
similarity index 94%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/SizeTSupplier.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/SizeTSupplier.java
index 7bd857fc056..d4ec25e7b2a 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/SizeTSupplier.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/SizeTSupplier.java
@@ -1,4 +1,4 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/StackTraceFetcher.java b/pytorch/src/main/java/org/bytedeco/pytorch/StackTraceFetcher.java
new file mode 100644
index 00000000000..8c64d804f83
--- /dev/null
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/StackTraceFetcher.java
@@ -0,0 +1,31 @@
+package org.bytedeco.pytorch;
+
+import org.bytedeco.javacpp.FunctionPointer;
+import org.bytedeco.javacpp.Loader;
+import org.bytedeco.javacpp.Pointer;
+import org.bytedeco.javacpp.annotation.Cast;
+import org.bytedeco.javacpp.annotation.Properties;
+import org.bytedeco.javacpp.annotation.SharedPtr;
+
+@Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
+public class StackTraceFetcher extends FunctionPointer {
+    static {
+        Loader.load();
+    }
+
+    /**
+     * Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}.
+     */
+    public StackTraceFetcher(Pointer p) {
+        super(p);
+    }
+
+    protected StackTraceFetcher() {
+        allocate();
+    }
+
+    private native void allocate();
+
+    // std::function<std::shared_ptr<c10::LazyValue<std::string>>()>
+    public native @Cast({"", "std::shared_ptr<c10::LazyValue<std::string>>"}) @SharedPtr Backtrace call();
+}
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/StringConsumer.java b/pytorch/src/main/java/org/bytedeco/pytorch/StringConsumer.java
similarity index 95%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/StringConsumer.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/StringConsumer.java
index 2a04e6cc844..c892a355986 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/StringConsumer.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/StringConsumer.java
@@ -1,4 +1,4 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.BytePointer;
 import org.bytedeco.javacpp.FunctionPointer;
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/StringMapper.java b/pytorch/src/main/java/org/bytedeco/pytorch/StringMapper.java
new file mode 100644
index 00000000000..418b43b29f9
--- /dev/null
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/StringMapper.java
@@ -0,0 +1,33 @@
+package org.bytedeco.pytorch;
+
+import org.bytedeco.javacpp.BytePointer;
+import org.bytedeco.javacpp.FunctionPointer;
+import org.bytedeco.javacpp.Loader;
+import org.bytedeco.javacpp.Pointer;
+import org.bytedeco.javacpp.annotation.Cast;
+import org.bytedeco.javacpp.annotation.Const;
+import org.bytedeco.javacpp.annotation.Properties;
+import org.bytedeco.javacpp.annotation.StdString;
+
+@Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
+public class StringMapper extends FunctionPointer {
+    static {
+        Loader.load();
+    }
+
+    /**
+     * Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}.
+     */
+    public StringMapper(Pointer p) {
+        super(p);
+    }
+
+    protected StringMapper() {
+        allocate();
+    }
+
+    private native void allocate();
+
+    // std::function<std::string(const std::string&)>
+    public native  @StdString @Cast({"", "char*"}) BytePointer call(@Const @StdString BytePointer s);
+}
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/StringSupplier.java b/pytorch/src/main/java/org/bytedeco/pytorch/StringSupplier.java
similarity index 95%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/StringSupplier.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/StringSupplier.java
index b61b6fd61b2..72a285f7e14 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/StringSupplier.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/StringSupplier.java
@@ -1,4 +1,4 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.BytePointer;
 import org.bytedeco.javacpp.FunctionPointer;
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/TensorExampleStack.java b/pytorch/src/main/java/org/bytedeco/pytorch/TensorExampleStack.java
index e3298c1710c..5e4fe13d58d 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/TensorExampleStack.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/TensorExampleStack.java
@@ -10,7 +10,7 @@
 @Name("torch::data::transforms::Stack >")  @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
 public class TensorExampleStack extends TensorExampleCollation {
     /** Empty constructor. Calls {@code super((Pointer)null)}. */
-    public TensorExampleStack() { super((Pointer)null); allocate(); }
+    public TensorExampleStack() { super(null); allocate(); }
     private native void allocate();
     /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */
     public TensorExampleStack(Pointer p) { super(p); }
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/TensorIdGetter.java b/pytorch/src/main/java/org/bytedeco/pytorch/TensorIdGetter.java
similarity index 92%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/TensorIdGetter.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/TensorIdGetter.java
index dd5dc371a3a..5813e490252 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/TensorIdGetter.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/TensorIdGetter.java
@@ -1,11 +1,10 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.BytePointer;
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
 import org.bytedeco.javacpp.Pointer;
 import org.bytedeco.javacpp.annotation.*;
-import org.bytedeco.pytorch.Tensor;
 
 @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
 public class TensorIdGetter extends FunctionPointer {
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/TensorMapper.java b/pytorch/src/main/java/org/bytedeco/pytorch/TensorMapper.java
similarity index 91%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/TensorMapper.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/TensorMapper.java
index 6573817b45a..037146cf1bc 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/TensorMapper.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/TensorMapper.java
@@ -1,4 +1,4 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
@@ -7,7 +7,6 @@
 import org.bytedeco.javacpp.annotation.ByVal;
 import org.bytedeco.javacpp.annotation.Const;
 import org.bytedeco.javacpp.annotation.Properties;
-import org.bytedeco.pytorch.Tensor;
 
 @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
 public class TensorMapper extends FunctionPointer {
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/TensorTensorHook.java b/pytorch/src/main/java/org/bytedeco/pytorch/TensorTensorHook.java
similarity index 90%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/TensorTensorHook.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/TensorTensorHook.java
index da05e5ea79c..d4494b6e2e0 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/TensorTensorHook.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/TensorTensorHook.java
@@ -1,4 +1,4 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
@@ -6,7 +6,6 @@
 import org.bytedeco.javacpp.annotation.ByRef;
 import org.bytedeco.javacpp.annotation.ByVal;
 import org.bytedeco.javacpp.annotation.Properties;
-import org.bytedeco.pytorch.TensorBase;
 
 @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
 public class TensorTensorHook extends FunctionPointer {
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/TensorTensorRefHook.java b/pytorch/src/main/java/org/bytedeco/pytorch/TensorTensorRefHook.java
similarity index 90%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/TensorTensorRefHook.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/TensorTensorRefHook.java
index 40f72a45f54..decb65061d4 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/TensorTensorRefHook.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/TensorTensorRefHook.java
@@ -1,4 +1,4 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
@@ -6,7 +6,6 @@
 import org.bytedeco.javacpp.annotation.ByRef;
 import org.bytedeco.javacpp.annotation.Const;
 import org.bytedeco.javacpp.annotation.Properties;
-import org.bytedeco.pytorch.TensorBase;
 
 @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
 public class TensorTensorRefHook extends FunctionPointer {
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/TransformerActivation.java b/pytorch/src/main/java/org/bytedeco/pytorch/TransformerActivation.java
index 98b5ad1ad65..c82f2e3b56f 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/TransformerActivation.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/TransformerActivation.java
@@ -1,6 +1,5 @@
 package org.bytedeco.pytorch;
 
-import org.bytedeco.pytorch.functions.*;
 import org.bytedeco.javacpp.*;
 import org.bytedeco.javacpp.annotation.*;
 
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/TypeMapper.java b/pytorch/src/main/java/org/bytedeco/pytorch/TypeMapper.java
similarity index 89%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/TypeMapper.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/TypeMapper.java
index cf253f1f7c8..553f5f72790 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/TypeMapper.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/TypeMapper.java
@@ -1,10 +1,9 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
 import org.bytedeco.javacpp.Pointer;
 import org.bytedeco.javacpp.annotation.*;
-import org.bytedeco.pytorch.Type;
 
 @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
 public class TypeMapper extends FunctionPointer {
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/TypeParser.java b/pytorch/src/main/java/org/bytedeco/pytorch/TypeParser.java
similarity index 95%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/TypeParser.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/TypeParser.java
index 200825857af..3629d9a731f 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/TypeParser.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/TypeParser.java
@@ -1,4 +1,4 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.BytePointer;
 import org.bytedeco.javacpp.FunctionPointer;
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/TypePrinter.java b/pytorch/src/main/java/org/bytedeco/pytorch/TypePrinter.java
similarity index 81%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/TypePrinter.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/TypePrinter.java
index f83ce7b650f..147d4bd5d1c 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/TypePrinter.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/TypePrinter.java
@@ -1,4 +1,4 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
@@ -7,8 +7,6 @@
 import org.bytedeco.javacpp.annotation.ByVal;
 import org.bytedeco.javacpp.annotation.Const;
 import org.bytedeco.javacpp.annotation.Properties;
-import org.bytedeco.pytorch.StringOptional;
-import org.bytedeco.pytorch.Type;
 
 @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
 public class TypePrinter extends FunctionPointer {
@@ -29,6 +27,6 @@ protected TypePrinter() {
 
     private native void allocate();
 
-    // std::function<c10::optional<std::string>(const c10::Type&)>
+    // std::function<std::optional<std::string>(const c10::Type&)>
     public native @ByVal StringOptional call(@Const @ByRef Type type);
 }
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/TypeRenamer.java b/pytorch/src/main/java/org/bytedeco/pytorch/TypeRenamer.java
similarity index 86%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/TypeRenamer.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/TypeRenamer.java
index 0094edb2793..126c0bb6e22 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/TypeRenamer.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/TypeRenamer.java
@@ -1,11 +1,9 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
 import org.bytedeco.javacpp.Pointer;
 import org.bytedeco.javacpp.annotation.*;
-import org.bytedeco.pytorch.ClassType;
-import org.bytedeco.pytorch.QualifiedName;
 
 @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
 public class TypeRenamer extends FunctionPointer {
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/TypeResolver.java b/pytorch/src/main/java/org/bytedeco/pytorch/TypeResolver.java
similarity index 85%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/TypeResolver.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/TypeResolver.java
index 629c0050107..afb413838ea 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/TypeResolver.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/TypeResolver.java
@@ -1,11 +1,9 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
 import org.bytedeco.javacpp.Pointer;
 import org.bytedeco.javacpp.annotation.*;
-import org.bytedeco.pytorch.QualifiedName;
-import org.bytedeco.pytorch.StrongTypePtr;
 
 @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
 public class TypeResolver extends FunctionPointer {
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/TypeSupplier.java b/pytorch/src/main/java/org/bytedeco/pytorch/TypeSupplier.java
similarity index 94%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/TypeSupplier.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/TypeSupplier.java
index 4d69ec28b5b..589a00b0f01 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/TypeSupplier.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/TypeSupplier.java
@@ -1,4 +1,4 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/ValueMapper.java b/pytorch/src/main/java/org/bytedeco/pytorch/ValueMapper.java
similarity index 89%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/ValueMapper.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/ValueMapper.java
index 86c52f934a1..065560d0a4c 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/ValueMapper.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/ValueMapper.java
@@ -1,10 +1,9 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
 import org.bytedeco.javacpp.Pointer;
 import org.bytedeco.javacpp.annotation.*;
-import org.bytedeco.pytorch.Value;
 
 @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
 public class ValueMapper extends FunctionPointer {
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/functions/VoidTensorHook.java b/pytorch/src/main/java/org/bytedeco/pytorch/VoidTensorHook.java
similarity index 89%
rename from pytorch/src/main/java/org/bytedeco/pytorch/functions/VoidTensorHook.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/VoidTensorHook.java
index 0f5b0ee53c2..e9a9c6a121f 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/functions/VoidTensorHook.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/VoidTensorHook.java
@@ -1,11 +1,10 @@
-package org.bytedeco.pytorch.functions;
+package org.bytedeco.pytorch;
 
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
 import org.bytedeco.javacpp.Pointer;
 import org.bytedeco.javacpp.annotation.Properties;
 import org.bytedeco.javacpp.annotation.ByVal;
-import org.bytedeco.pytorch.TensorBase;
 
 
 @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/WeakPtr.java b/pytorch/src/main/java/org/bytedeco/pytorch/WeakPtr.java
new file mode 100644
index 00000000000..81b50da7ddd
--- /dev/null
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/WeakPtr.java
@@ -0,0 +1,12 @@
+package org.bytedeco.pytorch;
+
+import org.bytedeco.javacpp.annotation.Adapter;
+
+import java.lang.annotation.*;
+
+@Documented @Retention(RetentionPolicy.RUNTIME)
+@Target({ElementType.METHOD, ElementType.PARAMETER})
+@Adapter("WeakPtrAdapter")
+public @interface WeakPtr {
+    String value() default "";
+}
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/WorkInfoConsumer.java b/pytorch/src/main/java/org/bytedeco/pytorch/WorkInfoConsumer.java
new file mode 100644
index 00000000000..0ef748bc3f0
--- /dev/null
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/WorkInfoConsumer.java
@@ -0,0 +1,31 @@
+package org.bytedeco.pytorch;
+
+import org.bytedeco.javacpp.FunctionPointer;
+import org.bytedeco.javacpp.Loader;
+import org.bytedeco.javacpp.Pointer;
+import org.bytedeco.javacpp.annotation.Cast;
+import org.bytedeco.javacpp.annotation.Properties;
+import org.bytedeco.javacpp.annotation.SharedPtr;
+
+@Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
+public class WorkInfoConsumer extends FunctionPointer {
+    static {
+        Loader.load();
+    }
+
+    /**
+     * Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}.
+     */
+    public WorkInfoConsumer(Pointer p) {
+        super(p);
+    }
+
+    protected WorkInfoConsumer() {
+        allocate();
+    }
+
+    private native void allocate();
+
+    // std::function<void(std::shared_ptr<c10d::WorkInfo>)>
+    public native void call(@SharedPtr @Cast({"", "std::shared_ptr<c10d::WorkInfo>"}) WorkInfo wi);
+}
\ No newline at end of file
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/cuda/functions/AllocatorTraceTracker.java b/pytorch/src/main/java/org/bytedeco/pytorch/cuda/AllocatorTraceTracker.java
similarity index 94%
rename from pytorch/src/main/java/org/bytedeco/pytorch/cuda/functions/AllocatorTraceTracker.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/cuda/AllocatorTraceTracker.java
index 8e35350649b..6743a6fc8dd 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/cuda/functions/AllocatorTraceTracker.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/cuda/AllocatorTraceTracker.java
@@ -1,4 +1,4 @@
-package org.bytedeco.pytorch.cuda.functions;
+package org.bytedeco.pytorch.cuda;
 
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/cuda/functions/OutOfMemoryObserver.java b/pytorch/src/main/java/org/bytedeco/pytorch/cuda/OutOfMemoryObserver.java
similarity index 94%
rename from pytorch/src/main/java/org/bytedeco/pytorch/cuda/functions/OutOfMemoryObserver.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/cuda/OutOfMemoryObserver.java
index 557265eb2fb..0bb5987847b 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/cuda/functions/OutOfMemoryObserver.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/cuda/OutOfMemoryObserver.java
@@ -1,4 +1,4 @@
-package org.bytedeco.pytorch.cuda.functions;
+package org.bytedeco.pytorch.cuda;
 
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/cuda/functions/StreamFilter.java b/pytorch/src/main/java/org/bytedeco/pytorch/cuda/StreamFilter.java
similarity index 93%
rename from pytorch/src/main/java/org/bytedeco/pytorch/cuda/functions/StreamFilter.java
rename to pytorch/src/main/java/org/bytedeco/pytorch/cuda/StreamFilter.java
index 92bbd21c28b..83d63247577 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/cuda/functions/StreamFilter.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/cuda/StreamFilter.java
@@ -1,4 +1,4 @@
-package org.bytedeco.pytorch.cuda.functions;
+package org.bytedeco.pytorch.cuda;
 
 import org.bytedeco.javacpp.FunctionPointer;
 import org.bytedeco.javacpp.Loader;
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/presets/gloo.java b/pytorch/src/main/java/org/bytedeco/pytorch/presets/gloo.java
new file mode 100644
index 00000000000..043a975db04
--- /dev/null
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/presets/gloo.java
@@ -0,0 +1,87 @@
+/*
+ * Copyright (C) 2024 Hervé Guillemet
+ *
+ * Licensed either under the Apache License, Version 2.0, or (at your option)
+ * under the terms of the GNU General Public License as published by
+ * the Free Software Foundation (subject to the "Classpath" exception),
+ * either version 2, or any later version (collectively, the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *     http://www.gnu.org/licenses/
+ *     http://www.gnu.org/software/classpath/license.html
+ *
+ * or as provided in the LICENSE.txt file that accompanied this code.
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.bytedeco.pytorch.presets;
+
+import org.bytedeco.javacpp.ClassProperties;
+import org.bytedeco.javacpp.LoadEnabled;
+import org.bytedeco.javacpp.annotation.Properties;
+import org.bytedeco.javacpp.presets.chrono;
+import org.bytedeco.javacpp.tools.*;
+
+/**
+ * @author Hervé Guillemet
+ */
+@Properties(
+    inherit =  { torch.class, chrono.class },
+    target = "org.bytedeco.pytorch.gloo",
+    global = "org.bytedeco.pytorch.global.gloo"
+)
+public class gloo implements LoadEnabled, InfoMapper {
+
+    @Override
+    public void init(ClassProperties properties) {
+        torch.initIncludes(getClass(), properties);
+    }
+
+    @Override
+    public void map(InfoMap infoMap) {
+
+        //// Instantiation of class templates.
+        infoMap
+            .put(new Info("gloo::ReductionFunction").pointerTypes("ReductionFunctionFloat"))
+            .put(new Info("gloo::ReductionFunction").pointerTypes("ReductionFunctionInt"))
+        ;
+
+        //// Hopefully will skip only the initializers, not the fields:
+        infoMap
+            .put(new Info("ReductionFunction::sum").skip())
+            .put(new Info("ReductionFunction::product").skip())
+            .put(new Info("ReductionFunction::min").skip())
+            .put(new Info("ReductionFunction::max").skip())
+        ;
+
+        //// Renaming to avoid clashes
+        infoMap
+            .put(new Info("gloo::transport::Context").pointerTypes("TransportContext"))
+        ;
+
+	//// Not exported
+	infoMap
+	    .put(new Info("gloo::Slot").skip())
+        ;
+
+        infoMap
+            .put(new Info("__CUDA_ARCH__").define(false))
+        ;
+
+        infoMap.put(new Info("gloo::kOnDeviceThreshold").javaText("public static final long kOnDeviceThreshold = 256 * 1024;"));
+
+        new torch.PointerInfo("gloo::transport::Context").javaBaseName("TransportContext").makeShared(infoMap);
+        new torch.PointerInfo("gloo::transport::Device").makeShared(infoMap);
+
+        //// Unsure whether instantiating these templates would be of any use
+        //// when calling Gloo from PyTorch
+        infoMap
+            .put(new Info("gloo::sum", "gloo::product", "gloo::max", "gloo::min").skip())
+        ;
+    }
+}
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/presets/torch.java b/pytorch/src/main/java/org/bytedeco/pytorch/presets/torch.java
index 9439a59cdf9..1f451327d0c 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/presets/torch.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/presets/torch.java
@@ -42,6 +42,7 @@
 import org.bytedeco.javacpp.annotation.Platform;
 import org.bytedeco.javacpp.annotation.Properties;
 
+import org.bytedeco.javacpp.presets.chrono;
 import org.bytedeco.javacpp.tools.BuildEnabled;
 import org.bytedeco.javacpp.tools.Info;
 import org.bytedeco.javacpp.tools.InfoMap;
@@ -54,16 +55,21 @@
  * @author Samuel Audet, Hervé Guillemet
  */
 @Properties(
-    inherit = openblas.class,
+    inherit = { openblas.class, chrono.class },
     value = {
         @Platform(
             value = {"linux", "macosx", "windows"},
             compiler = "cpp17",
-            define = {"SHARED_PTR_NAMESPACE std", "UNIQUE_PTR_NAMESPACE std"},
+	        // __WINSOCKAPI_ fixes compilation error on windows due to
+	        // inclusion of both V1 and V2 of winsock API.
+            define = {"SHARED_PTR_NAMESPACE std", "UNIQUE_PTR_NAMESPACE std", "USE_C10D_GLOO", "_WINSOCKAPI_"},
             include = {
                 "torch/torch.h",
                 "torch/script.h",
                 "torch/csrc/inductor/aoti_runner/model_container_runner_cpu.h",
+                "torch/csrc/distributed/c10d/ProcessGroupGloo.hpp",
+                "torch/csrc/distributed/c10d/PrefixStore.hpp",
+                "torch/csrc/distributed/c10d/logger.hpp",
 
                 // For inclusion in JNI only, not parsed (compiler needs some complete definitions)
                 "torch/csrc/jit/runtime/instruction.h",
@@ -73,30 +79,52 @@
                 "torch/csrc/jit/serialization/storage_context.h",
 
                 "datasets.h",
-                "pytorch_adapters.h"
+                "pytorch_adapters.h",
+
+		        // Fix link error on Windows:
+		        "gloo/common/logging.cc"
+
             },
             exclude = {"openblas_config.h", "cblas.h", "lapacke_config.h", "lapacke_mangling.h", "lapack.h", "lapacke.h", "lapacke_utils.h"},
-            link = {"c10", "torch_cpu", "torch"},
-            preload = {"gomp@.1", "iomp5", "omp", "tbb@.2", "asmjit", "fbgemm"}
+            link = { "c10", "torch", "torch_cpu" }
         ),
         @Platform(
             value = {"linux", "macosx", "windows"},
-            link = { "c10", "c10_cuda", "torch_cpu", "torch_cuda", "torch" },
-            preload = {"gomp@.1", "iomp5", "omp", "tbb@.2", "asmjit", "fbgemm", "cupti@.12"},
-            includepath = {"/usr/local/cuda/include", "C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.3/include/"},
+            includepath = {"/usr/local/cuda/include", "C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/include/"},
             preloadpath = {
-                "/usr/local/cuda-12.3/lib64/",
-                "/usr/local/cuda-12.3/extras/CUPTI/lib64/",
+                "/usr/local/cuda-12.6/lib64/",
+                "/usr/local/cuda-12.6/extras/CUPTI/lib64/",
                 "/usr/local/cuda/lib64/",
                 "/usr/local/cuda/extras/CUPTI/lib64/",
                 "/usr/lib64/",
-                "C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.3/lib/x64/",
-                "C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.3/extras/CUPTI/lib64/",
+                "C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/lib/x64/",
+                "C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/extras/CUPTI/lib64/",
                 "C:/Program Files/NVIDIA Corporation/NvToolsExt/bin/x64/",
             },
-
             extension = "-gpu"
         ),
+        @Platform(
+            value = "linux",
+            preload = { "gomp@.1" }
+        ),
+        @Platform(
+            value = "macosx",
+            preload = { "iomp5" }
+        ),
+        @Platform(
+            value = "windows",
+            preload = { "uv", "asmjit", "fbgemm" }
+        ),
+        @Platform(
+            value = "linux",
+            extension = "-gpu",
+            preload = { "gomp@.1", "c10_cuda", "torch_cuda" }
+        ),
+        @Platform(
+            value = "windows",
+            extension = "-gpu",
+            preload = { "uv", "asmjit", "fbgemm", "c10_cuda", "torch_cuda" }
+        )
     },
     target = "org.bytedeco.pytorch",
     global = "org.bytedeco.pytorch.global.torch"
@@ -146,11 +174,13 @@ public void init(ClassProperties properties) {
         if (!Loader.isLoadLibraries() || extension == null || !extension.endsWith("-gpu")) {
             return;
         }
+
+        // when built for CUDA, even torch_cpu links with at least cupti and cudart, for some reason
         int i = 0;
         if (platform.startsWith("windows")) {
             preloads.add(i++, "zlibwapi");
         }
-        String[] libs = {"cudart", "cublasLt", "cublas", "cufft", "curand", "nvJitLink", "cusparse", "cusolver",
+        String[] libs = {"cudart", "cublasLt", "cublas", "cufft", "cupti", "curand", "nvJitLink", "cusparse", "cusolver",
                          "cudnn", "nccl", "nvrtc", "nvrtc-builtins", "myelin", "nvinfer",
                          "cudnn_graph", "cudnn_engines_precompiled", "cudnn_engines_runtime_compiled",
                          "cudnn_heuristic", "cudnn_ops", "cudnn_adv", "cudnn_cnn"};
@@ -176,6 +206,7 @@ public void init(ClassProperties properties) {
                     : lib.equals("nvrtc") ? "64_120_0"
                     : lib.equals("nvrtc-builtins") ? "64_126"
                     : lib.equals("nvJitLink") ? "_120_0"
+                    : lib.equals("cupti") ? "64_2024.3.0"
                     : "64_12";
             } else {
                 continue; // no CUDA
@@ -228,7 +259,6 @@ public void mapModule(InfoMap infoMap, String name, String base, String baseBase
                .put(new Info("torch::nn::ModuleHolder").skip())
                .put(new Info("torch::nn::" + name).skip())
                .put(new Info("torch::nn::Module::as").javaNames("as" + name));
-        ;
 
         if (anyModuleCompatible) {
             infoMap
@@ -250,7 +280,8 @@ public static void sharedMap(InfoMap infoMap) {
         infoMap
             .put(new Info().enumerate().friendly())
             .put(new Info("auto", "c10::reverse_iterator", "ska::flat_hash_map", /*"std::atomic", */"std::conditional", "std::iterator_traits",
-                "std::initializer_list", "std::integral_constant", "std::mutex", "std::reverse_iterator", "std::weak_ptr").skip())
+                "std::initializer_list", "std::integral_constant", "std::mutex", "std::reverse_iterator" /*, "std::weak_ptr"*/).skip())
+            .put(new Info("basic/containers").cppTypes("torch::optional"))
         ;
 
         //// Macros
@@ -311,23 +342,24 @@ public void map(InfoMap infoMap) {
 
             .put(new Info().javaText("import org.bytedeco.pytorch.Allocator;"))
             .put(new Info().javaText("import org.bytedeco.pytorch.Function;"))
-            .put(new Info().javaText("import org.bytedeco.pytorch.functions.*;"))
             .put(new Info().javaText("import org.bytedeco.pytorch.Module;"))
             .put(new Info().javaText("import org.bytedeco.javacpp.annotation.Cast;"))
 
-            .put(new Info("basic/containers").cppTypes("c10::optional", "torch::optional"))
             .put(new Info("std::nullptr_t").cast().pointerTypes("PointerPointer"))
 
             .put(new Info("at::CheckedFrom").cast().valueTypes("BytePointer", "String").pointerTypes("PointerPointer")) // Alias to const char*
             .put(new Info("c10::IValue", "at::IValue", "decltype(auto)").pointerTypes("IValue"))
             //             .put(new Info("c10::IValue::operator ==").skip()) // Possible name conflict with IValue.equals
-            .put(new Info("std::size_t", "c10::Dict::size_type",
-                "c10::Dict::size_type").cast().valueTypes("long").pointerTypes("SizeTPointer"))
+            .put(new Info(
+                "std::size_t",
+                "c10::Dict::size_type",
+                "c10::Dict::size_type",
+                "c10::Dict::size_type"
+            ).cast().valueTypes("long").pointerTypes("SizeTPointer"))
             .put(new Info("c10::approx_time_t").cast().valueTypes("long").pointerTypes("LongPointer"))
             .put(new Info("c10::ClassType::Property").pointerTypes("ClassType.Property"))
 
             .put(new Info("at::RecordFunctionHandle").valueTypes("long"))
-            .put(new Info("c10::ivalue::Future::FutureError::FutureError").skip()) // This constructor takes a std::string&&  but parser sends a std::string&
             .put(new Info("operator const std::string&()").javaText( // Hopefully targets the one in ConstantString only
                 "public native @Const @ByRef @Name(\"operator const std::string&\") @StdString @Override String toString();"
             ))
@@ -377,93 +409,96 @@ public void map(InfoMap infoMap) {
             .put(new Info("torch::jit::PickleOpCode").enumerate().translate(false).valueTypes("PickleOpCode"))
         ;
 
-        //// c10::optional
+        //// std::optional
         infoMap
-            .put(new Info("c10::optional").pointerTypes("BoolOptional").define())
-            .put(new Info("c10::optional", "c10::optional").pointerTypes("ByteOptional").define())
-            .put(new Info("c10::optional", "c10::optional").pointerTypes("IntOptional").define())
-            .put(new Info("c10::optional", "c10::remove_symint >::type").pointerTypes("LongOptional").define())
-            .put(new Info("c10::optional").pointerTypes("FloatOptional").define())
-            .put(new Info("c10::optional").pointerTypes("DoubleOptional").define())
-            .put(new Info("c10::optional").pointerTypes("SizeTOptional").define())
-            .put(new Info("c10::optional").pointerTypes("StringOptional").define())
-            .put(new Info("c10::optional >").pointerTypes("BoolVectorOptional").define())
-            .put(new Info("c10::optional >").pointerTypes("LongVectorOptional").define())
-            .put(new Info("c10::optional >").pointerTypes("DoubleVectorOptional").define())
-            .put(new Info("c10::optional >").pointerTypes("SizeTVectorOptional").define())
-            .put(new Info("c10::optional >").pointerTypes("StringVectorOptional").define())
-            .put(new Info("c10::optional >").pointerTypes("StrideVectorOptional").define())
-            .put(new Info("c10::optional >").pointerTypes("ShapeSymbolVectorOptional").define())
-            .put(new Info("c10::optional >").pointerTypes("TensorVectorOptional").define())
-            .put(new Info("c10::optional", "c10::optional", "c10::optional").pointerTypes("DeviceOptional").define())
-            .put(new Info("c10::optional").pointerTypes("DeviceTypeOptional").define())
-            .put(new Info("c10::optional >", "c10::optional", "c10::optional",
-                "c10::OptionalArrayRef", "c10::OptionalIntArrayRef", "at::OptionalIntArrayRef", "c10::remove_symint::type")
+            .put(new Info("std::optional").pointerTypes("BoolOptional").define())
+            .put(new Info("std::optional", "std::optional").pointerTypes("ByteOptional").define())
+            .put(new Info("std::optional", "std::optional").pointerTypes("IntOptional").define())
+            .put(new Info("std::optional", "c10::remove_symint >::type").pointerTypes("LongOptional").define())
+            .put(new Info("std::optional").pointerTypes("FloatOptional").define())
+            .put(new Info("std::optional").pointerTypes("DoubleOptional").define())
+            .put(new Info("std::optional").pointerTypes("SizeTOptional").define())
+            .put(new Info("std::optional").pointerTypes("StringOptional").define())
+            .put(new Info("std::optional >").pointerTypes("BoolVectorOptional").define())
+            .put(new Info("std::optional >").pointerTypes("LongVectorOptional").define())
+            .put(new Info("std::optional >").pointerTypes("DoubleVectorOptional").define())
+            .put(new Info("std::optional >").pointerTypes("SizeTVectorOptional").define())
+            .put(new Info("std::optional >").pointerTypes("StringVectorOptional").define())
+            .put(new Info("std::optional >").pointerTypes("StrideVectorOptional").define())
+            .put(new Info("std::optional >").pointerTypes("ShapeSymbolVectorOptional").define())
+            .put(new Info("std::optional >", "std::optional >").pointerTypes("TensorVectorOptional").define())
+            .put(new Info("std::optional", "std::optional", "std::optional", "optional").pointerTypes("DeviceOptional").define())
+            .put(new Info("std::optional").pointerTypes("DeviceTypeOptional").define())
+            .put(new Info("std::optional >", "std::optional", "std::optional",
+                "at::OptionalIntArrayRef", "c10::remove_symint::type")
                 // This second pointer type prevents optional.swap from working. I don't know exactly why. Skipping swap for now.
                 .pointerTypes("LongArrayRefOptional", "@Cast({\"int64_t*\", \"c10::ArrayRef\", \"std::vector&\"}) @StdVector long...").define())
-            .put(new Info("c10::optional >::swap").skip())
-            .put(new Info("c10::optional >", "c10::optional >",
-                "c10::OptionalArrayRef")
+            .put(new Info("std::optional >::swap").skip())
+            .put(new Info("std::optional >", "std::optional >")
                 .pointerTypes("DoubleArrayRefOptional", "@Cast({\"double*\", \"c10::ArrayRef\", \"std::vector&\"}) @StdVector double...").define())
-            .put(new Info("c10::optional >", "c10::optional >",
-                "c10::OptionalArrayRef", "c10::OptionalSymIntArrayRef", "at::OptionalSymIntArrayRef", "c10::optional").pointerTypes("SymIntArrayRefOptional").define())
-            .put(new Info("c10::optional", "c10::optional").pointerTypes("LayoutOptional").define())
-            .put(new Info("c10::optional", "c10::optional").pointerTypes("MemoryFormatOptional").define())
-            .put(new Info("c10::optional", "c10::optional").pointerTypes("ScalarOptional").define())
-            .put(new Info("c10::optional", "c10::optional", "c10::optional").pointerTypes("ScalarTypeOptional").define())
-            .put(new Info("c10::optional").pointerTypes("AliasInfoOptional").define())
-            .put(new Info("c10::optional").pointerTypes("IValueOptional").define())
-            .put(new Info("c10::optional").pointerTypes("CppSignatureOptional").define())
-            .put(new Info("c10::optional").pointerTypes("DispatchKeyOptional").define())
-            .put(new Info("c10::optional").pointerTypes("OperatorHandleOptional").define())
-            .put(new Info("c10::optional").pointerTypes("OperatorNameOptional").define())
-            .put(new Info("c10::optional").pointerTypes("QualifiedNameOptional").define())
-            .put(new Info("c10::optional").pointerTypes("StreamOptional").define())
-            .put(new Info("c10::optional").pointerTypes("StrideOptional").define())
-            .put(new Info("c10::optional").pointerTypes("TypePtrOptional").define())
-            .put(new Info("c10::optional").pointerTypes("ClassTypePropertyOptional").define())
-            .put(new Info("c10::optional").pointerTypes("AliasTypeSetOptional").define())
-            .put(new Info("c10::optional").pointerTypes("FunctionSchemaOptional").define())
-            .put(new Info("c10::optional", "c10::optional").pointerTypes("SymDimVectorOptional").define())
-            .put(new Info("c10::optional").pointerTypes("SymIntOptional").define())
-            .put(new Info("c10::optional").pointerTypes("IValueOptional").define())
-            .put(new Info("c10::optional").pointerTypes("DimVectorOptional").define())
-            .put(new Info("c10::optional").pointerTypes("DimnameOptional").define())
-            .put(new Info("c10::optional").pointerTypes("DimnameListOptional").define())
-            .put(new Info("c10::optional").pointerTypes("GeneratorOptional").define())
-            .put(new Info("c10::optional", "c10::optional", "c10::optional", "c10::optional", "c10::optional").pointerTypes("TensorOptional").define())
-            .put(new Info("c10::optional", "c10::optional").pointerTypes("TensorArrayRefOptional").define())
-            .put(new Info("c10::optional").pointerTypes("TypeMetaOptional").define())
-            .put(new Info("c10::optional").pointerTypes("ExecutorExecutionModeOptional").define())
-            .put(new Info("c10::optional::operator ->").skip()) // Returns a pointer to ExecutorExecutionMode, which is an enum
-            .put(new Info("const c10::optional", "c10::optional",
-                "c10::optional").cast().pointerTypes("InlinedCallStackOptional").define())
-            .put(new Info("c10::optional",
-                "c10::optional").cast().pointerTypes("ScopeOptional").define())
-            .put(new Info("c10::optional").pointerTypes("ModuleInstanceInfoOptional").define())
-            .put(new Info("c10::optional").pointerTypes("SourceRangeOptional").define())
-            .put(new Info("c10::optional").pointerTypes("MethodOptional").define())
-            .put(new Info("c10::optional", "c10::optional").pointerTypes("NamedValueOptional").define())
-            .put(new Info("c10::optional").pointerTypes("ValueOptional").define())
-            .put(new Info("c10::optional >",
-                "c10::optional >",
-                "c10::optional >").cast().pointerTypes("LongExpandingArrayOptional").define())
-            .put(new Info("c10::optional >",
-                "c10::optional >",
-                "c10::optional >",
-                "c10::optional::ExpandingArrayDouble>",
-                "c10::optional::ExpandingArrayDouble>",
-                "c10::optional::ExpandingArrayDouble>").cast().pointerTypes("DoubleExpandingArrayOptional").define())
-            .put(new Info("c10::optional >").pointerTypes("T_StringSizeTSizeT_TOptional").define())
+            .put(new Info("std::optional >", "std::optional >",
+                "std::optional", "at::OptionalSymIntArrayRef").pointerTypes("SymIntArrayRefOptional").define())
+            .put(new Info("std::optional", "std::optional", "optional").pointerTypes("LayoutOptional").define())
+            .put(new Info("std::optional", "std::optional").pointerTypes("MemoryFormatOptional").define())
+            .put(new Info("std::optional", "std::optional").pointerTypes("ScalarOptional").define())
+            .put(new Info("std::optional", "std::optional", "std::optional", "optional", "optional").pointerTypes("ScalarTypeOptional").define())
+            .put(new Info("std::optional").pointerTypes("AliasInfoOptional").define())
+            .put(new Info("std::optional").pointerTypes("IValueOptional").define())
+            .put(new Info("std::optional").pointerTypes("CppSignatureOptional").define())
+            .put(new Info("std::optional").pointerTypes("DispatchKeyOptional").define())
+            .put(new Info("std::optional").pointerTypes("OperatorHandleOptional").define())
+            .put(new Info("std::optional").pointerTypes("OperatorNameOptional").define())
+            .put(new Info("std::optional").pointerTypes("QualifiedNameOptional").define())
+            .put(new Info("std::optional", "optional").pointerTypes("StreamOptional").define())
+            .put(new Info("std::optional").pointerTypes("StrideOptional").define())
+            .put(new Info("std::optional").pointerTypes("TypePtrOptional").define())
+            .put(new Info("std::optional").pointerTypes("ClassTypePropertyOptional").define())
+            .put(new Info("std::optional").pointerTypes("AliasTypeSetOptional").define())
+            .put(new Info("std::optional").pointerTypes("FunctionSchemaOptional").define())
+            .put(new Info("std::optional", "std::optional").pointerTypes("SymDimVectorOptional").define())
+            .put(new Info("std::optional").pointerTypes("SymIntOptional").define())
+            .put(new Info("std::optional").pointerTypes("IValueOptional").define())
+            .put(new Info("std::optional").pointerTypes("DimVectorOptional").define())
+            .put(new Info("std::optional").pointerTypes("DimnameOptional").define())
+            .put(new Info("std::optional").pointerTypes("DimnameListOptional").define())
+            .put(new Info("std::optional").pointerTypes("GeneratorOptional").define())
+            .put(new Info("std::optional", "std::optional", "std::optional", "std::optional", "std::optional").pointerTypes("TensorOptional").define())
+            .put(new Info("std::optional", "std::optional").pointerTypes("TensorArrayRefOptional").define())
+            .put(new Info("std::optional", "optional").pointerTypes("TypeMetaOptional").define())
+            .put(new Info("std::optional").pointerTypes("ExecutorExecutionModeOptional").define())
+            .put(new Info("std::optional::operator ->").skip()) // Returns a pointer to ExecutorExecutionMode, which is an enum
+            .put(new Info("const std::optional", "std::optional",
+                "std::optional").cast().pointerTypes("InlinedCallStackOptional").define())
+            .put(new Info("std::optional",
+                "std::optional").cast().pointerTypes("ScopeOptional").define())
+            .put(new Info("std::optional").pointerTypes("ModuleInstanceInfoOptional").define())
+            .put(new Info("std::optional").pointerTypes("SourceRangeOptional").define())
+            .put(new Info("std::optional").pointerTypes("MethodOptional").define())
+            .put(new Info("std::optional", "std::optional").pointerTypes("NamedValueOptional").define())
+            .put(new Info("std::optional").pointerTypes("ValueOptional").define())
+            .put(new Info("std::optional >",
+                "std::optional >",
+                "std::optional >").cast().pointerTypes("LongExpandingArrayOptional").define())
+            .put(new Info("std::optional >",
+                "std::optional >",
+                "std::optional >",
+                "std::optional::ExpandingArrayDouble>",
+                "std::optional::ExpandingArrayDouble>",
+                "std::optional::ExpandingArrayDouble>").cast().pointerTypes("DoubleExpandingArrayOptional").define())
+            .put(new Info("std::optional >").pointerTypes("T_StringSizeTSizeT_TOptional").define())
             .put(new Info("torch::optional >").pointerTypes("T_TensorTensor_TOptional").define())
-            .put(new Info("c10::optional >", "c10::optional >").pointerTypes("T_TypePtrLong_TOptional").cast().define())
-            .put(new Info("c10::optional").pointerTypes("StringViewOptional").define())
-            .put(new Info("c10::optional >").pointerTypes("StringViewVectorOptional").define())
-            .put(new Info("c10::optional >", "c10::optional >")/*.cast?*/.pointerTypes("PointerPairOptional").define())
-            .put(new Info("c10::optional > >", "c10::optional >").pointerTypes("WeakStorageVectorOptional").define())
-            .put(new Info("c10::optional").pointerTypes("CppSignatureOptional").define())
-            .put(new Info("c10::optional >").pointerTypes("SafePyObjectOptional").define())
-            .put(new Info("c10::optional >").pointerTypes("BytePointerPairOptional").define())
+            .put(new Info("std::optional >", "std::optional >").pointerTypes("T_TypePtrLong_TOptional").cast().define())
+            .put(new Info("std::optional").pointerTypes("StringViewOptional").define())
+            .put(new Info("std::optional >").pointerTypes("StringViewVectorOptional").define())
+            .put(new Info("std::optional >", "std::optional >")/*.cast?*/.pointerTypes("PointerPairOptional").define())
+            .put(new Info("std::optional > >", "std::optional >").pointerTypes("WeakStorageVectorOptional").define())
+            .put(new Info("std::optional").pointerTypes("CppSignatureOptional").define())
+            .put(new Info("std::optional >").pointerTypes("SafePyObjectOptional").define())
+            .put(new Info("std::optional >").pointerTypes("BytePointerPairOptional").define())
+            .put(new Info("std::optional >").pointerTypes("DistributedBackendOptional").define())
+            .put(new Info("std::optional >").pointerTypes("LoggerOptional").define())
+            //.put(new Info("std::optional >").pointerTypes("StringSupplierOptional").define()) // .get() of the optional would return a std::function
+            .put(new Info("std::optional > >", "std::optional >").pointerTypes("PyObject_TorchDispatchModeOptional").define())
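+            // Minimal Java-side sketch (hypothetical usage; has_value()/get() are assumptions about the
+            // accessors the generated optional wrappers expose):
+            //   DoubleOptional d = new DoubleOptional(0.5);
+            //   double v = d.has_value() ? d.get() : 0.0;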
         ;
 
 
@@ -609,7 +644,7 @@ public void map(InfoMap infoMap) {
             .put(new Info("std::array").pointerTypes("PointerPointer"))
             .put(new Info("std::array").cast().pointerTypes("FunctionalityOffsetAndMask"))
             .put(new Info("std::array").pointerTypes("IntPointer").cast())
-            .put(new Info("std::array >,at::COMPILE_TIME_MAX_DEVICE_TYPES>").pointerTypes("PointerPairOptional").cast())
+            .put(new Info("std::array >,at::COMPILE_TIME_MAX_DEVICE_TYPES>").pointerTypes("PointerPairOptional").cast())
             .put(new Info("std::array").pointerTypes("BytePointer").cast())
         ;
 
@@ -617,6 +652,7 @@ public void map(InfoMap infoMap) {
         //// std::vector
         infoMap
             .put(new Info("std::vector").pointerTypes("BoolVector").define())
+            .put(new Info("std::vector", "std::vector").pointerTypes("ByteVector").define().cast()) // cast to accommodate signed/unsigned
             .put(new Info("std::vector").pointerTypes("BytePointerVector").define())
             .put(new Info("std::vector", "std::tuple,std::vector >").cast().pointerTypes("LongVector").define())
             .put(new Info("std::vector").cast().pointerTypes("DoubleVector").define())
@@ -629,8 +665,8 @@ public void map(InfoMap infoMap) {
             .put(new Info("std::vector", "std::vector").pointerTypes("QEngineVector").define())
             .put(new Info("std::vector").pointerTypes("ScalarTypeVector").define())
             .put(new Info("std::vector").pointerTypes("SymbolVector").define())
-            .put(new Info("std::vector >").pointerTypes("LongOptionalVector").define())
-            .put(new Info("std::vector >").pointerTypes("IValueOptionalVector").define())
+            .put(new Info("std::vector >").pointerTypes("LongOptionalVector").define())
+            .put(new Info("std::vector >").pointerTypes("IValueOptionalVector").define())
             .put(new Info("std::vector >", "std::vector").pointerTypes("SharedClassTypeVector").define())
             .put(new Info("std::vector >", "std::vector",
                 "std::vector", "c10::AliasTypeSet").pointerTypes("TypeVector").define())
@@ -642,7 +678,7 @@ public void map(InfoMap infoMap) {
             .put(new Info("std::vector", "std::vector", "std::vector", "torch::autograd::variable_list")
                 .pointerTypes("TensorVector").define())
             .put(new Info("std::vector", "std::vector").pointerTypes("TensorIndexVector").define())
-            .put(new Info("std::vector >").pointerTypes("TensorOptionalVector").define())
+            .put(new Info("std::vector >").pointerTypes("TensorOptionalVector").define())
             .put(new Info("const std::vector >",
                 "std::vector >").pointerTypes("FunctionPreHookVector").define())
             .put(new Info("const std::vector >",
@@ -672,6 +708,11 @@ public void map(InfoMap infoMap) {
             .put(new Info("const std::vector >", "std::vector >").pointerTypes("WeakStorageVector").define())
             .put(new Info("std::vector").pointerTypes("TagVector").define())
             .put(new Info("std::vector >").pointerTypes("ReadAdapterInterfaceVector").define())
+            .put(new Info("std::vector >").pointerTypes("SizeTVectorVector").define())
+            .put(new Info("std::vector >", "std::vector").pointerTypes("LongArrayRefVector").define())
+            .put(new Info("std::vector >").pointerTypes("FutureVector").define())
+            .put(new Info("std::vector >").pointerTypes("SymNodeVector").define())
+            .put(new Info("std::vector >").pointerTypes("GlooDeviceVector").define())
         ;
 
 
@@ -692,7 +733,7 @@ public void map(InfoMap infoMap) {
             new ArrayInfo("EnumNameValue").elementTypes("c10::EnumNameValue"),
             new ArrayInfo("Float").itPointerType("FloatPointer").elementTypes("float").elementValueType("float"),
             new ArrayInfo("FloatComplex") /*.itPointerType("FloatPointer") */.elementTypes("c10::complex"),
-            new ArrayInfo("FuturePtr").elementTypes("c10::intrusive_ptr"),
+            new ArrayInfo("Future").elementTypes("c10::intrusive_ptr"),
             new ArrayInfo("Half") /*.itPointerType("ShortPointer") */.elementTypes("decltype(::c10::impl::ScalarTypeToCPPType<::c10::ScalarType::Half>::t)"),
             new ArrayInfo("IValue").elementTypes("c10::IValue", "const at::IValue").otherPointerTypes("IValueVector"),
             new ArrayInfo("Int")
@@ -701,11 +742,11 @@ public void map(InfoMap infoMap) {
                 .elementValueType("int"),
             new ArrayInfo("Tag").itPointerType("BytePointer").elementTypes("at::Tag"),
             new ArrayInfo("Long") // Warning : c10::IntArrayRef is a Java LongArrayRef and not a Java IntArrayRef
-                                  .otherCppNames("c10::IntArrayRef", "torch::IntArrayRef", "at::IntArrayRef", "c10::OptionalArray", "c10::remove_symint::type")
+                                  .otherCppNames("c10::IntArrayRef", "torch::IntArrayRef", "at::IntArrayRef", "c10::remove_symint::type")
                                   .itPointerType("LongPointer")
                                   .elementTypes("int64_t", "jlong") // Order is important, since ArrayRef and ArrayRef are incompatible, even though long == long long. And jlong is long long.
                                   .elementValueType("long"),
-            new ArrayInfo("LongOptional").elementTypes("c10::optional").otherPointerTypes("LongOptionalVector"),
+            new ArrayInfo("LongOptional").elementTypes("std::optional").otherPointerTypes("LongOptionalVector"),
             new ArrayInfo("NamedValue").elementTypes("torch::jit::NamedValue"),
             new ArrayInfo("Scalar").elementTypes("at::Scalar"),
             new ArrayInfo("ScalarType").itPointerType("@Cast(\"c10::ScalarType*\") BytePointer").elementTypes("c10::ScalarType", "at::ScalarType").otherPointerTypes("ScalarTypeVector"),
@@ -714,12 +755,12 @@ public void map(InfoMap infoMap) {
             new ArrayInfo("Stride").elementTypes("c10::Stride").otherPointerTypes("StrideVector"),
             new ArrayInfo("String").itPointerType("PointerPointer" /*"@Cast({\"\", \"std::string*\"}) @StdString BytePointer"*/).elementTypes("std::string").otherPointerTypes("StringVector"),
             new ArrayInfo("SymInt").otherCppNames("c10::SymIntArrayRef").elementTypes("c10::SymInt"),
-            new ArrayInfo("SymNode").elementTypes("c10::SymNode", "c10::intrusive_ptr"),
+            new ArrayInfo("SymNode").elementTypes("c10::intrusive_ptr", "c10::SymNode"),
             new ArrayInfo("Symbol").elementTypes("c10::Symbol").otherPointerTypes("SymbolVector"),
             new ArrayInfo("Tensor").otherCppNames("torch::TensorList", "at::TensorList", "at::ITensorListRef").elementTypes("torch::Tensor", "at::Tensor").otherPointerTypes("TensorVector"),  // Warning: not a TensorList (List)
             new ArrayInfo("TensorArg").elementTypes("torch::TensorArg", "at::TensorArg"),
             new ArrayInfo("TensorIndex").elementTypes("at::indexing::TensorIndex").otherPointerTypes("TensorIndexVector"),
-            new ArrayInfo("TensorOptional").elementTypes("c10::optional", "c10::optional", "c10::optional").otherPointerTypes("TensorOptionalVector"),
+            new ArrayInfo("TensorOptional").elementTypes("std::optional", "std::optional", "std::optional").otherPointerTypes("TensorOptionalVector"),
             new ArrayInfo("Type").itPointerType("Type.TypePtr").elementTypes("c10::TypePtr", "c10::Type::TypePtr").otherPointerTypes("TypeVector"),
             new ArrayInfo("Value").elementTypes("torch::jit::Value*").otherPointerTypes("ValueVector")
 
@@ -775,9 +816,9 @@ public void map(InfoMap infoMap) {
             new ArrayInfo("Boolean").elementTypes("bool").elementValueType("boolean"),
             new ArrayInfo("Long").elementTypes("int64_t").elementValueType("long"),
             new ArrayInfo("Double").elementTypes("double").elementValueType("double"),
-            new ArrayInfo("TensorOptional").elementTypes("c10::optional"),
+            new ArrayInfo("TensorOptional").elementTypes("std::optional"),
             new ArrayInfo("Tensor").elementTypes("at::Tensor"),
-            new ArrayInfo("FuturePtr").elementTypes("c10::intrusive_ptr"),
+            new ArrayInfo("Future").elementTypes("c10::intrusive_ptr").elementValueType("@IntrusivePtr(\"c10::ivalue::Future\") Future"),
             new ArrayInfo("Generic").elementTypes("c10::IValue").itPointerType("IValue").elementValueType("@ByVal IValue"),
         }) {
             ai.mapList(infoMap);
@@ -838,6 +879,7 @@ public void map(InfoMap infoMap) {
         infoMap
             .put(new Info("std::map").pointerTypes("StringStringMap").define())
             .put(new Info("std::map").pointerTypes("StringLongMap").define())
+            .put(new Info("std::map").pointerTypes("StringTensorMap").define()) // Used by distributed only
         ;
 
 
@@ -849,7 +891,11 @@ public void map(InfoMap infoMap) {
             .put(new Info("std::unordered_set", "std::unordered_set").pointerTypes("TensorImplSet").define())
             .put(new Info("std::unordered_set").pointerTypes("NodeSet").define())
             .put(new Info("std::unordered_set").pointerTypes("DeviceTypeSet").define())
+            .put(new Info("std::unordered_set", "std::unordered_set").pointerTypes("ShortSet").define())
             .put(new Info("std::set").pointerTypes("ActivityTypeSet").define())
+            .put(new Info("std::unordered_map").pointerTypes("SizeTStringMap").define())
+            // .put(new Info("std::unordered_map >").pointerTypes("LongRecvRpcBackwardMap").define()) // Not on Windows
+            // .put(new Info("std::unordered_map >").pointerTypes("LongSendRpcBackwardMap").define())
         ;
 
 
@@ -864,8 +910,13 @@ public void map(InfoMap infoMap) {
             .put(new Info("std::unordered_map").pointerTypes("StringValueMap").define())
             .put(new Info("std::unordered_map").pointerTypes("ValueValueMap").define())
             .put(new Info("std::unordered_map").pointerTypes("ArgumentSpecExecutionPlanMap").define())
-            .put(new Info("std::unordered_map").pointerTypes("TreeRefStringMap").define())
+            .put(new Info("std::unordered_map", "std::unordered_map,std::string>").pointerTypes("TreeStringMap").define())
             .put(new Info("std::unordered_map").pointerTypes("StringIntMap").define())
+            .put(new Info(
+                "const std::unordered_map",
+                "std::unordered_map" // Fixes an erroneous namespace qualification due to a previous `using Node::Node`
+            ).pointerTypes("NodeNodeCallMap").define())
+            .put(new Info("std::unordered_map").pointerTypes("HashIdentityIValueMap").define())
         ;
 
 
@@ -874,7 +925,7 @@ public void map(InfoMap infoMap) {
             .put(new Info("std::atomic_bool", "std::atomic").cast().valueTypes("boolean").pointerTypes("BoolPointer"))
             .put(new Info("std::atomic_uint64_t", "std::atomic", "std::atomic", "std::atomic_size_t", "std::atomic").cast().valueTypes("long").pointerTypes("LongPointer"))
             .put(new Info("std::atomic").cast().pointerTypes("DeviceGuardImplInterface"))
-            .put(new Info("std::atomic").cast().valueTypes("int").pointerTypes("IntPointer"));
+            .put(new Info("std::atomic").cast().valueTypes("int").pointerTypes("IntPointer"))
         ;
 
 
@@ -909,6 +960,9 @@ public void map(InfoMap infoMap) {
             .put(new Info("const std::tuple", "std::tuple").pointerTypes("T_DataPtrSizeT_T").define())
             .put(new Info("std::tuple", "std::pair").pointerTypes("T_TypePtrLong_T").define()) // Parse this pair as tuple because Parser doesn't generate valid code for optional
             .put(new Info("std::tuple,c10::impl::TorchDispatchModeKey>").pointerTypes("T_SafePyObjectTorchDispatchModeKey_T").define())
+            //.put(new Info("std::tuple,std::vector > >").pointerTypes("T_MessageWeakStorage_T").define()) // Message not on Windows
+            .put(new Info("std::tuple >,std::vector >").pointerTypes("T_SizeTVectorVectorSizeTVector_T").define())
+            .put(new Info("std::tuple,c10::impl::TorchDispatchModeKey>").pointerTypes("T_PyObject_TorchDispatchModeTorchDispatchModeKey_T").define())
         ;
 
 
@@ -938,33 +992,48 @@ public void map(InfoMap infoMap) {
                    .put(new Info(template("torch::jit::List", t[1]) + "::map").skip()) // Could map if needed
             ;
         }
-        infoMap.put(new Info("torch::jit::TreeList::const_iterator").cast().pointerTypes("TreeRef"));
+        infoMap.put(new Info("torch::jit::TreeList::const_iterator").cast().pointerTypes("Tree"));
 
 
         //// c10 Dict
+        for (String[] d : new String[][] {
+            { "c10::IValue", "c10::IValue", "Generic" },
+            { "std::string", "c10::impl::GenericList", "StringGenericList" },
+            { "torch::Tensor", "torch::Tensor", "TensorTensor" }
+        }) {
+            infoMap
+                .put(new Info(template("c10::Dict", d[0], d[1])).purify().pointerTypes(d[2] + "Dict"))
+                .put(new Info(template("c10::impl::DictEntryRef", d[0], d[1], "c10::detail::DictImpl::dict_map_type::iterator")).pointerTypes("GenericDictEntryRef"))
+                .put(new Info(template("c10::impl::DictIterator", d[0], d[1], "c10::detail::DictImpl::dict_map_type::iterator"),
+                    template("c10::Dict", d[0], d[1]) + "::iterator").purify().pointerTypes(d[2] + "DictIterator").friendly())
+                //.put(new Info("c10::Dict(c10::TypePtr, c10::TypePtr)").skip())
+                // Don't know how to map :difference_type
+                .put(new Info(template("c10::impl::DictIterator", d[0], d[1], "c10::detail::DictImpl::dict_map_type::iterator") + "::operator -").skip())
+                /* Following operators throw a template error "no match", even in C++. */
+                .put(new Info(template("c10::impl::DictIterator", d[0], d[1], "c10::detail::DictImpl::dict_map_type::iterator")
+                              + "::operator <(const " + template("c10::impl::DictIterator", d[0], d[1], "c10::detail::DictImpl::dict_map_type::iterator")
+                              + "&, const " + template("c10::impl::DictIterator", d[0], d[1], "c10::detail::DictImpl::dict_map_type::iterator") + "&)").skip())
+                .put(new Info(template("c10::impl::DictIterator", d[0], d[1], "c10::detail::DictImpl::dict_map_type::iterator")
+                              + "::operator <=(const " + template("c10::impl::DictIterator", d[0], d[1], "c10::detail::DictImpl::dict_map_type::iterator")
+                              + "&, const " + template("c10::impl::DictIterator", d[0], d[1], "c10::detail::DictImpl::dict_map_type::iterator") + "&)").skip())
+                .put(new Info(template("c10::impl::DictIterator", d[0], d[1], "c10::detail::DictImpl::dict_map_type::iterator")
+                              + "::operator >=(const " + template("c10::impl::DictIterator", d[0], d[1], "c10::detail::DictImpl::dict_map_type::iterator")
+                              + "&, const " + template("c10::impl::DictIterator", d[0], d[1], "c10::detail::DictImpl::dict_map_type::iterator") + "&)").skip())
+                .put(new Info(template("c10::impl::DictIterator", d[0], d[1], "c10::detail::DictImpl::dict_map_type::iterator")
+                              + "::operator >(const " + template("c10::impl::DictIterator", d[0], d[1], "c10::detail::DictImpl::dict_map_type::iterator")
+                              + "&, const " + template("c10::impl::DictIterator", d[0], d[1], "c10::detail::DictImpl::dict_map_type::iterator") + "&)").skip())
+            ;
+        }
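+        // The loop above expands to the Java classes GenericDict, StringGenericListDict and TensorTensorDict,
+        // each with its matching *DictIterator; all three DictEntryRef instantiations map to the single
+        // GenericDictEntryRef class.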
         infoMap
-            .put(new Info("c10::Dict").purify().pointerTypes("GenericDict"))
-            .put(new Info("c10::impl::DictEntryRef").pointerTypes("GenericDictEntryRef"))
-            .put(new Info("c10::impl::DictIterator",
-                "c10::Dict::iterator").purify().pointerTypes("GenericDictIterator").friendly())
-            .put(new Info("c10::Dict").pointerTypes("StringGenericListDict"))
-            .put(new Info("c10::Dict(c10::TypePtr, c10::TypePtr)").skip())
-            .put(new Info(
-                "c10::impl::DictIterator::operator -(const c10::impl::DictIterator&, const c10::impl::DictIterator&)",
-                "c10::impl::DictIterator::operator -").skip()) // Don't know how to map :difference_type
-
-            /* Following operators throw a template error "no match", even in C++. */
+            .put(new Info("c10::impl::DictIterator::operator -(const c10::impl::DictIterator&, const c10::impl::DictIterator&)").skip())
             .put(new Info("c10::Dict::iterator::operator <(const c10::Dict::iterator&, const c10::Dict::iterator&)").skip())
-            .put(new Info("c10::impl::DictIterator::operator <(const c10::impl::DictIterator&, const c10::impl::DictIterator&)").skip())
             .put(new Info("c10::Dict::iterator::operator <=(const c10::Dict::iterator&, const c10::Dict::iterator&)").skip())
-            .put(new Info("c10::impl::DictIterator::operator <=(const c10::impl::DictIterator&, const c10::impl::DictIterator&)").skip())
             .put(new Info("c10::Dict::iterator::operator >=(const c10::Dict::iterator&, const c10::Dict::iterator&)").skip())
-            .put(new Info("c10::impl::DictIterator::operator >=(const c10::impl::DictIterator&, const c10::impl::DictIterator&)").skip())
             .put(new Info("c10::Dict::iterator::operator >(const c10::Dict::iterator&, const c10::Dict::iterator&)").skip())
-            .put(new Info("c10::impl::DictIterator::operator >(const c10::impl::DictIterator&, const c10::impl::DictIterator&)").skip())
         ;
 
 
+
         //// torch::OrderedDict
         for (String[] o: new String[][] {
             { "std::string", "torch::Tensor", "StringTensor" },
@@ -992,16 +1061,18 @@ public void map(InfoMap infoMap) {
             .put(new Info("std::pair").pointerTypes("SizeTMatchedSchemaPair").define())
             .put(new Info("std::pair").pointerTypes("BytePointerPair").define())
             .put(new Info("std::pair").pointerTypes("EnumNameValue").define())
+            .put(new Info("std::pair").pointerTypes("IntPair").define())
         ;
 
-        //// Intrusive pointers
+
+        //// c10::intrusive_ptr
         /* We cannot define an adapter working like SharedPtrAdapter since there is no public constructor of
           intrusive_ptr taking a T*. */
         for (PointerInfo pi : new PointerInfo[]{
             new PointerInfo("at::Quantizer"),
             new PointerInfo("c10::GeneratorImpl"),
             new PointerInfo("c10::ivalue::Tuple"),
-            new PointerInfo("c10::ivalue::Future", "at::ivalue::Future"),
+            new PointerInfo("c10::ivalue::Future", "at::ivalue::Future", "torch::distributed::rpc::JitFuture"),
             new PointerInfo("c10::ivalue::ConstantString"),
             new PointerInfo("c10::ivalue::Await"),
             new PointerInfo("c10::ivalue::Object").javaBaseName("Obj"),
@@ -1011,51 +1082,46 @@ public void map(InfoMap infoMap) {
             new PointerInfo("c10::TensorImpl"),
             new PointerInfo("c10::TensorImpl,c10::UndefinedTensorImpl").javaBaseName("TensorImpl"),
             new PointerInfo("c10::StorageImpl", "c10::StorageImpl,NullType"),
-            new PointerInfo("c10::SymNodeImpl").javaName("SymNode"),
-            new PointerInfo("c10::BackendMeta").javaName("BackendMetaRef"), // Warning: BackendMetaPtr is sth different
-            new PointerInfo("torch::jit::Tree").javaName("TreeRef"),
+            new PointerInfo("c10::SymNodeImpl").javaBaseName("SymNode"),
+            new PointerInfo("c10::BackendMeta"), //.javaBaseName("BackendMetaRef"), // Warning: BackendMetaPtr is something different
+            new PointerInfo("torch::jit::Tree").otherCppNames("torch::jit::TreeRef"),
+
+            new PointerInfo("c10d::Store"),
+            new PointerInfo("c10d::ProcessGroup::Options"),
+            new PointerInfo("c10d::Work"),
+            new PointerInfo("c10d::Backend").javaBaseName("DistributedBackend"),
+            new PointerInfo("c10d::_SupplementBase"),
+            new PointerInfo("c10d::ProcessGroup"),
+            new PointerInfo("intra_node_comm::IntraNodeComm"),
+            //new PointerInfo("torch::distributed::rpc::Message"), // Not on Windows
+            new PointerInfo("c10d::ProcessGroupGloo::AsyncWork"),
+            new PointerInfo("c10d::ProcessGroupGloo::Options"),
+            new PointerInfo("c10d::ProcessGroupGloo")
         }) {
-            String[] cppNames = new String[pi.argumentNames.length + pi.otherCppNames.length];
-            int i = 0;
-            for (String n : pi.argumentNames) {
-                String ipn = template("c10::intrusive_ptr", n);
-                cppNames[i++] = ipn;
-                // Skipping constructor taking a unique_ptr
-                infoMap.put(new Info(ipn + "(" + n + "*)").skip());
-                /* If we need to map a unique_ptr with this type, we need to disambiguate constructor
-                with something like:
-                infoMap.put(new Info(ipn + "(" + upn + ")").javaText(
-                        "public " + pi.javaName + "(" + xxx + " rhs) { super((Pointer)null); allocate(rhs); }\n" +
-                        "@NoException(true) private native void allocate(@Cast({\"\", \"" + upn + "\"}) @UniquePtr " + xxx + " rhs);"));
-                 */
-            }
-            for (String n : pi.otherCppNames)
-                cppNames[i++] = n;
-            infoMap.put(new Info(cppNames).pointerTypes(pi.javaName == null ? (pi.javaBaseName + "Ptr") : pi.javaName));
-
+            pi.makeIntrusive(infoMap);
         }
+        infoMap.put(new Info("c10::ivalue::Object").pointerTypes("Obj"));
+        infoMap.put(new Info("torch::distributed::rpc::JitFuture").pointerTypes("Future"));
+        infoMap.put(new Info("c10::SymNodeImpl").pointerTypes("SymNode"));
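+        // These aliases point the underlying C++ types at the same Java classes as their intrusive_ptr
+        // wrappers above, so bare references to c10::ivalue::Object, JitFuture and SymNodeImpl resolve
+        // to Obj, Future and SymNode respectively.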
 
 
         //// Classes that Parser cannot detect as virtual
-        infoMap.put(new Info("c10::Error", "c10::IndexError", "c10::LinAlgError", "c10::ValueError", "c10::TypeError",
-            "c10::DistError", "c10::DistNetworkError", "c10::DistStoreError",
-            "c10::NotImplementedError", "c10::EnforceFiniteError", "c10::OutOfMemoryError", "c10::ErrorAlwaysShowCppStacktrace",
-            "c10::OnnxfiBackendSystemError", "c10::DistBackendError", "c10::SharedType", "c10::StrongTypePtr",
+        infoMap.put(new Info("c10::SharedType", "c10::StrongTypePtr",
             "c10::WeakTypePtr", "torch::autograd::CppFunctionPreHook", "torch::autograd::DifferentiableViewMeta",
             "torch::autograd::TraceableFunction", "torch::jit::Instruction", "torch::jit::Method", "torch::jit::ModuleInstanceInfo",
             "torch::jit::Object::Property", "torch::jit::OperatorSet", "torch::jit::SourceRangePickler", "torch::jit::Unpickler",
-            "torch::jit::Operator", "c10::CuDNNError").purify());
+            "torch::jit::Operator").purify());
 
 
         /// Classes skipped for various non-investigated reasons
         infoMap
-            .put(new Info("c10::guts::is_fundamental",
-                "c10::detail::CaptureKernelCall", "c10::detail::DictImpl", "c10::detail::MultiDispatchKeySet", "c10::ExclusivelyOwnedTraits", "c10::FunctionSchema::dump",
+            .put(new Info(
+                "c10::detail::MultiDispatchKeySet", "c10::ExclusivelyOwnedTraits", "c10::FunctionSchema::dump",
                 "c10::domain_prefix", "c10::C10FlagsRegistry", "c10::enforce_detail::EnforceFailMessage", "c10::impl::build_feature_required_feature_not_available",
                 "c10::detail::getMaybeFakeTypePtr_", "c10::complex_literals::operator \"\"_if", "c10::complex_literals::operator \"\"_id",
                 "decltype(::c10::impl::ScalarTypeToCPPType<::c10::ScalarType::ComplexHalf>::t)", "c10::BoxedKernel", "c10::ExtraMeta", "c10::remove_symint",
                 "c10::InefficientStdFunctionContext", "c10::DataPtr::move_context", "c10::detail::UniqueVoidPtr::move_context", "QuantizerPtr", "c10::IValue::toModule", "c10::toBackendComponent",
-                "c10::optional", "c10::asIntArrayRefSlow", "c10::standardizeVectorForUnion",
+                "std::optional", "c10::asIntArrayRefSlow", "c10::standardizeVectorForUnion",
                 "c10::impl::ExcludeDispatchKeyGuard", "c10::impl::ScalarTypeToCPPType", "c10::impl::AnnotatedKernel", "c10::impl::OperatorEntry",
                 "c10::StorageImpl(c10::StorageImpl)", "c10::StorageImpl::operator =",
                 "c10::TensorImpl(c10::TensorImpl)", "c10::TensorImpl::operator =",
@@ -1240,7 +1306,7 @@ public void map(InfoMap infoMap) {
                 "torch::data::samplers::DistributedSampler<>"
             ).purify().pointerTypes("DistributedSampler"))
             .put(new Info(
-                "const c10::optional", "c10::optional"
+                "const std::optional", "std::optional"
             ).pointerTypes("BatchSizeOptional").define())
 
             .put(new Info("torch::data::DataLoaderBase > >,torch::data::Example,std::vector >",
@@ -1287,7 +1353,6 @@ public void map(InfoMap infoMap) {
             {"Tensor", "torch::Tensor", "torch::data::example::NoTarget"}
         }) {
             String example = ex[2] == null ? template("torch::data::Example", ex[1]) : template("torch::data::Example", ex[1], ex[2]);
-            ;
             String p = ex[0];
             String chunkDataReader = template("torch::data::datasets::ChunkDataReader", example, template("std::vector", example));
             String mangledChunkDataReader = mangle(chunkDataReader);
@@ -1309,10 +1374,10 @@ public void map(InfoMap infoMap) {
                     template("std::vector", template("torch::data::datasets::Dataset", mangledJavaStreamDataset, example) + "::ExampleType"),
                     template("std::vector", template("torch::data::datasets::Dataset", mangledJavaStatefulDataset, example) + "::ExampleType")
                 ).pointerTypes(p + "ExampleVector").define())
-                .put(new Info(template("c10::optional", example)).pointerTypes(p + "ExampleOptional").define())
+                .put(new Info(template("std::optional", example)).pointerTypes(p + "ExampleOptional").define())
                 .put(new Info(
-                    template("c10::optional", template("std::vector", example)),
-                    template("c10::optional", mangledChunkDataReader + "::BatchType"),
+                    template("std::optional", template("std::vector", example)),
+                    template("std::optional", mangledChunkDataReader + "::BatchType"),
                     template("torch::data::datasets::ChunkDataset", mangledChunkDataReader, "torch::data::samplers::RandomSampler", "torch::data::samplers::RandomSampler") + "::BatchType",
                     mangledJavaStreamDataset + "::BatchType"
                 ).pointerTypes(p + "ExampleVectorOptional").define())
@@ -1361,15 +1426,15 @@ public void map(InfoMap infoMap) {
                     template("torch::data::datasets::StatefulDataset", template("torch::data::datasets::ChunkDataset", mangledChunkDataReader, "torch::data::samplers::RandomSampler", "torch::data::samplers::RandomSampler"), mangledChunkDataReader + "::BatchType", "size_t")
                 ).pointerTypes("ChunkStateful" + p + "Dataset"))
                 .put(new Info(
-                    template("torch::data::datasets::BatchDataset", template("torch::data::datasets::ChunkDataset", mangledChunkDataReader, "torch::data::samplers::RandomSampler", "torch::data::samplers::RandomSampler"), template("c10::optional", mangledChunkDataReader + "::BatchType"), "size_t"),
+                    template("torch::data::datasets::BatchDataset", template("torch::data::datasets::ChunkDataset", mangledChunkDataReader, "torch::data::samplers::RandomSampler", "torch::data::samplers::RandomSampler"), template("std::optional", mangledChunkDataReader + "::BatchType"), "size_t"),
                     template("torch::data::datasets::BatchDataset", template("torch::data::datasets::ChunkDataset", mangledChunkDataReader, "torch::data::samplers::RandomSampler", "torch::data::samplers::RandomSampler"), template("std::vector", example))
                 ).pointerTypes("Chunk" + p + "BatchDataset"))
                 .put(new Info(
-                    template("torch::data::datasets::BatchDataset", template("torch::data::datasets::SharedBatchDataset", template("torch::data::datasets::ChunkDataset", mangledChunkDataReader, "torch::data::samplers::RandomSampler", "torch::data::samplers::RandomSampler")), template("c10::optional", mangledChunkDataReader + "::BatchType"), "size_t"),
+                    template("torch::data::datasets::BatchDataset", template("torch::data::datasets::SharedBatchDataset", template("torch::data::datasets::ChunkDataset", mangledChunkDataReader, "torch::data::samplers::RandomSampler", "torch::data::samplers::RandomSampler")), template("std::optional", mangledChunkDataReader + "::BatchType"), "size_t"),
                     template("torch::data::datasets::BatchDataset", template("torch::data::datasets::SharedBatchDataset", template("torch::data::datasets::ChunkDataset", mangledChunkDataReader, "torch::data::samplers::RandomSampler", "torch::data::samplers::RandomSampler")), template("torch::data::datasets::ChunkDataset", mangledChunkDataReader, "torch::data::samplers::RandomSampler", "torch::data::samplers::RandomSampler") + "::BatchType", template("torch::data::datasets::ChunkDataset", mangledChunkDataReader, "torch::data::samplers::RandomSampler", "torch::data::samplers::RandomSampler") + "::BatchRequestType")
                 ).pointerTypes("ChunkBatchShared" + p + "BatchDataset"))
                 .put(new Info(
-                    template("torch::data::datasets::BatchDataset", template("torch::data::datasets::SharedBatchDataset", template("torch::data::datasets::ChunkDataset", mangledChunkDataReader, "torch::data::samplers::RandomSampler", "torch::data::samplers::RandomSampler")), template("c10::optional", mangledChunkDataReader + "::BatchType"), "size_t") + "::map"
+                    template("torch::data::datasets::BatchDataset", template("torch::data::datasets::SharedBatchDataset", template("torch::data::datasets::ChunkDataset", mangledChunkDataReader, "torch::data::samplers::RandomSampler", "torch::data::samplers::RandomSampler")), template("std::optional", mangledChunkDataReader + "::BatchType"), "size_t") + "::map"
                 ).javaText("public native @ByVal ChunkMap" + p + "Dataset map(@ByVal " + p + "ExampleStack transform);"))
                 .put(new Info(
                     template("torch::data::datasets::SharedBatchDataset", template("torch::data::datasets::ChunkDataset", mangledChunkDataReader, "torch::data::samplers::RandomSampler", "torch::data::samplers::RandomSampler"))
@@ -1471,7 +1536,7 @@ public void map(InfoMap infoMap) {
                     template("torch::data::DataLoaderBase", mangledJavaStatefulDataset, mangledJavaStatefulDataset + "::BatchType::value_type", mangledJavaStatefulDataset + "::BatchRequestType")
                 ).pointerTypes("JavaStateful" + p + "DataLoaderBase").purify())
                 .put(new Info(
-                    template("torch::data::datasets::BatchDataset", template("javacpp::StatefulDataset", ex[1], ex[2]), template("c10::optional", template("std::vector", example)), "size_t")
+                    template("torch::data::datasets::BatchDataset", template("javacpp::StatefulDataset", ex[1], ex[2]), template("std::optional", template("std::vector", example)), "size_t")
                 ).pointerTypes("JavaStateful" + p + "BatchDataset").purify())
             ;
         }
@@ -1662,9 +1727,7 @@ public void map(InfoMap infoMap) {
             if (i > 1) {
                 mapModule(infoMap, "FractionalMaxPool" + i + "d", "torch::nn::FractionalMaxPoolImpl<" + i + ",torch::nn::FractionalMaxPool" + i + "dImpl>");
             }
-            if (i < 4) {
-                mapModule(infoMap, "LPPool" + i + "d", "torch::nn::LPPoolImpl<" + i + ",torch::nn::LPPool" + i + "dImpl>");
-            }
+            mapModule(infoMap, "LPPool" + i + "d", "torch::nn::LPPoolImpl<" + i + ",torch::nn::LPPool" + i + "dImpl>");
         }
 
         mapModule(infoMap, "RNN", "torch::nn::detail::RNNImplBase");
@@ -1729,9 +1792,9 @@ public void map(InfoMap infoMap) {
                 "public native @ByVal AnyValue any_forward(@Const @ByRef Tensor input1, @Const @ByRef Tensor input2, @Const @ByRef Tensor input3, @Const @ByRef Tensor input4);\n" +
                 "public native @ByVal AnyValue any_forward(@Const @ByRef Tensor input1, @Const @ByRef Tensor input2, @Const @ByRef Tensor input3, @Const @ByRef Tensor input4, @Const @ByRef Tensor input5, @Const @ByRef Tensor input6);\n" +
                 "public native @ByVal AnyValue any_forward(@Const @ByRef Tensor input1, @Const @ByRef Tensor input2, @Const @ByRef Tensor input3, @Const @ByRef Tensor input4, @Const @ByRef Tensor input5, @Const @ByRef Tensor input6, @Const @ByRef Tensor input7, @Const @ByRef Tensor input8);\n" +
-                "public native @ByVal AnyValue any_forward(@Const @ByRef Tensor input, @ByRef(nullValue = \"c10::optional(c10::nullopt)\") @Cast({\"int64_t*\", \"c10::ArrayRef\", \"std::vector&\"}) @StdVector long... output_size);\n" +
-                "public native @ByVal AnyValue any_forward(@Const @ByRef Tensor input, @Const @ByRef(nullValue = \"c10::optional(c10::nullopt)\") LongArrayRefOptional output_size);\n" +
-                "public native @ByVal AnyValue any_forward(@Const @ByRef Tensor input, @Const @ByRef Tensor indices, @Const @ByRef(nullValue = \"c10::optional >(c10::nullopt)\") LongVectorOptional output_size);\n" +
+                "public native @ByVal AnyValue any_forward(@Const @ByRef Tensor input, @ByRef(nullValue = \"std::optional(c10::nullopt)\") @Cast({\"int64_t*\", \"c10::ArrayRef\", \"std::vector&\"}) @StdVector long... output_size);\n" +
+                "public native @ByVal AnyValue any_forward(@Const @ByRef Tensor input, @Const @ByRef(nullValue = \"std::optional(c10::nullopt)\") LongArrayRefOptional output_size);\n" +
+                "public native @ByVal AnyValue any_forward(@Const @ByRef Tensor input, @Const @ByRef Tensor indices, @Const @ByRef(nullValue = \"std::optional >(c10::nullopt)\") LongVectorOptional output_size);\n" +
                 "public native @ByVal AnyValue any_forward(@Const @ByRef Tensor input, @ByVal(nullValue = \"torch::optional >{}\") T_TensorTensor_TOptional hx_opt);\n" +
                 "public native @ByVal AnyValue any_forward(@Const @ByRef Tensor query, @Const @ByRef Tensor key, @Const @ByRef Tensor value, @Const @ByRef(nullValue = \"torch::Tensor{}\") Tensor key_padding_mask, @Cast(\"bool\") boolean need_weights/*=true*/, @Const @ByRef(nullValue = \"torch::Tensor{}\") Tensor attn_mask, @Cast(\"bool\") boolean average_attn_weights/*=true*/);\n"
             ))
@@ -1742,9 +1805,9 @@ public void map(InfoMap infoMap) {
                 "public native @ByVal Tensor forward(@Const @ByRef Tensor input1, @Const @ByRef Tensor input2, @Const @ByRef Tensor input3, @Const @ByRef Tensor input4);\n" +
                 "public native @ByVal Tensor forward(@Const @ByRef Tensor input1, @Const @ByRef Tensor input2, @Const @ByRef Tensor input3, @Const @ByRef Tensor input4, @Const @ByRef Tensor input5, @Const @ByRef Tensor input6);\n" +
                 "public native @ByVal Tensor forward(@Const @ByRef Tensor input1, @Const @ByRef Tensor input2, @Const @ByRef Tensor input3, @Const @ByRef Tensor input4, @Const @ByRef Tensor input5, @Const @ByRef Tensor input6, @Const @ByRef Tensor input7, @Const @ByRef Tensor input8);\n" +
-                "public native @ByVal Tensor forward(@Const @ByRef Tensor input, @ByRef(nullValue = \"c10::optional(c10::nullopt)\") @Cast({\"int64_t*\", \"c10::ArrayRef\", \"std::vector&\"}) @StdVector long... output_size);\n" +
-                "public native @ByVal Tensor forward(@Const @ByRef Tensor input, @Const @ByRef(nullValue = \"c10::optional(c10::nullopt)\") LongArrayRefOptional output_size);\n" +
-                "public native @ByVal Tensor forward(@Const @ByRef Tensor input, @Const @ByRef Tensor indices, @Const @ByRef(nullValue = \"c10::optional >(c10::nullopt)\") LongVectorOptional output_size);\n" +
+                "public native @ByVal Tensor forward(@Const @ByRef Tensor input, @ByRef(nullValue = \"std::optional(c10::nullopt)\") @Cast({\"int64_t*\", \"c10::ArrayRef\", \"std::vector&\"}) @StdVector long... output_size);\n" +
+                "public native @ByVal Tensor forward(@Const @ByRef Tensor input, @Const @ByRef(nullValue = \"std::optional(c10::nullopt)\") LongArrayRefOptional output_size);\n" +
+                "public native @ByVal Tensor forward(@Const @ByRef Tensor input, @Const @ByRef Tensor indices, @Const @ByRef(nullValue = \"std::optional >(c10::nullopt)\") LongVectorOptional output_size);\n" +
                 "public native @ByVal @Name(\"forward>>\") T_TensorT_TensorTensor_T_T forwardT_TensorT_TensorTensor_T_T(@Const @ByRef Tensor input);\n" +
                 "public native @ByVal @Name(\"forward>>\") T_TensorT_TensorTensor_T_T forwardT_TensorT_TensorTensor_T_T(@Const @ByRef Tensor input, @ByVal(nullValue = \"torch::optional >{}\") T_TensorTensor_TOptional hx_opt);\n" +
                 "public native @ByVal @Name(\"forward>\") T_TensorTensor_T forwardT_TensorTensor_T(@Const @ByRef Tensor input);\n" +
@@ -1785,9 +1848,22 @@ public void map(InfoMap infoMap) {
             new PointerInfo("torch::jit::SugaredValue"),
             new PointerInfo("caffe2::serialize::ReadAdapterInterface"),
             new PointerInfo("c10::SafePyObject"),
+            //new PointerInfo("torch::distributed::autograd::SendRpcBackward"), // Not on Windows
+            //new PointerInfo("torch::distributed::autograd::RecvRpcBackward"),
+            new PointerInfo("c10d::Logger"), // Not sure whether this class (and c10d::Reducer) has any use
+            new PointerInfo("torch::distributed::autograd::DistAutogradContext"),
+            new PointerInfo("torch::jit::CompilationUnit"),
+            new PointerInfo("c10d::WorkInfo"),
+            new PointerInfo("c10::impl::PyObject_TorchDispatchMode"),
+            new PointerInfo("c10::LazyValue", "const c10::LazyValue").javaBaseName("Backtrace"),
+            new PointerInfo("c10::SafePyObjectT").javaBaseName("PyObject_TorchDispatchMode")
         }) {
             pi.makeShared(infoMap);
         }
+        // Disambiguate between candidate functions
+        infoMap.put(new Info("torch::dynamo::autograd::CompiledNodeArgs::collect(torch::autograd::Node::Node*)") // Really collect(const std::shared_ptr&)
+                .javaText("public native void collect(@Cast({\"\", \"const std::shared_ptr\"}) @SharedPtr Node t);"))
+               ;
 
 
         //// Classes handled with @UniquePtr
@@ -1829,6 +1905,25 @@ public void map(InfoMap infoMap) {
             .put(new Info("std::unique_ptr").skip()) // A class cannot be handled by both shared and unique ptr
         ;
 
+        // Already defined in gloo
+        infoMap
+            .put(new Info("std::shared_ptr<::gloo::transport::Device>").annotations("@SharedPtr").pointerTypes("org.bytedeco.pytorch.gloo.Device"))
+            .put(new Info("::gloo::transport::UnboundBuffer").pointerTypes("org.bytedeco.pytorch.gloo.UnboundBuffer"))
+            .put(new Info("::gloo::rendezvous::Store").pointerTypes("org.bytedeco.pytorch.gloo.Store"))
+            .put(new Info("::gloo::Context").pointerTypes("org.bytedeco.pytorch.gloo.Context"))
+        ;
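+        // These reuse the classes already generated by the Gloo presets (org.bytedeco.pytorch.gloo.*)
+        // instead of mapping the same types a second time here.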
+
+        // See https://github.com/pytorch/pytorch/issues/127873
+        infoMap
+            .put(new Info("c10d::AllReduceCommHook", "c10d::FP16CompressCommHook").skip())
+        ;
+
+        infoMap.put(new Info("torch::distributed::rpc::SerializedPyObj::SerializedPyObj").javaText(
+            "  public SerializedPyObj(BytePointer payload, TensorVector tensors) { super((Pointer)null); allocate(payload, tensors); }\n" +
+            "  private native void allocate(@Cast({\"\",\"std::string&&\"}) @StdString BytePointer payload, @ByRef(true) TensorVector tensors);\n" +
+            "  public SerializedPyObj(String payload, TensorVector tensors) { super((Pointer)null); allocate(payload, tensors); }\n" +
+            "  private native void allocate(@Cast({\"\",\"std::string&&\"}) @StdString String payload, @ByRef(true) TensorVector tensors);")
+        ); // Parser doesn't add the @Cast
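+        // For illustration, the javaText above yields constructors callable roughly as (hypothetical usage):
+        //   SerializedPyObj obj = new SerializedPyObj("payload", new TensorVector(tensor));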
 
         /* TODO: see how to map these, if needed and meant to be part of API */
         infoMap.put(new Info("c10::MaybeOwnedTraitsGenericImpl >::assignBorrow",
@@ -1854,8 +1949,16 @@ public void map(InfoMap infoMap) {
 
             "torch::autograd::get_current_graph_task_exec_info", // Would need to map GraphTask, NodeExec...too much burden
 
-            "torch::Library::def"
-        ).skip())
+            "torch::Library::def",
+
+            // Could not figure out how to map shared_ptr of std::function
+            "torch::distributed::rpc::RpcAgent::getTypeResolver", "torch::distributed::rpc::RpcAgent::setTypeResolver",
+
+            // The unique constructor takes a std::shared_ptr&&.
+            // How can we pass a shared_ptr as an r-value with the adapter?
+            "torch::distributed::autograd::ThreadLocalDistAutogradContext"
+
+            ).skip())
         ;
 
         //// Prevents the compiler from croaking about "non-standard-layout type".
@@ -1867,6 +1970,12 @@ We need either to put an annotation info on each member, or javaName("@NoOffset
         for (String n : new String[]{
             "c10::DDPLoggingData::strs_map",
             "c10::DDPLoggingData::ints_map",
+            "torch::dynamo::autograd::TensorArgs::inputs",
+            "torch::dynamo::autograd::AutogradCompilerCall::tensor_args",
+            "torch::dynamo::autograd::AutogradCompilerCall::all_size_inputs",
+            "torch::dynamo::autograd::AutogradCompilerCall::dyn_size_inputs",
+            "torch::dynamo::autograd::AutogradCompilerCall::node_calls",
+            "torch::dynamo::autograd::AutogradCompilerCall::default_dyn_type",
             "torch::jit::Object::Property::setter_func",
             "torch::jit::Object::Property::getter_func",
             "torch::jit::Object::Property::name",
@@ -1887,7 +1996,19 @@ We need either to put an annotation info on each member, or javaName("@NoOffset
             "c10::SymbolicShapeMeta::is_channels_last_3d_contiguous_",
             "c10::SymbolicShapeMeta::is_channels_last_",
             "c10::SymbolicShapeMeta::is_channels_last_3d_",
-            "c10::SymbolicShapeMeta::is_non_overlapping_and_dense_"
+            "c10::SymbolicShapeMeta::is_non_overlapping_and_dense_",
+            "c10d::AllreduceOptions::timeout",
+            "c10d::AllreduceOptions::reduceOp",
+            "c10d::AllreduceOptions::sparseIndices",
+            "c10d::C10dLoggingData::strings",
+            "c10d::C10dLoggingData::integers",
+            "c10d::ReduceOptions::timeout",
+            "c10d::ReduceOptions::reduceOp",
+            "c10d::ReduceOptions::rootRank",
+            "c10d::ReduceOptions::rootTensor",
+            "c10d::ReduceScatterOptions::reduceOp",
+            "c10d::ReduceScatterOptions::timeout",
+            "c10d::ReduceScatterOptions::asyncOp"
         }) {
             Info i = infoMap.getFirst(n, false);
             if (i == null) {
@@ -1907,9 +2028,8 @@ We need either to put an annotation info on each member, or javaName("@NoOffset
         //// Pytorch "internal only"
         infoMap.put(new Info(
             "at::RecordFunction::_setAsync", "at::RecordFunction::_setStaticRuntimeOutVariant",
-            "at::Tensor(c10::TensorImpl*)", // Really at::Tensor(c10::intrusive_ptr but the Parser gets the wrong fullname
+            "at::Tensor::Tensor(c10::TensorImpl*)", // "should not be used by end users". Really at::Tensor(c10::intrusive_ptr but the Parser gets the wrong fullname
             "at::Tensor::_set_fw_grad", "at::Tensor::_fw_grad",
-            "at::TensorBase(c10::intrusive_ptr",
             "at::TensorBase::_set_fw_grad", "at::TensorBase::_fw_grad",
             "at::TensorImpl::_set_fw_grad", "at::TensorImpl::_fw_grad",
             "c10::KernelFunction::_equalsBoxedAndUnboxed",
@@ -1924,6 +2044,7 @@ We need either to put an annotation info on each member, or javaName("@NoOffset
             "c10::detail::_str",
             "torch::jit::kJitOnlyOperatorTags",
             "c10::IValue::Tag", // 2.2.0 make IValue::tag public, while IValue::Tag is supposed to be private. Bug ? Check if fixed in next release
+            "c10d::_AllReduceBySumCommHook", //  "Only used internally and not released as a public built-in communication hook."
 
             // Optional args of AOTModelContainerRun.run. Opaque types without apparent use in 2.2.0.
             "AOTInductorStreamOpaque",
@@ -1945,7 +2066,8 @@ We need either to put an annotation info on each member, or javaName("@NoOffset
             "c10::Scalar::isIntegral()",
             "c10::isIntegralType(c10::ScalarType)",
             "at::Tensor::type()",
-            "at::Tensor::is_variable()"
+            "at::Tensor::is_variable()",
+            "c10d::Store::watchKey"
         ).skip());
 
         //// Function returning object by value, and copy constructor was deleted. Any way to get around this ?
@@ -1957,8 +2079,11 @@ We need either to put an annotation info on each member, or javaName("@NoOffset
         ).skip());
 
 
-        //// Deleted operator=. Any way to skip setter only ?
-        infoMap.put(new Info("at::native::RNNDescriptor::dropout_desc_").skip());
+        //// Deleted operator= or related errors. Any way to skip setter only?
+        infoMap.put(new Info(
+            "at::native::RNNDescriptor::dropout_desc_",
+            "torch::dynamo::autograd::AutogradCompilerCall::hooks"
+        ).skip());
 
 
         //// ifdef'd out
@@ -1979,9 +2104,17 @@ We need either to put an annotation info on each member, or javaName("@NoOffset
             "c10::ArrayRef::equals",
             "c10::ArrayRef::equals",
             "c10::ArrayRef::equals",
-            "c10::ArrayRef >::equals"
+            "c10::ArrayRef >::equals"
         ).skip());
 
+        infoMap
+            .put(new Info("torch::distributed::rpc::worker_id_t").valueTypes("short").pointerTypes("ShortPointer"))
+            .put(new Info("torch::distributed::rpc::local_id_t").valueTypes("long").pointerTypes("LongPointer"))
+        ;
+        infoMap
+            .put(new Info("torch::distributed::rpc::MessageTypeFlags").enumerate(false))
+        ;
+
 
         //// Avoiding name clashes by skipping or renaming
         infoMap.put(new Info("c10::ComplexType::get").javaNames("getComplexTypePtr"))
@@ -2006,6 +2139,9 @@ We need either to put an annotation info on each member, or javaName("@NoOffset
                .put(new Info("torch::jit::Module").pointerTypes("JitModule"))
                .put(new Info("torch::jit::Object").pointerTypes("JitObject"))
                .put(new Info("torch::jit::String").pointerTypes("JitString"))
+               .put(new Info("torch::autograd::Error").pointerTypes("AutogradError")) // Clash with c10::Error or Java Error
+               .put(new Info("c10d::Backend").pointerTypes("DistributedBackend").purify())
+               .put(new Info("torch::dynamo::autograd::TensorArg").pointerTypes("DynamoTensorArg")) // Clash with at::TensorArg
         ;
 
 
@@ -2029,13 +2165,17 @@ We need either to put an annotation info on each member, or javaName("@NoOffset
             .put(new Info("torch::detail::SelectiveStr::operator const char*",
                 "torch::detail::SelectiveStr::operator const char*").
                 javaText("public native @Name(\"operator const char*\") @Cast(\"const char*\") BytePointer asBytePointer();"))// Fixes bug where constexpr prevents addition of const in @Name
-            .put(new Info("c10::weak_intrusive_ptr").pointerTypes("WeakStorage"))
 
             .put(new Info("torch::monitor::Stat").pointerTypes("DoubleStat"))
             .put(new Info("torch::monitor::Stat").pointerTypes("LongStat"))
             .put(new Info("torch::jit::generic_graph_node_list").pointerTypes("graph_node_list"))
             .put(new Info("torch::jit::generic_graph_node_list_iterator").pointerTypes("graph_node_list_iterator"))
             .put(new Info("torch::autograd::Function").pointerTypes("FunctionCrossMapLRN2d"))
+            .put(new Info("c10d::CppCommHookInterface >").pointerTypes("ProcessGroupCppCommHookInterface").purify())
+            .put(new Info("c10::SafePyObjectT").pointerTypes("PyObject_TorchDispatchMode"))
+            .put(new Info("c10::SafePyObjectT::SafePyObjectT(c10::SafePyObjectT&&)").skip()) // As of 2.4.0, this constructor doesn't compile because a std::move is missing in SafePyObject move constructor
+            .put(new Info("c10::LazyValue", "const c10::LazyValue").pointerTypes("Backtrace"))
+            .put(new Info("c10::Backtrace").annotations("@SharedPtr(\"const c10::LazyValue\")"))
         ;
 
         //// Instantiation of function templates.
@@ -2159,6 +2299,29 @@ We need either to put an annotation info on each member, or javaName("@NoOffset
             "c10::CUDAHooksRegistry()").skip());
 
 
+        //// Not mapping all custom pytorch errors since there is currently no way to catch them as objects from Java
+        infoMap.put(new Info(
+                "c10::Error",
+                "c10::ivalue::Future::FutureError",
+                "c10::ThrowEnforceNotMet",
+                "torch::jit::ErrorReport",
+                "c10::DistError",
+                "c10::DistBackendError",
+                "c10::DistStoreError",
+                "c10::DistNetworkError",
+                "c10::EnforceFiniteError",
+                "c10::ErrorAlwaysShowCppStacktrace",
+                "c10::IndexError",
+                "c10::LinAlgError",
+                "c10::NotImplementedError",
+                "c10::OnnxfiBackendSystemError",
+                "c10::OutOfMemoryError",
+                "c10::TypeError",
+                "c10::ValueError"
+            ).skip()
+        );
+
+
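Because these error classes are skipped, native failures cannot be caught by their concrete c10 type from Java; they surface through the generic RuntimeException that the JavaCPP-generated wrappers throw. A hedged sketch (the tensor calls are only illustrative; any call that makes libtorch raise a c10::Error behaves the same):

```java
import org.bytedeco.pytorch.Tensor;
import static org.bytedeco.pytorch.global.torch.*;

public class NativeErrorSketch {
    public static void main(String[] args) {
        try {
            // Shape mismatch makes libtorch throw a c10::Error on the native side.
            Tensor a = eye(3);
            Tensor b = eye(4);
            a.matmul(b);
        } catch (RuntimeException e) {
            // The concrete error subtype is lost; only the what() message survives.
            System.err.println("libtorch error: " + e.getMessage());
        }
    }
}
```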
         //// Forward references and opaque classes
         infoMap
             .put(new Info("c10::Argument").pointerTypes("Argument")) // Ref in function_schema_inl.h, defined in function_schema.h
@@ -2211,7 +2374,6 @@ We need either to put an annotation info on each member, or javaName("@NoOffset
             "c10::MultiStreamGuard",
             "c10::OpTableOffsetAndMask",
             "c10::OperatorNameView",
-            "c10::OptionalStreamGuard",
             "c10::PyHandleCache",
             "c10::RegisterOperators::Options::KernelRegistrationConfig",
             "c10::Registry,int>",
@@ -2245,6 +2407,7 @@ We need either to put an annotation info on each member, or javaName("@NoOffset
             "c10::basic_string_view::charIsNotEqual_",
             "c10::basic_string_view::stringViewContainsChar_",
             "c10::basic_string_view::stringViewDoesNotContainChar_",
+            "c10::detail::DictImpl",
             "c10::detail::DictKeyEqualTo",
             "c10::detail::DictKeyHash",
             "c10::detail::ListElementFrom",
@@ -2275,7 +2438,7 @@ We need either to put an annotation info on each member, or javaName("@NoOffset
             "c10::hash >::tuple_hash<0> >",
             "c10::hash >::tuple_hash >",
             "c10::impl::AnnotatedSchema",
-            "c10::impl::ListElementConstReferenceTraits >",
+            "c10::impl::ListElementConstReferenceTraits >",
             "c10::impl::SizesAndStrides::",
             "c10::impl::VirtualGuardImpl",
             "c10::impl::decay_if_not_tensor",
@@ -2311,6 +2474,9 @@ We need either to put an annotation info on each member, or javaName("@NoOffset
             "torch::NoInferSchemaTag",
             "torch::all_of",
             "torch::any_of<>",
+            "torch::autograd::CheckpointValidGuard",
+            "torch::autograd::NodeTask",
+            "torch::autograd::ReadyQueue",
             "torch::autograd::CppFunctionSingleTensorPreHook",
             "torch::autograd::CppFunctionTensorPreHook",
             "torch::autograd::GraphTask",
@@ -2377,9 +2543,12 @@ We need either to put an annotation info on each member, or javaName("@NoOffset
         //// TORCH_API and the like are not honored on Linux but are on Windows. We must skip all public
         //// functions not marked as part of API.
         infoMap.put(new Info(
+            "at::TensorBase::TensorBase(c10::intrusive_ptr)", // "should not be used by end users"
             "at::TensorIteratorBase::apply_perm_and_mul",
             "at::assert_no_partial_overlap(c10::TensorImpl*, c10::TensorImpl*)",
             "at::impl::VariableHooksInterface::_register_hook",
+            "at::native::construct_nested_strides", // Not exported
+            "at::native::construct_offsets", // Not exported
             "at::native::get_numel_from_nested_size_tensor",
             "at::operator <<(std::ostream&, at::Range&)",
             "c10::cuda::CUDACachingAllocator::format_size",
@@ -2391,15 +2560,24 @@ We need either to put an annotation info on each member, or javaName("@NoOffset
             "c10::ivalue::EnumHolder::operator ==", // The friend operator is truly a member of c10::ivalue and not c10::ivalue::EnumHolder
             "c10::ivalue::EnumHolder::unqualifiedClassName",
             "c10::ivalue::Future::operator <<",
+            "c10::merge_primitive", // templated function with some specializations. Will have to figure what instances to create if needed.
             "c10::operator <<(std::ostream&, c10::SourceLocation&)",
+            "c10d::checkForNan", // Not exported
+            "c10d::Logger::operator <<(std::ostream&, const c10d::Logger&)", // No definition
+            "c10d::ProcessGroupGloo::createProcessGroupGloo", // No definition
             "caffe2::serialize::detail::getPadding",
             "torch::autograd::add_node_to_current_graph_task_exec_info",
+            "torch::autograd::set_device(int)",
             "torch::detail::constructSchemaOrName",
+            "torch::distributed::rpc::Message::isShutdown", // No definition
+            "torch::distributed::rpc::Message:isShutdown", // No definition
+            "torch::distributed::rpc::getAllowJitRRefPickle",
+            "torch::distributed::rpm::getAllowJitRRefPickle",
             "torch::jit::ClassDef::create",
             "torch::jit::Code::operator <<(std::ostream&, const torch::jit::Code&)", // The friend operator is truly a member of torch::jit and not torch::jit::Code
+            "torch::jit::Object::Object(c10::QualifiedName, std::shared_ptr, bool)", // No definition
             "torch::profiler::impl::getNvtxStr",
-            "torch::profiler::impl::shapeToStr",
-            "c10::merge_primitive" // templated function with some specializations. Will have to figure what instances to create if needed.
+            "torch::profiler::impl::shapeToStr"
         ).skip());
 
         //// Aliases necessary because of Parser limited namespace resolution
@@ -2410,7 +2588,7 @@ We need either to put an annotation info on each member, or javaName("@NoOffset
         //// Classes kept but passed as generic pointer
                .put(new Info("c10::intrusive_ptr_target", "c10::nullopt", "c10::nullopt_t", "c10::impl::PyObjectSlot",
                    "_object",
-                   "PyObject", "THPObjectPtr", "pyobj_list", "std::chrono::milliseconds", "std::exception_ptr", "std::type_info",
+                   "PyObject", "THPObjectPtr", "pyobj_list", "std::exception_ptr", "std::type_info",
                    "std::pair", "std::stack >", "torch::autograd::utils::DelayWarningHandler",
                    "std::is_same,torch::detail::pack >", "at::cuda::NVRTC", "at::RecordFunctionCallback", "at::StepCallbacks", "THCState", "THHState",
                    "torch::jit::InlinedCallStackPtr", "InlinedCallStackPtr", "torch::jit::ScopePtr", "torch::jit::BackendDebugInfoRecorder",
@@ -2418,16 +2596,16 @@ We need either to put an annotation info on each member, or javaName("@NoOffset
                    "std::shared_ptr", "caffe2::serialize::PyTorchStreamWriter",
                    "c10::detail::DictImpl::dict_map_type::iterator",
                    "std::iterator >",
-                   "c10::optional", "c10::optional",
+                   "std::optional",
                    "c10::intrusive_ptr", "c10::intrusive_ptr",
                    "c10::ArrayRef >",
-                   "torch::jit::DetachedBuffer::UniqueDetachedBuffer", "c10::optional",
-                   "c10::optional::ListOfOptionalElements>", "c10::optional::ListOfOptionalElements>",
-                   "c10::optional >",
-                   "c10::optional",
-                   "c10::optional",
-                   "std::tuple >,c10::optional >,c10::optional >",
-                   "c10::optional >", "c10::optional >",
+                   "torch::jit::DetachedBuffer::UniqueDetachedBuffer", "std::optional",
+                   "std::optional::ListOfOptionalElements>", "std::optional::ListOfOptionalElements>",
+                   "std::optional >",
+                   "std::optional",
+                   "std::optional",
+                   "std::tuple >,std::optional >,std::optional >",
+                   "std::optional >", "std::optional >",
                    "std::vector >", "std::reference_wrapper",
                    "std::enable_shared_from_this",
                    "std::enable_shared_from_this",
@@ -2487,8 +2665,9 @@ We need either to put an annotation info on each member, or javaName("@NoOffset
                 "caffe2::TypeMeta::Delete", "std::function").pointerTypes("PointerConsumer").valueTypes("PointerConsumer").skip())
             .put(new Info("void* (*)()", "caffe2::TypeMeta::New").pointerTypes("PointerSupplier").valueTypes("PointerSupplier").skip())
             .put(new Info("std::function").pointerTypes("Func"))
-            .put(new Info("std::function").pointerTypes("StringSupplier"))
+            .put(new Info("std::function", "std::function").pointerTypes("StringSupplier"))
             .put(new Info("std::function").pointerTypes("StringConsumer"))
+            .put(new Info("std::function").pointerTypes("StringMapper"))
             .put(new Info("std::function",
                 "std::function").pointerTypes("DDPLogger"))
             .put(new Info("std::function").pointerTypes("TypeMapper"))
@@ -2531,24 +2710,23 @@ We need either to put an annotation info on each member, or javaName("@NoOffset
                 "c10::TypePtr (*)(const std::string&)",
                 "c10::Type::SingletonOrSharedTypePtr (*)(const std::string&)"
             ).pointerTypes("TypeParser").skip())
-            .put(new Info("std::function(const c10::Type&)>").pointerTypes("TypePrinter"))
+            .put(new Info("std::function(const c10::Type&)>").pointerTypes("TypePrinter"))
             .put(new Info("void (*)(void*, size_t)", "c10::PlacementDtor", "caffe2::TypeMeta::PlacementNew", "caffe2::TypeMeta::PlacementDelete").pointerTypes("PlacementConsumer").valueTypes("PlacementConsumer").skip())
             .put(new Info("void (*)(const void*, void*, size_t)", "caffe2::TypeMeta::Copy").pointerTypes("PlacementCopier").valueTypes("PlacementCopier").skip())
             .put(new Info("torch::jit::Operation (*)(const torch::jit::Node*)", "torch::jit::OperationCreator").pointerTypes("OperationCreator").valueTypes("OperationCreator").skip())
             .put(new Info("c10::ApproximateClockToUnixTimeConverter::makeConverter").skip()) // Function returning a std::function
             .put(new Info("std::function(const at::StrongTypePtr&,c10::IValue)>", "torch::jit::ObjLoader").pointerTypes("ObjLoader"))
+            .put(new Info("std::function)>", "std::function", "torch::distributed::autograd::DistAutogradContext::GradCallback").pointerTypes("GradCallback"))
+            .put(new Info("std::function >()>", "std::function").pointerTypes("StackTraceFetcher"))
 
             //// std::function passed as generic pointer because are returned by some methods.
             .put(new Info("std::function", "torch::jit::BackendMetaPtr", "std::function&)>")
                 .pointerTypes("Pointer").cast())
-
-
         ;
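Each pointerTypes name above refers to a hand-written FunctionPointer subclass shipped with the presets. As a reminder of the pattern (package, class name and exact signature below are assumptions; the real classes may differ), a callback class for a std::function returning a string looks roughly like:

```java
import org.bytedeco.javacpp.FunctionPointer;
import org.bytedeco.javacpp.Loader;
import org.bytedeco.javacpp.Pointer;
import org.bytedeco.javacpp.annotation.Properties;
import org.bytedeco.javacpp.annotation.StdString;

// Hypothetical sketch of the class an Info like
//   new Info("std::function<std::string()>").pointerTypes("StringSupplier")
// maps onto; Java code subclasses it and overrides call() to implement the callback.
@Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
public class StringSupplier extends FunctionPointer {
    static { Loader.load(); }
    public StringSupplier(Pointer p) { super(p); }
    protected StringSupplier() { allocate(); }
    private native void allocate();
    public native @StdString String call();
}
```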
 
         infoMap.put(new Info("caffe2::TypeMeta::deleteFn").javaText("public native @NoException(true) PointerConsumer deleteFn();")); // Parser picks up the wrong Delete
 
-        infoMap.put(new Info("c10::VaryingShape::merge").skip()); // https://github.com/pytorch/pytorch/issues/123248, waiting for the fix in 2.3.1 or 2.4
-
         //// Different C++ API between platforms
         // This will produce different Java codes, but as long as the differences only concern
         // JavaCPP annotations, we don't care.
@@ -2564,7 +2742,7 @@ We need either to put an annotation info on each member, or javaName("@NoOffset
         }
     }
 
-    private static String template(String t, String... args) {
+    static String template(String t, String... args) {
         StringBuilder sb = new StringBuilder(t);
         sb.append('<');
         for (int i = 0; i < args.length; i++) {
@@ -2737,21 +2915,21 @@ void mapList(InfoMap infoMap) {
                        template("c10::impl::ListIterator", t, "c10::detail::ListImpl::list_type::iterator"))
                        .pointerTypes(baseJavaName + "ListIterator"))
                    .put(new Info(template("c10::List", t) + "::value_type").valueTypes(elementValueType))
-                   .put(new Info(template("operator std::conditional_t", template("std::is_reference", template("c10::detail::ivalue_to_const_ref_overload_return", t) + "::type") + "::value", "const " + t + "&", t) + "()")
+                   .put(new Info(template("operator std::conditional_t", template("std::is_reference_v", template("c10::detail::ivalue_to_const_ref_overload_return", t) + "::type"), "const " + t + "&", t) + "()")
                        .javaNames("get" + baseJavaName))
                    .put(new Info(template("c10::List", t) + "::size_type").valueTypes("long"))
                    .put(new Info(template("c10::impl::ListElementReference", t, "c10::detail::ListImpl::list_type::iterator") + "::" + template("swap", t, "c10::detail::ListImpl::list_type::iterator"))
                        .javaNames("swap").friendly())
                    .put(new Info(template("c10::List", t) + "::get(" + template("c10::List", t) + "::size_type)").javaText("public native " + elementValueType +" get(long pos);"))
             ;
+            Info listElementRefInfo = new Info(template("std::conditional_t", template("std::is_reference_v", template("c10::detail::ivalue_to_const_ref_overload_return", t) + "::type"), "const " + t + "&", t));
+            listElementRefInfo.pointerTypes(itPointerType).valueTypes(elementValueType);
             infoMap.put(new Info(template("c10::List", t) + "::operator []").skip()) // Returns an internal_reference_type by value, which is a ListElementReference, whose copy constructor is disabled.
                    .put(new Info(
                        template("c10::impl::ListIterator", t, "c10::detail::ListImpl::list_type::iterator") + "::operator []",
                        template("c10::impl::ListIterator", t, "c10::detail::ListImpl::list_type::iterator") + "::operator *")
                        .skip()) // Returns ListElementReference by value, and ListElementReference has copy constructor disabled.
-                   .put(new Info(template("std::conditional_t", template("std::is_reference", template("c10::detail::ivalue_to_const_ref_overload_return", t) + "::type") + "::value", "const " + t + "&", t))
-                       .pointerTypes(itPointerType).valueTypes(elementValueType))
-
+                   .put(listElementRefInfo)
                    .put(new Info(template("c10::impl::swap", t, "typename c10::detail::ListImpl::list_type::iterator")).javaNames("swap"))
             ;
 
@@ -2801,14 +2979,21 @@ PointerInfo virtualize() {
 
         void makeShared(InfoMap infoMap) {
             // See issue #670
-            String[] cppNames = new String[argumentNames.length + otherCppNames.length];
+            String[] cppNamesStrong = new String[argumentNames.length + otherCppNames.length];
+            String[] cppNamesWeak = new String[argumentNames.length];
             int i = 0;
-            for (String n : argumentNames) cppNames[i++] = template("std::shared_ptr", n);
-            for (String n : otherCppNames) cppNames[i++] = n;
+            int j = 0;
+            for (String n : argumentNames) {
+                cppNamesStrong[i++] = template("std::shared_ptr", n);
+                cppNamesWeak[j++] = template("std::weak_ptr", n);
+            }
+            for (String n : otherCppNames) cppNamesStrong[i++] = n;
             // Specifying the parameter of the annotation allows to disambiguate cases where a class can store either a
             // std::shared_ptr or std::shared_ptr (like CompilationUnit)
             // .valueTypes("@Cast(\"const torch::jit::CompilationUnit*\") CompilationUnit") seems to work too but for obscure reason
-            infoMap.put(new Info(cppNames).annotations("@SharedPtr(\"" + argumentNames[0] + "\")").pointerTypes(javaBaseName));
+            infoMap.put(new Info(cppNamesStrong).annotations("@SharedPtr(\"" + argumentNames[0] + "\")").pointerTypes(javaBaseName));
+            infoMap.put(new Info(cppNamesWeak).annotations("@WeakPtr(\"" + argumentNames[0] + "\")").pointerTypes(javaBaseName));
+
 
             // Also annotate constructor of target class to ensure only one shared_ptr exists for each instance
             String n = argumentNames[0].substring(argumentNames[0].lastIndexOf(' ') + 1); // Remove possible const
@@ -2824,6 +3009,32 @@ void makeShared(InfoMap infoMap) {
             infoMap.put(new Info(n + n.substring(n.lastIndexOf("::"))).annotations("@SharedPtr", "@Name(\"std::make_shared<" + n2 + ">\")"));
         }
 
+        void makeIntrusive(InfoMap infoMap) {
+            // See issue #670
+            String[] cppNames = new String[argumentNames.length*2 + otherCppNames.length];
+            int i = 0;
+            for (String n : argumentNames) {
+                cppNames[i++] = template("c10::intrusive_ptr", n);
+                cppNames[i++] = template("c10::weak_intrusive_ptr", n);
+            }
+            for (String n : otherCppNames) cppNames[i++] = n;
+            // Specifying the parameter of the annotation allows disambiguating cases where a class can store either a
+            // std::shared_ptr or std::shared_ptr (like CompilationUnit)
+            // .valueTypes("@Cast(\"const torch::jit::CompilationUnit*\") CompilationUnit") seems to work too, but for obscure reasons
+            Info info = new Info(cppNames).annotations("@IntrusivePtr(\"" + argumentNames[0] + "\")").pointerTypes(javaBaseName);
+            info.valueTypes("@Cast({\"\", \"" + cppNames[0] + "&\"}) " + javaBaseName); // Disambiguate between & and * cast operator for IValue constructors and others
+            infoMap.put(info);
+
+            // Also annotate the constructor of the target class to ensure only one intrusive_ptr exists for each instance
+            String n = argumentNames[0].substring(argumentNames[0].lastIndexOf(' ') + 1); // Remove possible const
+            String n2 = n;
+            if (virtualize) {
+                n2 = mangle(n2);
+                infoMap.put(new Info(n).virtualize());
+            }
+            infoMap.put(new Info(n + n.substring(n.lastIndexOf("::"))).annotations("@IntrusivePtr", "@Name(\"c10::make_intrusive<" + n2 + ">\")"));
+        }
+
         void makeUnique(InfoMap infoMap) {
             // The default info in infoMap is not enough for classes that are elements for containers like vector>
             String[] cppNames = new String[argumentNames.length + otherCppNames.length];
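To make the new makeIntrusive helper concrete, here is roughly what a call like new PointerInfo("c10d::Work").makeIntrusive(infoMap) registers, assuming the default javaBaseName "Work" (a sketch derived from the code above, not a verbatim expansion):

```java
import org.bytedeco.javacpp.tools.Info;
import org.bytedeco.javacpp.tools.InfoMap;

public class MakeIntrusiveSketch {
    static void map(InfoMap infoMap) {
        // Both strong and weak intrusive pointers map to the same Java class, and the
        // valueTypes cast disambiguates the & and * conversion operators of the adapter.
        infoMap.put(new Info("c10::intrusive_ptr<c10d::Work>", "c10::weak_intrusive_ptr<c10d::Work>")
                .annotations("@IntrusivePtr(\"c10d::Work\")")
                .pointerTypes("Work")
                .valueTypes("@Cast({\"\", \"c10::intrusive_ptr<c10d::Work>&\"}) Work"));
        // The constructor is annotated too, so each Java instance is created through
        // c10::make_intrusive and owns exactly one reference count.
        infoMap.put(new Info("c10d::Work::Work")
                .annotations("@IntrusivePtr", "@Name(\"c10::make_intrusive<c10d::Work>\")"));
    }
}
```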
diff --git a/pytorch/src/main/java/org/bytedeco/pytorch/presets/torch_cuda.java b/pytorch/src/main/java/org/bytedeco/pytorch/presets/torch_cuda.java
index c5e42eae804..7fea5a0c6ca 100644
--- a/pytorch/src/main/java/org/bytedeco/pytorch/presets/torch_cuda.java
+++ b/pytorch/src/main/java/org/bytedeco/pytorch/presets/torch_cuda.java
@@ -21,38 +21,51 @@
  */
 package org.bytedeco.pytorch.presets;
 
+import org.bytedeco.cuda.presets.cudnn;
+import org.bytedeco.cuda.presets.cupti;
+import org.bytedeco.cuda.presets.cusolver;
+import org.bytedeco.cuda.presets.cusparse;
 import org.bytedeco.javacpp.ClassProperties;
 import org.bytedeco.javacpp.LoadEnabled;
 import org.bytedeco.javacpp.annotation.*;
 import org.bytedeco.javacpp.tools.Info;
 import org.bytedeco.javacpp.tools.InfoMap;
 import org.bytedeco.javacpp.tools.InfoMapper;
+import org.bytedeco.pytorch.presets.torch.PointerInfo;
 
 /**
  * @author Hervé Guillemet
  */
 @Properties(
-    inherit = torch.class,
+    inherit = { torch.class, cudnn.class, cusparse.class, cusolver.class, cupti.class },
     value = {
         @Platform(
             extension = "-gpu",
+            // define = "USE_C10D_NCCL", // Not on Windows
             include = {
-                "ATen/cudnn/Descriptors.h",
                 "ATen/cudnn/Types.h",
-                "c10/cuda/CUDAGuard.h",
+                "ATen/cudnn/Descriptors.h",
+                "ATen/cuda/CUDAEvent.h",
                 "torch/csrc/inductor/aoti_runner/model_container_runner_cuda.h",
 
                 // For inclusion in JNI only, not parsed
                 "ATen/cuda/CUDAGeneratorImpl.h",
             },
-            link = { "cudart", "cusparse", "cudnn" },
-            linkpath = {
-                "C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.3/lib/x64/",
-                "/usr/local/cuda-12.3/lib64/",
-                "/usr/local/cuda/lib64/",
-                "/usr/lib64/"
+            exclude = {
+                "", // pytorch includes cublas_v2, which is not compatible with cublas included from inherited cudnn presets
+                "" // causes #warning
             }
         ),
+        @Platform(
+            value = "linux",
+            extension = "-gpu",
+            link = { "c10", "torch" , "c10_cuda", "torch_cuda", "torch_cuda_linalg" } // cuda_linalg built as separate lib on linux only
+        ),
+        @Platform(
+            value = "windows",
+            extension = "-gpu",
+            link = { "c10", "torch" , "c10_cuda", "torch_cuda" }
+        )
     },
     target = "org.bytedeco.pytorch.cuda",
     global = "org.bytedeco.pytorch.global.torch_cuda"
@@ -72,15 +85,7 @@ public void map(InfoMap infoMap) {
         torch.sharedMap(infoMap);
 
         infoMap
-            .put(new Info("basic/containers").cppTypes("c10::optional"))
-
             .put(new Info().enumerate().friendly())
-            .put(new Info().javaText("import org.bytedeco.pytorch.*;"))
-            .put(new Info().javaText("import org.bytedeco.pytorch.cuda.functions.*;"))
-            .put(new Info().javaText("import org.bytedeco.pytorch.Error;"))
-            .put(new Info().javaText("import org.bytedeco.pytorch.global.torch.DeviceType;"))
-            .put(new Info().javaText("import org.bytedeco.pytorch.global.torch.ScalarType;"))
-            .put(new Info().javaText("import org.bytedeco.pytorch.global.torch.MemoryFormat;"))
             .put(new Info().javaText("import org.bytedeco.pytorch.Allocator;"))
 
             .put(new Info().javaText(
@@ -92,6 +97,9 @@ public void map(InfoMap infoMap) {
                 "at::CUDAGeneratorImpl"
             ).skip())
 
+            //// std::unordered_map
+            ////.put(new Info("std::unordered_map >").pointerTypes("StringNCCLCommMap").define())
+            //.put(new Info("std::unordered_map >").skip()) // See getNcclErrorDetailStr below. Not on Windows
 
             //// std::unordered_set
             .put(new Info("std::unordered_set").pointerTypes("PointerSet").define())
@@ -107,15 +115,28 @@ public void map(InfoMap infoMap) {
 
             //// std::array
             .put(new Info("std::array", "c10::cuda::CUDACachingAllocator::StatArray").cast().pointerTypes("Stat"))
+            .put(new Info("std::array").cast().pointerTypes("PointerPointer"))
+        ;
+
+        //// Intrusive pointers
+        /* Not on Windows
+        for (PointerInfo pi : new PointerInfo[]{
+            new PointerInfo("c10d::ProcessGroupNCCL::Options"),
+            new PointerInfo("c10d::intra_node_comm::IntraNodeComm")
+        }) {
+            pi.makeIntrusive(infoMap);
+        }
+         */
 
-            //// Function pointers
+        //// Function pointers
+        infoMap
             .put(new Info("std::function").pointerTypes("AllocatorTraceTracker"))
             .put(new Info("std::function").pointerTypes("OutOfMemoryObserver"))
             .put(new Info("std::function").pointerTypes("StreamFilter"))
 
-            // Function pointer returning shared_ptr don't compile on windows
-            // "D:\a\javacpp-presets\javacpp-presets\pytorch\target\native\org\bytedeco\pytorch\windows-x86_64\jnitorch.cpp(98904): error C2526: 'JavaCPP_org_bytedeco_pytorch_functions_GatheredContextSupplier_allocate_callback': C linkage function cannot return C++ class 'std::shared_ptr'"
-            //.put(new Info("std::shared_ptr (*)()", "c10::cuda::CUDACachingAllocator::CreateContextFn").pointerTypes("GatheredContextSupplier").valueTypes("GatheredContextSupplier").skip())
+        // Function pointers returning shared_ptr don't compile on Windows
+        // "jnitorch.cpp(98904): error C2526: 'JavaCPP_org_bytedeco_pytorch_functions_GatheredContextSupplier_allocate_callback': C linkage function cannot return C++ class 'std::shared_ptr'"
+        //.put(new Info("std::shared_ptr (*)()", "c10::cuda::CUDACachingAllocator::CreateContextFn").pointerTypes("GatheredContextSupplier").valueTypes("GatheredContextSupplier").skip())
         ;
 
         //// Avoiding name clashes by skipping or renaming
@@ -131,31 +152,36 @@ public void map(InfoMap infoMap) {
             infoMap.put(new Info("c10::cuda::CUDACachingAllocator::" + s).skip());
         }
 
-        //// Already defined in main torch
+        //// Parsed in main torch
+        // We need to help namespace resolution and to redefine names of template instances.
         infoMap
-            .put(new Info("c10::Stream").pointerTypes("Stream"))
-            .put(new Info("c10::optional").pointerTypes("StreamOptional"))
-            .put(new Info("c10::optional").pointerTypes("DeviceOptional"))
-            .put(new Info("c10::Device").pointerTypes("Device"))
-            .put(new Info("c10::impl::PyInterpreter").pointerTypes("PyInterpreter"))
+            .put(new Info("c10::Stream"))
+            .put(new Info("std::optional").pointerTypes("StreamOptional"))
+            .put(new Info("std::optional", "std::optional", "optional").pointerTypes("DeviceOptional"))
+            .put(new Info("c10::Device"))
+            .put(new Info("c10::impl::PyInterpreter"))
             .put(new Info("std::tuple").pointerTypes("T_IntInt_T"))
-            .put(new Info("c10::optional").pointerTypes("ByteOptional"))
+            .put(new Info("std::optional").pointerTypes("ByteOptional"))
             .put(new Info("c10::IntArrayRef", "at::IntArrayRef").pointerTypes("LongArrayRef"))
             .put(new Info("std::vector").pointerTypes("DataPtrVector"))
-            .put(new Info("c10::Allocator").pointerTypes("Allocator"))
+            .put(new Info("c10::Allocator"))
+            .put(new Info("c10d::Work"))
+            .put(new Info("c10d::Store", "c10d::ScatterOptions", "c10d::ReduceScatterOptions", "c10d::AllToAllOptions", "c10d::BarrierOptions", "c10d::AllreduceCoalescedOptions"))
+            .put(new Info("c10d::BroadcastOptions", "c10d::ReduceOptions", "c10d::AllreduceOptions", "c10d::AllgatherOptions", "c10d::GatherOptions"))
             .put(new Info("CUDAContextLight.h").linePatterns("struct Allocator;").skip()) // Prevent regeneration of Allocator class in cuda package
+            .put(new Info("c10d::Backend::Options").pointerTypes("DistributedBackend.Options"))
 
-            .put(new Info("c10::DeviceIndex").valueTypes("byte").pointerTypes("BytePointer", "ByteBuffer", "byte[]"))
+            .put(new Info("c10::DeviceIndex", "at::DeviceIndex").valueTypes("byte").pointerTypes("BytePointer", "ByteBuffer", "byte[]"))
             .put(new Info("c10::StreamId").valueTypes("long"))
             .put(new Info("c10::cuda::CaptureStatus").valueTypes("int").cast().skip()) // Enum doesn't parse
             .put(new Info("std::pair,std::vector >").pointerTypes("DeviceAssertionsDataVectorCUDAKernelLaunchInfoVectorPair").define())
-            .put(new Info("c10::CuDNNError", "c10::CUDAError").purify())
             .put(new Info("c10::impl::GPUTrace::gpuTraceState").skip())
             .put(new Info("at::native::RNNDescriptor::dropout_desc_").skip())
             .put(new Info("at::native::operator <<(std::ostream&, at::native::TensorDescriptor&)",
                 "at::native::operator <<(std::ostream&, at::native::FilterDescriptor&)",
                 "at::native::cudnnTypeToString", "at::native::getCudnnDataType", "at::native::cudnn_version",
                 "c10::cuda::c10_retrieve_device_side_assertion_info").skip())
+            .put(new Info("std::function)>", "std::function", "std::function").pointerTypes("WorkInfoConsumer"))
 
             .put(new Info("c10::cuda::CUDACachingAllocator::CheckpointDelta").immutable()) // at::DataPtr is not constructible
 
@@ -176,42 +202,80 @@ public void map(InfoMap infoMap) {
 
                 "std::shared_ptr (*)()", "c10::cuda::CUDACachingAllocator::CreateContextFn"  // See comment for GatheredContextSupplier
 
-            ).cast().pointerTypes("Pointer"))
-
-            //// CUDA types
-            .put(new Info( // Struct
-                "cudaDeviceProp"
-            ).pointerTypes("Pointer"))
-            .put(new Info( // Pointers to opaque structs
-                "cudaStream_t", "cusparseHandle_t", "cublasHandle_t", "cusolverDnHandle_t", "cudnnHandle_t", "cudaEvent_t",
-                "cublasLtHandle_t"
-            ).valueTypes("Pointer").cast())
-            .put(new Info( // Enums
+                // "std::enable_shared_from_this" // Not on Windows
+
+            ).cast().pointerTypes("Pointer"));
+        new PointerInfo("c10d::Store").makeIntrusive(infoMap);
+        new PointerInfo("c10d::Work").makeIntrusive(infoMap);
+
+
+        //// CUDA types
+        infoMap
+            .put(new Info( // Enums; the cuda presets don't use Info.enumerate
                 "cudnnActivationMode_t", "cudnnLossNormalizationMode_t", "cudnnRNNInputMode_t", "cudnnRNNDataLayout_t",
                 "cudnnDirectionMode_t", "cudnnRNNMode_t", "cudaStreamCaptureMode", "cudnnDataType_t", "cudnnNanPropagation_t",
                 "cusparseStatus_t", "cusolverStatus_t", "cudnnRNNAlgo_t", "cudnnNanPropagation_t", "cublasStatus_t", "cudaError_t",
-                "cudaMemcpyKind"
+                "cudaMemcpyKind", "ncclResult_t", "ncclDataType_t", "ncclRedOp_t", "ncclScalarResidence_t"
             ).valueTypes("int").cast())
         ;
 
         new torch.ArrayInfo("CUDAStream").elementTypes("c10::cuda::CUDAStream").mapArrayRef(infoMap);
 
-        new torch.PointerInfo("c10::cuda::CUDACachingAllocator::AllocatorState").makeShared(infoMap);
+        new PointerInfo("c10::cuda::CUDACachingAllocator::AllocatorState").makeShared(infoMap);
+        //new PointerInfo("c10d::NCCLComm").makeShared(infoMap); // See getNcclErrorDetailStr below
 
         // Classes that are not part of the API (no TORCH_API nor C10_API) and are not argument nor return type of API methods.
         infoMap.put(new Info(
             "c10::cuda::OptionalCUDAGuard",
             "c10::cuda::OptionalCUDAStreamGuard",
             "c10::cuda::impl::CUDAGuardImpl",
-            "c10::FreeMemoryCallback" // in API, but useless as long as we don't map FreeCudaMemoryCallbacksRegistry,
+            "c10::FreeMemoryCallback", // in API, but useless as long as we don't map FreeCudaMemoryCallbacksRegistry,
+            "AT_DISALLOW_COPY_AND_ASSIGN",
+            "c10d::NCCLComm", "std::shared_ptr" // See getNcclErrorDetailStr below
         ).skip())
         ;
 
-        infoMap.put(new Info("USE_CUDNN_RNN_V8_API").define()); // Using CuDNN 8.9.7 or more recent
+        // compile-time only constexpr
+        infoMap.put(new Info("c10::cuda::max_compile_time_stream_priorities").skip());
+
+        infoMap
+            .put(new Info("USE_CUDNN_RNN_V8_API").define()) // Using CuDNN 8.9.7 or more recent
+            .put(new Info("defined(IS_NCCL_EXP) && defined(NCCL_COMM_DUMP)").define(false))
+        ;
 
         //// Different C++ API between platforms
         infoMap
             .put(new Info("at::cuda::getCurrentCUDABlasLtHandle").skip()) // No cublas lt with Microsoft compiler
         ;
+
+        //// Don't map all custom pytorch errors since there is currently no way to catch them as objects from Java
+        infoMap.put(new Info(
+            "c10::CUDAError",
+            "c10::CuDNNError"
+        ).skip());
+
+        //// Not part of public API or not exposed by libtorch
+        infoMap
+            .put(new Info(
+                "c10d::DumpPipe",
+                "c10d::nccl_use_nonblocking",
+                "c10d::getNcclErrorDetailStr", // Prevents c10d::NCCLComm to be mapped
+                "c10d::ncclGetErrorWithVersion",
+                "c10d::nccl_nonblocking_timeout",
+                "c10d::getNcclVersion",
+                "c10d::ProcessGroupNCCL::operator <<"
+                ).skip())
+
+        ;
+
+        //// Help namespace resolution
+        infoMap
+            .put(new Info("std::optional", "c10d::WorkInfo"))
+        ;
+
+        //// No way to map
+        infoMap
+            .put(new Info("std::optional >").skip())
+        ;
     }
 }
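Since torch_cuda now inherits the cudnn, cusparse, cusolver and cupti presets instead of hard-coding link paths, loading the GPU bindings resolves those CUDA libraries through their own presets. A minimal usage sketch, assuming the pytorch -gpu artifacts and the corresponding CUDA redistributables are on the classpath:

```java
import org.bytedeco.javacpp.Loader;

public class LoadGpuSketch {
    public static void main(String[] args) {
        // Loading the global class pulls in libtorch_cuda together with the CUDA
        // libraries provided by the inherited presets (cuDNN, cuSPARSE, cuSOLVER, CUPTI).
        Loader.load(org.bytedeco.pytorch.global.torch_cuda.class);
        System.out.println("CUDA-enabled libtorch loaded");
    }
}
```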
diff --git a/pytorch/src/main/java9/module-info.java b/pytorch/src/main/java9/module-info.java
index 933f01a8cbe..eb1b77fa149 100644
--- a/pytorch/src/main/java9/module-info.java
+++ b/pytorch/src/main/java9/module-info.java
@@ -3,7 +3,7 @@
   requires transitive org.bytedeco.openblas;
   exports org.bytedeco.pytorch.global;
   exports org.bytedeco.pytorch.presets;
-  exports org.bytedeco.pytorch.functions;
   exports org.bytedeco.pytorch.cuda;
+  exports org.bytedeco.pytorch.gloo;
   exports org.bytedeco.pytorch;
 }
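For applications on the module path, the practical effect is only a change in which packages are readable: org.bytedeco.pytorch.gloo is now exported, and org.bytedeco.pytorch.functions is no longer exported. A hypothetical consumer module (the module name is an assumption) keeps requiring the same module:

```java
module com.example.trainer {
    requires org.bytedeco.pytorch;
}
```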
diff --git a/pytorch/src/main/resources/org/bytedeco/pytorch/include/datasets.h b/pytorch/src/main/resources/org/bytedeco/pytorch/include/datasets.h
index f26b8630588..cee41b00521 100644
--- a/pytorch/src/main/resources/org/bytedeco/pytorch/include/datasets.h
+++ b/pytorch/src/main/resources/org/bytedeco/pytorch/include/datasets.h
@@ -15,7 +15,7 @@ namespace javacpp {
  struct Dataset : public torch::data::datasets::Dataset, torch::data::Example> {
    virtual ~Dataset() = default;
    virtual torch::data::Example get(size_t index) override = 0;
-   virtual c10::optional size() const override = 0;
+   virtual std::optional size() const override = 0;
    virtual std::vector> get_batch(c10::ArrayRef indices) override {
      return torch::data::datasets::Dataset, torch::data::Example>::get_batch(indices);
    };
@@ -27,7 +27,7 @@ namespace javacpp {
 template 
 struct StreamDataset : public torch::data::datasets::BatchDataset, std::vector>, size_t> {
     virtual ~StreamDataset() = default;
-    virtual c10::optional size() const override = 0;
+    virtual std::optional size() const override = 0;
     virtual std::vector> get_batch(size_t size) override = 0;
 };
 
@@ -37,8 +37,8 @@ struct StreamDataset : public torch::data::datasets::BatchDataset
 struct StatefulDataset : public torch::data::datasets::StatefulDataset, std::vector>, size_t> {
   virtual ~StatefulDataset() = default;
-  virtual c10::optional size() const override = 0;
-  virtual c10::optional>> get_batch(size_t size) override = 0;
+  virtual std::optional size() const override = 0;
+  virtual std::optional>> get_batch(size_t size) override = 0;
   virtual void reset() override = 0;
   virtual void save(torch::serialize::OutputArchive& archive) const override = 0;
   virtual void load(torch::serialize::InputArchive& archive) override = 0;
diff --git a/pytorch/src/main/resources/org/bytedeco/pytorch/include/pytorch_adapters.h b/pytorch/src/main/resources/org/bytedeco/pytorch/include/pytorch_adapters.h
index ab69f0072d7..712205297ba 100644
--- a/pytorch/src/main/resources/org/bytedeco/pytorch/include/pytorch_adapters.h
+++ b/pytorch/src/main/resources/org/bytedeco/pytorch/include/pytorch_adapters.h
@@ -24,4 +24,80 @@ class JavaCPP_hidden StringViewAdapter final {
         c10::string_view sv;
         c10::string_view &svRef;
         void *owner = NULL;
+};
+
+template> class IntrusivePtrAdapter {
+public:
+    typedef c10::intrusive_ptr I;
+    IntrusivePtrAdapter(const T* ptr, size_t size, void* owner) : ptr((T*)ptr), size(size), owner(owner),
+            intrusivePtr2(owner != NULL && owner != ptr ? *(I*)owner : I::reclaim((T*)ptr)), intrusivePtr(intrusivePtr2) { }
+    IntrusivePtrAdapter(const I& intrusivePtr) : ptr(0), size(0), owner(0), intrusivePtr2(intrusivePtr), intrusivePtr(intrusivePtr2) { }
+    IntrusivePtrAdapter(      I& intrusivePtr) : ptr(0), size(0), owner(0), intrusivePtr(intrusivePtr) { }
+    IntrusivePtrAdapter(const I* intrusivePtr) : ptr(0), size(0), owner(0), intrusivePtr(*(I*)intrusivePtr) { }
+    IntrusivePtrAdapter(c10::weak_intrusive_ptr wp) : ptr(0), size(0), owner(0), intrusivePtr2(wp.lock()), intrusivePtr(intrusivePtr2) { }
+
+    void assign(T* ptr, size_t size, void* owner) {
+        this->ptr = ptr;
+        this->size = size;
+        this->owner = owner;
+        this->intrusivePtr = owner != NULL && owner != ptr ? *(I*)owner : I((T*)ptr);
+    }
+    static void deallocate(void* owner) { delete (I*)owner; }
+
+    operator T*() {
+        if (ptr == NULL) ptr = intrusivePtr.get();
+        return ptr;
+    }
+    operator T&() {
+        if (ptr == NULL) ptr = intrusivePtr.get();
+        return *ptr;
+    }
+    /* Necessary because, without it, assigning an adapter to an optional will
+     * pick up the T*() conversion operator which will make the type checking
+     * in optional fail for some reason. */
+    operator std::optional() {
+        return std::optional(intrusivePtr);
+    }
+
+    operator I&() { return intrusivePtr; }
+    operator I*() { return &intrusivePtr; }
+    T* ptr;
+    size_t size;
+    void* owner;
+    I intrusivePtr2;
+    I& intrusivePtr;
+};
+
+template class WeakPtrAdapter {
+public:
+    typedef std::shared_ptr S;
+    typedef std::weak_ptr W;
+    WeakPtrAdapter(const T* ptr, size_t size, void* owner) : ptr((T*)ptr), size(size), owner(owner),
+            sharedPtr2(owner != NULL && owner != ptr ? *(S*)owner : S((T*)ptr)), sharedPtr(sharedPtr2) { }
+    WeakPtrAdapter(const W& weakPtr) : ptr(0), size(0), owner(0), sharedPtr2(weakPtr.lock()), sharedPtr(sharedPtr2) { }
+    WeakPtrAdapter(      W& weakPtr) : ptr(0), size(0), owner(0), sharedPtr2(weakPtr.lock()), sharedPtr(sharedPtr2) { }
+    WeakPtrAdapter(const W* weakPtr) : ptr(0), size(0), owner(0), sharedPtr2((*weakPtr).lock()), sharedPtr(sharedPtr2) { }
+
+    void assign(T* ptr, size_t size, void* owner) {
+        this->ptr = ptr;
+        this->size = size;
+        this->owner = owner;
+        this->sharedPtr = owner != NULL && owner != ptr ? *(S*)owner : S((T*)ptr);
+    }
+    static void deallocate(void* owner) { delete (S*)owner; }
+
+    operator typename std::remove_const::type*() {
+        ptr = sharedPtr.get();
+        if (owner == NULL || owner == ptr) {
+          owner = new S(sharedPtr);
+        }
+        return (typename std::remove_const::type*)ptr;
+    }
+
+    operator W() { return W(sharedPtr); }
+    T* ptr;
+    size_t size;
+    void* owner;
+    S sharedPtr2;
+    S& sharedPtr;
 };
\ No newline at end of file
diff --git a/pytorch/src/main/resources/org/bytedeco/pytorch/presets/gloo_include.h b/pytorch/src/main/resources/org/bytedeco/pytorch/presets/gloo_include.h
new file mode 100644
index 00000000000..adbae60f042
--- /dev/null
+++ b/pytorch/src/main/resources/org/bytedeco/pytorch/presets/gloo_include.h
@@ -0,0 +1,16 @@
+#include "gloo/common/string.h"
+//#include "gloo/common/logging.h"
+#include "gloo/transport/address.h"
+#include "gloo/transport/buffer.h"
+#include "gloo/transport/unbound_buffer.h"
+#include "gloo/transport/pair.h"
+//#include "gloo/context.h"
+#include "gloo/common/common.h"
+#include "gloo/types.h"
+#include "gloo/math.h"
+#include "gloo/algorithm.h"
+// #include "gloo/common/error.h"
+#include "gloo/common/store.h"
+#include "gloo/rendezvous/store.h"
+#include "gloo/transport/context.h"
+#include "gloo/transport/device.h"
diff --git a/pytorch/src/main/resources/org/bytedeco/pytorch/presets/torch_cuda_include.h b/pytorch/src/main/resources/org/bytedeco/pytorch/presets/torch_cuda_include.h
index 7c766687c23..280f36fc877 100644
--- a/pytorch/src/main/resources/org/bytedeco/pytorch/presets/torch_cuda_include.h
+++ b/pytorch/src/main/resources/org/bytedeco/pytorch/presets/torch_cuda_include.h
@@ -1,30 +1,31 @@
 #include "c10/util/ArrayRef.h"
 
 // Included by
-// ATen/cudnn/Descriptors.h
 // ATen/cudnn/Types.h
-// c10/cuda/CUDAGuard.h
+// ATen/cudnn/Descriptors.h
+// ATen/cuda/CUDAEvent.h
 // torch/csrc/inductor/aoti_runner/model_container_runner_cuda.h
-#include "ATen/cuda/CUDAContextLight.h"
-#include "c10/cuda/CUDAStream.h"
-#include "ATen/cuda/CUDAContext.h"
+
+#include "ATen/cudnn/cudnn-wrapper.h"
 #include "c10/core/impl/GPUTrace.h"
-#include "c10/cuda/CUDADeviceAssertionHost.h"
+//#include "c10/cuda/impl/cuda_cmake_macros.h"
 #include "c10/cuda/CUDAMacros.h"
-#include "c10/cuda/impl/cuda_cmake_macros.h"
+#include "c10/cuda/CUDADeviceAssertionHost.h"
 #include "c10/cuda/CUDAMiscFunctions.h",
 #include "c10/cuda/CUDAException.h",
 #include "c10/cuda/CUDAFunctions.h",
+#include "ATen/cuda/CUDAContextLight.h"
+#include "c10/cuda/CUDAStream.h"
 #include "ATen/cuda/Exceptions.h"
-#include "ATen/cudnn/cudnn-wrapper.h"
+#include "ATen/cuda/CUDAContext.h"
 #include "ATen/cuda/ATenCUDAGeneral.h"
-#include "ATen/cudnn/Utils.h"
 #include "ATen/cudnn/Handle.h"
+#include "ATen/cudnn/Utils.h"
 #include "c10/cuda/CUDAGraphsC10Utils.h"
-#include "c10/util/ApproximateClock.h"
 #include "c10/cuda/CUDACachingAllocator.h",
 #include "c10/cuda/impl/CUDAGuardImpl.h"
-#include "ATen/cudnn/Descriptors.h"
-#include "ATen/cudnn/Types.h"
 #include "c10/cuda/CUDAGuard.h"
-#include "torch/csrc/inductor/aoti_runner/model_container_runner_cuda.h"
+#include "ATen/cudnn/Types.h"
+#include "ATen/cudnn/Descriptors.h"
+#include "ATen/cuda/CUDAEvent.h"
+#include "torch/csrc/inductor/aoti_runner/model_container_runner_cuda.h"
\ No newline at end of file
diff --git a/pytorch/src/main/resources/org/bytedeco/pytorch/presets/torch_include.h b/pytorch/src/main/resources/org/bytedeco/pytorch/presets/torch_include.h
index 7b3ae1f98cc..c3c86931b83 100644
--- a/pytorch/src/main/resources/org/bytedeco/pytorch/presets/torch_include.h
+++ b/pytorch/src/main/resources/org/bytedeco/pytorch/presets/torch_include.h
@@ -2,6 +2,9 @@
 // #include 
 // #include 
 // #include 
+// torch/csrc/distributed/c10d/ProcessGroupGloo.hpp
+// torch/csrc/distributed/c10d/PrefixStore.hpp
+// torch/csrc/distributed/c10d/logger.hpp
 // as listed by g++ -H torch/torch.h torch/script.h
 // Excluding:
 // - the ones that fill at::meta at::native and at::_ops namespaces
@@ -13,6 +16,8 @@
 #include "c10/macros/Export.h"
 #include "torch/csrc/Export.h"
 #include "c10/macros/Macros.h"
+#include "c10/util/Lazy.h"
+#include "c10/util/Backtrace.h"
 #include "c10/core/DeviceType.h"
 #include "c10/util/Deprecated.h"
 // #include "c10/util/string_utils.h" // Android only
@@ -156,7 +161,7 @@
 #include "c10/core/impl/InlineStreamGuard.h"
 #include "c10/core/StreamGuard.h"
 #include "c10/util/FunctionRef.h"
-#include "c10/util/intrusive_ptr.h"  // Moved after the definition or its template args
+//#include "c10/util/intrusive_ptr.h"  // Moved after the definition or its template args
 #include "ATen/core/ivalue_inl.h"
 #include "ATen/core/ivalue.h"
 #include "ATen/core/List_inl.h"
@@ -227,6 +232,7 @@
 #include "torch/csrc/autograd/input_buffer.h"
 #include "torch/csrc/autograd/utils/warnings.h"
 #include "torch/csrc/autograd/graph_task.h"
+#include "ATen/BlasBackend.h"
 #include "ATen/core/MT19937RNGEngine.h"
 #include "ATen/CPUGeneratorImpl.h"
 #include "ATen/detail/AcceleratorHooksInterface.h"
@@ -239,7 +245,7 @@
 #include "ATen/detail/HIPHooksInterface.h"
 #include "ATen/detail/IPUHooksInterface.h"
 #include "ATen/detail/MPSHooksInterface.h"
-#include "ATen/detail/ORTHooksInterface.h"
+#include "ATen/detail/MAIAHooksInterface.h"
 #include "ATen/detail/PrivateUse1HooksInterface.h"
 #include "ATen/detail/XPUHooksInterface.h"
 #include "c10/core/QEngine.h"
@@ -324,6 +330,7 @@
 #include "ATen/ops/baddbmm.h"
 #include "ATen/ops/bartlett_window.h"
 #include "ATen/ops/batch_norm.h"
+#include "ATen/ops/batch_norm_backward.h"
 #include "ATen/ops/batch_norm_backward_elemt.h"
 #include "ATen/ops/batch_norm_backward_reduce.h"
 #include "ATen/ops/batch_norm_elemt.h"
@@ -936,6 +943,7 @@
 #include "ATen/ops/result_type.h"
 #include "ATen/ops/retain_grad.h"
 #include "ATen/ops/retains_grad.h"
+#include "ATen/ops/rms_norm.h"
 #include "ATen/ops/rnn_relu.h"
 #include "ATen/ops/rnn_relu_cell.h"
 #include "ATen/ops/rnn_tanh.h"
@@ -1197,6 +1205,8 @@
 #include "torch/csrc/utils/variadic.h"
 #include "torch/csrc/autograd/function.h"
 #include "torch/csrc/autograd/variable_info.h"
+#include "torch/csrc/utils/torch_dispatch_mode.h"
+#include "torch/csrc/dynamo/compiled_autograd.h"
 #include "torch/csrc/autograd/custom_function.h"
 #include "torch/csrc/api/include/torch/autograd.h"
 #include "torch/csrc/api/include/torch/cuda.h"
@@ -1403,6 +1413,7 @@
 #include "torch/csrc/api/include/torch/optim/rmsprop.h"
 #include "torch/csrc/api/include/torch/optim/sgd.h"
 #include "torch/csrc/api/include/torch/optim/schedulers/lr_scheduler.h"
+#include "torch/csrc/api/include/torch/optim/schedulers/reduce_on_plateau_scheduler.h"
 #include "torch/csrc/api/include/torch/optim/schedulers/step_lr.h"
 #include "torch/csrc/api/include/torch/optim.h"
 #include "torch/csrc/api/include/torch/sparse.h"
@@ -1437,4 +1448,30 @@
 #include "torch/csrc/inductor/aoti_runner/model_container_runner.h"
 #include "torch/csrc/inductor/aoti_runner/model_container_runner_cpu.h"
 
+#include "torch/csrc/distributed/c10d/Store.hpp"
+#include "torch/csrc/distributed/c10d/Types.hpp"
+#include "torch/csrc/distributed/c10d/Utils.hpp"
+#include "torch/csrc/distributed/c10d/Work.hpp"
+#include "torch/csrc/distributed/c10d/debug.h"
+#include "torch/csrc/distributed/c10d/Backend.hpp"
+#include "torch/csrc/distributed/c10d/ProcessGroup.hpp"
+#include "torch/csrc/distributed/c10d/comm.hpp"
+#include "torch/csrc/distributed/c10d/default_comm_hooks.hpp"
+#include "c10/util/ApproximateClock.h"
+#include "torch/csrc/distributed/c10d/reducer_timer.hpp"
+// #include "torch/csrc/autograd/functions/basic_ops.h" // Not on Windows
+// #include "torch/csrc/autograd/engine.h" // Not on Windows
+// #include "torch/csrc/distributed/autograd/rpc_messages/autograd_metadata.h" // Not on Windows
+// #include "torch/csrc/distributed/rpc/message.h" // Not on Windows
+// #include "torch/csrc/distributed/rpc/request_callback.h" // Not on Windows
+// #include "torch/csrc/distributed/rpc/types.h" // Not on Windows
+// #include "torch/csrc/distributed/rpc/rpc_agent.h" // Not on Windows
+// #include "torch/csrc/distributed/autograd/functions/recvrpc_backward.h" // Not on Windows
+// #include "torch/csrc/distributed/autograd/functions/sendrpc_backward.h" // Not on Windows
+// #include "torch/csrc/distributed/autograd/context/context.h" // Not on Windows
+#include "torch/csrc/distributed/c10d/reducer.hpp"
+#include "torch/csrc/distributed/c10d/ProcessGroupGloo.hpp"
+#include "torch/csrc/distributed/c10d/PrefixStore.hpp"
+#include "torch/csrc/distributed/c10d/logger.hpp"
+
 #include "datasets.h"