
[PyTorch] Update to 2.1 #1426

Merged: 26 commits, Nov 10, 2023
Changes from 21 commits

Commits
92f7ad8  Add item_bool (HGuillemet, Oct 11, 2023)
ebdcdaa  Update Pytorch to 2.1 (HGuillemet, Oct 16, 2023)
7dfa27e  Remove useless classes (HGuillemet, Oct 16, 2023)
672cdfa  Add missing gen classes. Remove useless classes. (HGuillemet, Oct 16, 2023)
4248355  Add CUDACachingAllocator (HGuillemet, Oct 17, 2023)
49f2f18  Update MNIST sample in README (HGuillemet, Oct 17, 2023)
57d89d0  Skip not-exported function (HGuillemet, Oct 17, 2023)
1660fae  gen update (HGuillemet, Oct 17, 2023)
af6b64e  Add CUDAAllocator.recordHistory (HGuillemet, Oct 18, 2023)
2a89d39  Add TensorBase.data_ptr_byte (HGuillemet, Oct 18, 2023)
db626e1  Skip not exported CUDACachingAllocator::format_size (HGuillemet, Oct 23, 2023)
99dbdad  Map generic data loaders (HGuillemet, Oct 23, 2023)
1dc9d4f  Accept Java arrays for primitive ArrayRef (HGuillemet, Oct 24, 2023)
abf565b  Fix get_batch argument type (HGuillemet, Oct 24, 2023)
8499486  Remove GatheredContextSupplier.java (HGuillemet, Oct 25, 2023)
95496c6  Restore missing classes from torch::jit (HGuillemet, Oct 27, 2023)
fe140fd  Update CUDA library paths to 12.3 (HGuillemet, Oct 30, 2023)
4ffcc18  Try to update CUDA archs to "5.0;6.0;7.0;8.0+PTX" for PyTorch (saudet, Oct 31, 2023)
49668cb  Try to update CUDA archs to "5.0;6.0;7.0;8.0;9.0" for PyTorch (HGuillemet, Nov 1, 2023)
2dfcc32  Add item_byte and data_ptr_bool (HGuillemet, Nov 3, 2023)
4fc9e28  Add include_list.pl (HGuillemet, Nov 3, 2023)
d1e473d  Restore parse order of 2.0.1 (HGuillemet, Nov 4, 2023)
df1e13e  Make register_module generic (HGuillemet, Nov 6, 2023)
6fcfb80  Revert renaming of `torch::jit::load` (HGuillemet, Nov 6, 2023)
18dda32  Revert change in README concerning register_module (HGuillemet, Nov 7, 2023)
9a7e6c2  Update CHANGELOG.md and fix nits (saudet, Nov 9, 2023)
14 changes: 7 additions & 7 deletions pytorch/README.md
@@ -9,7 +9,7 @@ Introduction
------------
This directory contains the JavaCPP Presets module for:

-* PyTorch 2.0.1 https://pytorch.org/
+* PyTorch 2.1.0 https://pytorch.org/

Please refer to the parent README.md file for more detailed information about the JavaCPP Presets.

@@ -48,14 +48,14 @@ We can use [Maven 3](http://maven.apache.org/) to download and install automatic
<dependency>
<groupId>org.bytedeco</groupId>
<artifactId>pytorch-platform</artifactId>
-<version>2.0.1-1.5.10-SNAPSHOT</version>
+<version>2.1.0-1.5.10-SNAPSHOT</version>
</dependency>

<!-- Additional dependencies required to use CUDA, cuDNN, and NCCL -->
<dependency>
<groupId>org.bytedeco</groupId>
<artifactId>pytorch-platform-gpu</artifactId>
-<version>2.0.1-1.5.10-SNAPSHOT</version>
+<version>2.1.0-1.5.10-SNAPSHOT</version>
</dependency>

<!-- Additional dependencies to use bundled CUDA, cuDNN, and NCCL -->
@@ -93,9 +93,9 @@ public class SimpleMNIST {
static class Net extends Module {
Net() {
// Construct and register two Linear submodules.
-fc1 = register_module("fc1", new LinearImpl(784, 64));
-fc2 = register_module("fc2", new LinearImpl(64, 32));
-fc3 = register_module("fc3", new LinearImpl(32, 10));
+register_module("fc1", fc1 = new LinearImpl(784, 64));
+register_module("fc2", fc2 = new LinearImpl(64, 32));
+register_module("fc3", fc3 = new LinearImpl(32, 10));
}

// Implement the Net's algorithm.
@@ -109,7 +109,7 @@ public class SimpleMNIST {
}

// Use one of many "standard library" modules.
-LinearImpl fc1 = null, fc2 = null, fc3 = null;
+final LinearImpl fc1, fc2, fc3;
}

public static void main(String[] args) throws Exception {
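Taken together, the hunks above give the sample's Net class the following shape. This is only a sketch assembled from the fragments shown (the forward implementation is elided by the diff and omitted here); it uses the Module, LinearImpl, and register_module names already present in the README sample.

import org.bytedeco.pytorch.LinearImpl;
import org.bytedeco.pytorch.Module;

class Net extends Module {
    // Submodules are now blank finals assigned inline while being registered,
    // instead of being initialized to null and reassigned.
    final LinearImpl fc1, fc2, fc3;

    Net() {
        // Construct and register the Linear submodules, as in the updated README.
        register_module("fc1", fc1 = new LinearImpl(784, 64));
        register_module("fc2", fc2 = new LinearImpl(64, 32));
        register_module("fc3", fc3 = new LinearImpl(32, 10));
    }

    // forward(...) is unchanged by this diff and omitted from the sketch.
}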
4 changes: 2 additions & 2 deletions pytorch/cppbuild.sh
@@ -27,15 +27,15 @@ if [[ "$EXTENSION" == *gpu ]]; then
export USE_CUDNN=1
export USE_FAST_NVCC=0
export CUDA_SEPARABLE_COMPILATION=OFF
-export TORCH_CUDA_ARCH_LIST="5.0;6.0;7.0+PTX"
+export TORCH_CUDA_ARCH_LIST="5.0;6.0;7.0;8.0;9.0"
fi

export PYTHON_BIN_PATH=$(which python3)
if [[ $PLATFORM == windows* ]]; then
export PYTHON_BIN_PATH=$(which python.exe)
fi

-PYTORCH_VERSION=2.0.1
+PYTORCH_VERSION=2.1.0

mkdir -p "$PLATFORM$EXTENSION"
cd "$PLATFORM$EXTENSION"
65 changes: 65 additions & 0 deletions pytorch/include_list.pl
@@ -0,0 +1,65 @@
#!/bin/perl

# Must be run from javacpp-presets/pytorch after cppbuild.sh has been run
# for linux-x86_64-gpu

# Generate the lists of includes to parse, in order, from the output
# of g++ -H
# Used to update src/main/resources/org/bytedeco/pytorch/presets/*

use strict;
use warnings;

my %incs;
my @inc_per_depth;

sub flush($) {
my $min_depth = shift;
for (my $d = @inc_per_depth - 1; $d >= $min_depth; $d--) {
if ($inc_per_depth[$d]) {
foreach my $i (@{$inc_per_depth[$d]}) {
print "#include \"$i\"\n";
$incs{$i} = 1;
}
undef $inc_per_depth[$d];
}
}
}

sub go {
my $path = join ' ', @_;

my @inc = `g++ -I torch/csrc/api/include/ -I. -H $path -E 2>&1 > /dev/null`;
foreach my $i (@inc) {
chomp $i;
my ($depth, $f) = $i =~ /^(\.+)\s(.*\.h)$/;
next unless $depth;
$depth = length($depth);
$f =~ s#^\./##;
next if $f =~ m#^/
|^ATen/ops/\w+_native\.h$
|^ATen/ops/\w+_meta\.h$
|^ATen/ops/\w+_ops\.h$
|^ATen/ops/_\w+\.h$#x
or $incs{$f};
flush($depth);
my $incs = $inc_per_depth[$depth];
$incs = $inc_per_depth[$depth] = [] unless $incs;
push @$incs, $f;
}
flush(0);
}

chdir "cppbuild/linux-x86_64-gpu/pytorch/torch/include";

go('torch/csrc/api/include/torch/torch.h', 'torch/script.h');

print <<EOF;

// Included by
// ATen/cudnn/Descriptors.h
// ATen/cudnn/Types.h
// c10/cuda/CUDAGuard.h
EOF

go('ATen/cudnn/Descriptors.h', 'ATen/cudnn/Types.h', 'c10/cuda/CUDAGuard.h', '-I/opt/cuda/targets/x86_64-linux/include/');
2 changes: 1 addition & 1 deletion pytorch/platform/gpu/pom.xml
@@ -12,7 +12,7 @@

<groupId>org.bytedeco</groupId>
<artifactId>pytorch-platform-gpu</artifactId>
-<version>2.0.1-${project.parent.version}</version>
+<version>2.1.0-${project.parent.version}</version>
<name>JavaCPP Presets Platform GPU for PyTorch</name>

<properties>
2 changes: 1 addition & 1 deletion pytorch/platform/pom.xml
@@ -12,7 +12,7 @@

<groupId>org.bytedeco</groupId>
<artifactId>pytorch-platform</artifactId>
-<version>2.0.1-${project.parent.version}</version>
+<version>2.1.0-${project.parent.version}</version>
<name>JavaCPP Presets Platform for PyTorch</name>

<properties>
2 changes: 1 addition & 1 deletion pytorch/pom.xml
@@ -11,7 +11,7 @@

<groupId>org.bytedeco</groupId>
<artifactId>pytorch</artifactId>
-<version>2.0.1-${project.parent.version}</version>
+<version>2.1.0-${project.parent.version}</version>
<name>JavaCPP Presets for PyTorch</name>

<dependencies>
4 changes: 2 additions & 2 deletions pytorch/samples/pom.xml
@@ -12,14 +12,14 @@
<dependency>
<groupId>org.bytedeco</groupId>
<artifactId>pytorch-platform</artifactId>
-<version>2.0.1-1.5.10-SNAPSHOT</version>
+<version>2.1.0-1.5.10-SNAPSHOT</version>
</dependency>

<!-- Additional dependencies required to use CUDA, cuDNN, and NCCL -->
<dependency>
<groupId>org.bytedeco</groupId>
<artifactId>pytorch-platform-gpu</artifactId>
-<version>2.0.1-1.5.10-SNAPSHOT</version>
+<version>2.1.0-1.5.10-SNAPSHOT</version>
</dependency>

<!-- Additional dependencies to use bundled CUDA, cuDNN, and NCCL -->

This file was deleted.

2 changes: 1 addition & 1 deletion pytorch/src/gen/java/org/bytedeco/pytorch/Allocator.java
@@ -52,7 +52,7 @@ public class Allocator extends Pointer {
// is guaranteed to return a unique_ptr with this deleter attached;
// it means the rawAllocate and rawDeallocate APIs are safe to use.
// This function MUST always return the same BoundDeleter.
-public native @Cast("c10::DeleterFnPtr") PointerConsumer raw_deleter();
+public native PointerConsumer raw_deleter();
public native Pointer raw_allocate(@Cast("size_t") long n);
public native void raw_deallocate(Pointer ptr);
}
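The two raw methods shown above are enough to round-trip an untyped buffer. A minimal sketch, assuming an Allocator instance has been obtained elsewhere from the bindings (how to obtain one is not part of this diff):

import org.bytedeco.javacpp.Pointer;
import org.bytedeco.pytorch.Allocator;

class RawAllocationExample {
    // Allocates n bytes through the given allocator and releases them again.
    static void roundTrip(Allocator allocator, long n) {
        Pointer p = allocator.raw_allocate(n); // raw, untyped buffer of n bytes
        try {
            // ... use the buffer ...
        } finally {
            allocator.raw_deallocate(p);       // release through the same allocator
        }
    }
}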
4 changes: 4 additions & 0 deletions pytorch/src/gen/java/org/bytedeco/pytorch/AnyModule.java
@@ -227,6 +227,8 @@ public class AnyModule extends Pointer {
private native void allocate(@SharedPtr @Cast({"", "std::shared_ptr<torch::nn::ReplicationPad1dImpl>"}) ReplicationPad1dImpl module);
public AnyModule(ConstantPad1dImpl module) { super((Pointer)null); allocate(module); }
private native void allocate(@SharedPtr @Cast({"", "std::shared_ptr<torch::nn::ConstantPad1dImpl>"}) ConstantPad1dImpl module);
+public AnyModule(ZeroPad1dImpl module) { super((Pointer)null); allocate(module); }
+private native void allocate(@SharedPtr @Cast({"", "std::shared_ptr<torch::nn::ZeroPad1dImpl>"}) ZeroPad1dImpl module);
public AnyModule(AvgPool1dImpl module) { super((Pointer)null); allocate(module); }
private native void allocate(@SharedPtr @Cast({"", "std::shared_ptr<torch::nn::AvgPool1dImpl>"}) AvgPool1dImpl module);
public AnyModule(MaxPool1dImpl module) { super((Pointer)null); allocate(module); }
@@ -267,6 +269,8 @@ public class AnyModule extends Pointer {
private native void allocate(@SharedPtr @Cast({"", "std::shared_ptr<torch::nn::ReplicationPad3dImpl>"}) ReplicationPad3dImpl module);
public AnyModule(ConstantPad3dImpl module) { super((Pointer)null); allocate(module); }
private native void allocate(@SharedPtr @Cast({"", "std::shared_ptr<torch::nn::ConstantPad3dImpl>"}) ConstantPad3dImpl module);
+public AnyModule(ZeroPad3dImpl module) { super((Pointer)null); allocate(module); }
+private native void allocate(@SharedPtr @Cast({"", "std::shared_ptr<torch::nn::ZeroPad3dImpl>"}) ZeroPad3dImpl module);
public AnyModule(AvgPool3dImpl module) { super((Pointer)null); allocate(module); }
private native void allocate(@SharedPtr @Cast({"", "std::shared_ptr<torch::nn::AvgPool3dImpl>"}) AvgPool3dImpl module);
public AnyModule(MaxPool3dImpl module) { super((Pointer)null); allocate(module); }
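For illustration, the added overloads let the zero-padding modules be wrapped in the type-erased AnyModule like the other padding modules. A minimal sketch; constructing the ZeroPad1dImpl itself is outside this diff, so an already-built instance is passed in:

import org.bytedeco.pytorch.AnyModule;
import org.bytedeco.pytorch.ZeroPad1dImpl;

class ZeroPadWrapExample {
    // Wraps an existing ZeroPad1dImpl using the constructor added above.
    static AnyModule wrap(ZeroPad1dImpl pad) {
        return new AnyModule(pad);
    }
}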
16 changes: 4 additions & 12 deletions pytorch/src/gen/java/org/bytedeco/pytorch/ArgumentDef.java
@@ -38,18 +38,10 @@ public class ArgumentDef extends Pointer {
return new ArgumentDef((Pointer)this).offsetAddress(i);
}

-public static class GetTypeFn extends FunctionPointer {
-static { Loader.load(); }
-/** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */
-public GetTypeFn(Pointer p) { super(p); }
-protected GetTypeFn() { allocate(); }
-private native void allocate();
-public native @ByVal Type.TypePtr call();
-}
-public native GetTypeFn getTypeFn(); public native ArgumentDef getTypeFn(GetTypeFn setter);
-public native GetTypeFn getFakeTypeFn(); public native ArgumentDef getFakeTypeFn(GetTypeFn setter);
+public native TypeSupplier getTypeFn(); public native ArgumentDef getTypeFn(TypeSupplier setter);
+public native TypeSupplier getFakeTypeFn(); public native ArgumentDef getFakeTypeFn(TypeSupplier setter);
public ArgumentDef() { super((Pointer)null); allocate(); }
private native void allocate();
-public ArgumentDef(GetTypeFn getTypeFn, GetTypeFn getFakeTypeFn) { super((Pointer)null); allocate(getTypeFn, getFakeTypeFn); }
-private native void allocate(GetTypeFn getTypeFn, GetTypeFn getFakeTypeFn);
+public ArgumentDef(TypeSupplier getTypeFn, TypeSupplier getFakeTypeFn) { super((Pointer)null); allocate(getTypeFn, getFakeTypeFn); }
+private native void allocate(TypeSupplier getTypeFn, TypeSupplier getFakeTypeFn);
}
6 changes: 6 additions & 0 deletions pytorch/src/gen/java/org/bytedeco/pytorch/AutogradMeta.java
@@ -71,6 +71,12 @@ public class AutogradMeta extends AutogradMetaInterface {



+// The post_acc_grad_hooks_ field stores only Python hooks
+// (PyFunctionTensorPostAccGradHooks) that are called after the
+// .grad field has been accumulated into. This is less complicated
+// than the hooks_ field, which encapsulates a lot more.
+public native @UniquePtr @Cast({"", "", "std::unique_ptr<torch::autograd::PostAccumulateGradHook>&&"}) PostAccumulateGradHook post_acc_grad_hooks_(); public native AutogradMeta post_acc_grad_hooks_(PostAccumulateGradHook setter);

// Only meaningful on leaf variables (must be false otherwise)
public native @Cast("bool") boolean requires_grad_(); public native AutogradMeta requires_grad_(boolean setter);

@@ -46,7 +46,7 @@ private native void allocate(

public native void set_inference_mode(@Cast("bool") boolean enabled);

-public native void set_multithreading_enabled(@Cast("bool") boolean mulithreading_enabled);
+public native void set_multithreading_enabled(@Cast("bool") boolean multithreading_enabled);

public native void set_view_replay_enabled(@Cast("bool") boolean view_replay_enabled);

49 changes: 49 additions & 0 deletions pytorch/src/gen/java/org/bytedeco/pytorch/BackendMeta.java
@@ -0,0 +1,49 @@
// Targeted by JavaCPP version 1.5.10-SNAPSHOT: DO NOT EDIT THIS FILE

package org.bytedeco.pytorch;

import org.bytedeco.pytorch.Allocator;
import org.bytedeco.pytorch.Function;
import org.bytedeco.pytorch.functions.*;
import org.bytedeco.pytorch.Module;
import org.bytedeco.javacpp.annotation.Cast;
import java.nio.*;
import org.bytedeco.javacpp.*;
import org.bytedeco.javacpp.annotation.*;

import static org.bytedeco.javacpp.presets.javacpp.*;
import static org.bytedeco.openblas.global.openblas_nolapack.*;
import static org.bytedeco.openblas.global.openblas.*;

import static org.bytedeco.pytorch.global.torch.*;


// For ease of copy pasting
// #if 0
// #endif

/**
* This structure is intended to hold additional metadata of the specific device
* backend.
**/
@Namespace("c10") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
public class BackendMeta extends Pointer {
static { Loader.load(); }
/** Default native constructor. */
public BackendMeta() { super((Pointer)null); allocate(); }
/** Native array allocator. Access with {@link Pointer#position(long)}. */
public BackendMeta(long size) { super((Pointer)null); allocateArray(size); }
/** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */
public BackendMeta(Pointer p) { super(p); }
private native void allocate();
private native void allocateArray(long size);
@Override public BackendMeta position(long position) {
return (BackendMeta)super.position(position);
}
@Override public BackendMeta getPointer(long i) {
return new BackendMeta((Pointer)this).offsetAddress(i);
}

public native @ByVal BackendMetaRef clone(
@Const @ByRef BackendMetaRef ptr);
}