[PyTorch] Update to 2.1 #1426

Merged: 26 commits (Nov 10, 2023)
Commits
92f7ad8  Add item_bool (HGuillemet, Oct 11, 2023; sketch 1 below)
ebdcdaa  Update PyTorch to 2.1 (HGuillemet, Oct 16, 2023)
7dfa27e  Remove useless classes (HGuillemet, Oct 16, 2023)
672cdfa  Add missing gen classes. Remove useless classes. (HGuillemet, Oct 16, 2023)
4248355  Add CUDACachingAllocator (HGuillemet, Oct 17, 2023)
49f2f18  Update MNIST sample in README (HGuillemet, Oct 17, 2023)
57d89d0  Skip not-exported function (HGuillemet, Oct 17, 2023)
1660fae  gen update (HGuillemet, Oct 17, 2023)
af6b64e  Add CUDAAllocator.recordHistory (HGuillemet, Oct 18, 2023)
2a89d39  Add TensorBase.data_ptr_byte (HGuillemet, Oct 18, 2023; sketch 1 below)
db626e1  Skip not exported CUDACachingAllocator::format_size (HGuillemet, Oct 23, 2023)
99dbdad  Map generic data loaders (HGuillemet, Oct 23, 2023)
1dc9d4f  Accept Java arrays for primitive ArrayRef (HGuillemet, Oct 24, 2023; sketch 2 below)
abf565b  Fix get_batch argument type (HGuillemet, Oct 24, 2023)
8499486  Remove GatheredContextSupplier.java (HGuillemet, Oct 25, 2023)
95496c6  Restore missing classes from torch::jit (HGuillemet, Oct 27, 2023)
fe140fd  Update CUDA library paths to 12.3 (HGuillemet, Oct 30, 2023)
4ffcc18  Try to update CUDA archs to "5.0;6.0;7.0;8.0+PTX" for PyTorch (saudet, Oct 31, 2023)
49668cb  Try to update CUDA archs to "5.0;6.0;7.0;8.0;9.0" for PyTorch (HGuillemet, Nov 1, 2023)
2dfcc32  Add item_byte and data_ptr_bool (HGuillemet, Nov 3, 2023; sketch 1 below)
4fc9e28  Add include_list.pl (HGuillemet, Nov 3, 2023)
d1e473d  Restore parse order of 2.0.1 (HGuillemet, Nov 4, 2023)
df1e13e  Make register_module generic (HGuillemet, Nov 6, 2023; sketch 3 below)
6fcfb80  Revert renaming of `torch::jit::load` (HGuillemet, Nov 6, 2023)
18dda32  Revert change in README concerning register_module (HGuillemet, Nov 7, 2023)
9a7e6c2  Update CHANGELOG.md and fix nits (saudet, Nov 9, 2023)
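
Sketch 1: the new element accessors (item_bool, item_byte, data_ptr_byte, data_ptr_bool). This is a minimal hypothetical usage sketch, not code from this PR: it assumes item_bool/item_byte return Java primitives and data_ptr_byte returns a JavaCPP BytePointer, by analogy with the existing item_int/data_ptr_float family; the factory and dtype-conversion calls (ones, zeros, to(ScalarType)) are assumptions about the generated bindings.

```java
import org.bytedeco.javacpp.BytePointer;
import org.bytedeco.pytorch.Tensor;
import static org.bytedeco.pytorch.global.torch.*;

public class ScalarAccessors {
    public static void main(String[] args) {
        // 0-dim scalars (shapes and dtype conversions are assumptions).
        Tensor flag = ones(new long[0]).to(ScalarType.Bool);
        boolean b = flag.item_bool();          // commit 92f7ad8

        Tensor small = zeros(new long[0]).to(ScalarType.Byte);
        byte v = small.item_byte();            // commit 2dfcc32

        // Raw byte view of a tensor's storage.
        Tensor raw = zeros(4).to(ScalarType.Byte);
        BytePointer p = raw.data_ptr_byte();   // commit 2a89d39
        System.out.println(b + " " + v + " " + p.get(0));
    }
}
```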
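Sketch 2: passing Java arrays where a primitive ArrayRef parameter is declared (commit 1dc9d4f). Also hypothetical: the exact set of generated overloads and the LongArrayRef constructor shown are assumptions; the point is that the explicit wrapper becomes optional.

```java
import org.bytedeco.pytorch.LongArrayRef;
import org.bytedeco.pytorch.Tensor;
import static org.bytedeco.pytorch.global.torch.*;

public class ArrayRefArgs {
    public static void main(String[] args) {
        long[] shape = {2, 3};

        // Explicit wrapper, as required before the change (assumed constructor).
        Tensor a = ones(new LongArrayRef(shape, shape.length));

        // Plain Java array, accepted after commit 1dc9d4f (assumed overload).
        Tensor b = ones(shape);

        System.out.println(a.equal(b));  // expected: true
    }
}
```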
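Sketch 3: the generic register_module (commit df1e13e). A sketch assuming a signature along the lines of `<M extends Module> M register_module(String name, M module)` and the LinearImpl class from these presets; previously the returned reference had to be downcast from Module.

```java
import org.bytedeco.pytorch.LinearImpl;
import org.bytedeco.pytorch.Module;
import org.bytedeco.pytorch.Tensor;

public class Net extends Module {
    final LinearImpl fc;

    Net() {
        // No cast needed: the concrete module type flows through register_module.
        fc = register_module("fc", new LinearImpl(4, 2));
    }

    Tensor forward(Tensor x) {
        return fc.forward(x);
    }
}
```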
pytorch/include_list.pl (new file, 65 additions, 0 deletions)
#!/bin/perl

# Must be run from javacpp-presets/pytorch after cppbuild.sh has been run
# for linux-x86_64-gpu.

# Generate the lists of includes to parse, in order, from the output
# of g++ -H. The result is printed to standard output and is used to
# update src/main/resources/org/bytedeco/pytorch/presets/*

use strict;
use warnings;

my %incs;          # headers already printed
my @inc_per_depth; # queued headers, indexed by inclusion depth

# Print and record every queued header of depth >= $min_depth, deepest
# first, so that each header is emitted after the headers it includes.
sub flush($) {
    my $min_depth = shift;
    for (my $d = @inc_per_depth - 1; $d >= $min_depth; $d--) {
        if ($inc_per_depth[$d]) {
            foreach my $i (@{$inc_per_depth[$d]}) {
                print "#include \"$i\"\n";
                $incs{$i} = 1;
            }
            undef $inc_per_depth[$d];
        }
    }
}

# Run g++ -E -H on the given header(s): -H reports every header opened,
# one per line, prefixed with dots whose count encodes the inclusion
# depth. Queue each header at its depth and flush as the walk unwinds.
sub go {
    my $path = join ' ', @_;

    my @inc = `g++ -I torch/csrc/api/include/ -I. -H $path -E 2>&1 > /dev/null`;
    foreach my $i (@inc) {
        chomp $i;
        my ($depth, $f) = $i =~ /^(\.+)\s(.*\.h)$/;
        next unless $depth;
        $depth = length($depth);
        $f =~ s#^\./##;

        # Skip absolute paths, generated per-operator ATen headers,
        # and headers that have already been printed.
        next if $f =~ m#^/
                       |^ATen/ops/\w+_native\.h$
                       |^ATen/ops/\w+_meta\.h$
                       |^ATen/ops/\w+_ops\.h$
                       |^ATen/ops/_\w+\.h$#x
                or $incs{$f};

        # Emit everything deeper than this header first, then queue it
        # at its own depth.
        flush($depth);
        my $incs = $inc_per_depth[$depth];
        $incs = $inc_per_depth[$depth] = [] unless $incs;
        push @$incs, $f;
    }
    flush(0);
}

chdir "cppbuild/linux-x86_64-gpu/pytorch/torch/include";

# Headers reachable from the main C++ API entry points.
go('torch/csrc/api/include/torch/torch.h', 'torch/script.h');

print <<EOF;

// Included by
// ATen/cudnn/Descriptors.h
// ATen/cudnn/Types.h
// c10/cuda/CUDAGuard.h
EOF

go('ATen/cudnn/Descriptors.h', 'ATen/cudnn/Types.h', 'c10/cuda/CUDAGuard.h', '-I/opt/cuda/targets/x86_64-linux/include/');