[Pytorch] New version of the presets #1360

Merged Jul 25, 2023 · 71 commits

Changes from 3 commits
96e3a65
Reorganization, use of new JavaCPP features, more mapping
HGuillemet Apr 16, 2023
9152ff6
Add missing exports in module-info
HGuillemet May 22, 2023
a80ea2e
Add 2 missing includes, reindent
HGuillemet May 23, 2023
809f2c3
Add missing includes
HGuillemet May 23, 2023
42e877c
Remove 3 classes not in API
HGuillemet May 23, 2023
ad9e7d0
Update gen
HGuillemet May 23, 2023
03af32c
Fix Module::apply and JitModule::apply
HGuillemet May 23, 2023
50c2d70
Remove includes needing CUDA installed
HGuillemet May 23, 2023
f11a5c4
Fix windows build.
HGuillemet May 24, 2023
9001e16
Skip some "internal-only" functions
HGuillemet May 24, 2023
309b8f4
Update gen
HGuillemet May 24, 2023
c499367
Fix make_generator for Windows
HGuillemet May 24, 2023
ebb9249
Move cuda-specific to torch_cuda
HGuillemet May 27, 2023
c64ff76
Add nvfuser to preloads
HGuillemet May 27, 2023
8ec1916
Exclude more non-exported symbols
HGuillemet May 27, 2023
534fe1b
gen update
HGuillemet May 27, 2023
5b198c1
cuda gen update
HGuillemet May 27, 2023
7fb3268
Merge 2.0.1 changes from master
HGuillemet May 29, 2023
7527850
Skip EnumHolder::is
HGuillemet May 29, 2023
2c293c5
Add include path for CUDA on Windows
HGuillemet May 30, 2023
d720854
Fix torch_cuda windows linking
HGuillemet May 30, 2023
a9b3d2a
Fix torch_cuda windows linking
HGuillemet May 30, 2023
4f53b93
Change TORCH_CUDA_ARCH_LIST
HGuillemet May 31, 2023
0fc6776
* Upgrade presets for FFmpeg 6.0, HDF5 1.14.1, LLVM 16.0.4, NVIDIA V…
saudet May 28, 2023
0324667
* Add new `SampleJpegEncoder` code for nvJPEG module of CUDA (pull #…
devjeonghwan May 31, 2023
ec64a17
Merge branch 'master' into hg_pytorch
HGuillemet Jun 1, 2023
3e3fe5c
Add rm in deploy-centos to preserve disk space
HGuillemet Jun 2, 2023
b6d0123
Change TORCH_CUDA_ARCH_LIST
HGuillemet Jun 2, 2023
7333646
Change TORCH_CUDA_ARCH_LIST.
HGuillemet Jun 3, 2023
c556443
Merge remote-tracking branch 'origin/master' into hg_pytorch
HGuillemet Jun 7, 2023
1b5b94f
Change version to 2.0.1-new. gen update.
HGuillemet Jun 7, 2023
038b07a
Revert TORCH_CUDA_ARCH_LIST to 5.0+PTX
HGuillemet Jun 9, 2023
97f4aaa
Merge remote-tracking branch 'origin/master' into hg_pytorch
HGuillemet Jun 10, 2023
6c9188e
Update to 1.5.10-SNAPSHOT
HGuillemet Jun 10, 2023
0b20648
Deploy on Ubuntu instead of Centos
HGuillemet Jun 10, 2023
9263f7c
Try to fix CUDA builds on Ubuntu
saudet Jun 12, 2023
657ce64
Fix CUDA builds on Ubuntu some more
saudet Jun 12, 2023
49a73de
Fix incorrect versions
saudet Jun 12, 2023
ddab4a5
Fix workflow for ccache
saudet Jun 12, 2023
12b4523
Revert unnecessary changes to deploy-centos/action.yml
saudet Jun 12, 2023
47a3e16
Load include list from resources in init().
HGuillemet Jun 13, 2023
15012ba
Use C format for list of parsed headers
HGuillemet Jun 14, 2023
ac34d7c
Link jnitorch_cuda with cudart on windows
HGuillemet Jun 15, 2023
c74a931
Fix linking jni torch_cuda with cudart
HGuillemet Jun 16, 2023
808a2c8
Add linking jni torch_cuda with cusparse
HGuillemet Jun 16, 2023
14fed5e
Add linking jni torch_cuda with nvJitLink
HGuillemet Jun 16, 2023
4d5fd3d
Check against parser class name instead of parser class.
HGuillemet Jun 20, 2023
01e2996
Merge branch 'master' into hg_pytorch
HGuillemet Jun 21, 2023
f341fd5
Simplify initIncludes
HGuillemet Jun 21, 2023
58372cb
Revert nvJitLink linking
HGuillemet Jun 26, 2023
3ba2c44
Fix cusolver version in preloads
HGuillemet Jun 26, 2023
fb172f3
Merge branch 'master' into hg_pytorch
HGuillemet Jun 26, 2023
992ed3a
Add GenericDictEntryRef
HGuillemet Jul 7, 2023
ee49627
Cleanup OrderedDict
HGuillemet Jul 7, 2023
34c7ec8
Changes to functions after bytedeco/javacpp@d8b1890
HGuillemet Jul 8, 2023
07dfacd
Add missing gen files for OrderedDict
HGuillemet Jul 8, 2023
8c025db
Remove useless mapping after bytedeco/javacpp@2dacec9
HGuillemet Jul 9, 2023
3c95bef
Update gen after bytedeco/javacpp@ec90945
HGuillemet Jul 10, 2023
c81d251
Add `@NoOffset` on Call
HGuillemet Jul 10, 2023
4ebe97c
Workaround for TransformerActivation.get2 returning a std::function
HGuillemet Jul 11, 2023
43db4eb
Rename c10::variant instances for consistency
HGuillemet Jul 11, 2023
2edaa3b
Fix TensorActivation.get2 now returning a TensorMapper. Change access…
HGuillemet Jul 11, 2023
aba8fd8
Merge remote-tracking branch 'origin/master' into hg_pytorch
HGuillemet Jul 11, 2023
83dce9c
Remove TensorActivation.get2
HGuillemet Jul 12, 2023
8a28331
Merge branch 'master' into hg_pytorch
HGuillemet Jul 22, 2023
eff0259
Update gen after bytedeco/javacpp@8646e97
HGuillemet Jul 22, 2023
f8cd7ec
Update CHANGELOG.md and fix nits
saudet Jul 22, 2023
c14b39a
Add missing at::sqrt(Tensor) and other complex math operators
HGuillemet Jul 23, 2023
2c4ff2d
Add ska::detailv3::log2 masked by last commit
HGuillemet Jul 23, 2023
8c16124
Skip one-element constructor for all ArrayRef instances
HGuillemet Jul 23, 2023
f5cc0be
Add ArrayRef constructor taking a std::vector
HGuillemet Jul 23, 2023
3 changes: 1 addition & 2 deletions pytorch/src/gen/java/org/bytedeco/pytorch/BlockArrayRef.java
@@ -41,8 +41,7 @@ public class BlockArrayRef extends Pointer {

/** Construct an ArrayRef from a single element. */
// TODO Make this explicit
- public BlockArrayRef(@ByPtrRef Block OneElt) { super((Pointer)null); allocate(OneElt); }
- private native void allocate(@ByPtrRef Block OneElt);

saudet marked this conversation as resolved.

/** Construct an ArrayRef from a pointer and length. */
public BlockArrayRef(@Cast("torch::jit::Block**") PointerPointer data, @Cast("size_t") long length) { super((Pointer)null); allocate(data, length); }
2 changes: 2 additions & 0 deletions pytorch/src/gen/java/org/bytedeco/pytorch/DimnameArrayRef.java
@@ -59,6 +59,8 @@ public class DimnameArrayRef extends Pointer {
// The enable_if stuff here makes sure that this isn't used for
// std::vector<bool>, because ArrayRef can't work on a std::vector<bool>
// bitfield.
+ public DimnameArrayRef(@ByRef DimnameVector vec) { super((Pointer)null); allocate(vec); }
+ private native void allocate(@ByRef DimnameVector vec);

/** Construct an ArrayRef from a std::array */
2 changes: 2 additions & 0 deletions pytorch/src/gen/java/org/bytedeco/pytorch/DoubleArrayRef.java
@@ -67,6 +67,8 @@ public class DoubleArrayRef extends Pointer {
// The enable_if stuff here makes sure that this isn't used for
// std::vector<bool>, because ArrayRef can't work on a std::vector<bool>
// bitfield.
+ public DoubleArrayRef(@ByRef DoubleVector vec) { super((Pointer)null); allocate(vec); }
+ private native void allocate(@ByRef DoubleVector vec);

/** Construct an ArrayRef from a std::array */
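Note: the same two added lines recur in every generated ArrayRef class in this diff. A minimal usage sketch of the new constructor (assuming the varargs DoubleVector helper that JavaCPP generates for mapped std::vector types; an ArrayRef is a non-owning view, so the vector must outlive it):

import org.bytedeco.pytorch.DoubleArrayRef;
import org.bytedeco.pytorch.DoubleVector;

public class ArrayRefSketch {
    public static void main(String[] args) {
        DoubleVector vec = new DoubleVector(1.0, 2.0, 3.0); // native std::vector<double>
        DoubleArrayRef ref = new DoubleArrayRef(vec);       // new in this PR: wrap it as a view
        System.out.println(ref.size());                     // 3; keep vec alive while ref is used
    }
}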
2 changes: 2 additions & 0 deletions pytorch/src/gen/java/org/bytedeco/pytorch/IValueArrayRef.java
@@ -59,6 +59,8 @@ public class IValueArrayRef extends Pointer {
// The enable_if stuff here makes sure that this isn't used for
// std::vector<bool>, because ArrayRef can't work on a std::vector<bool>
// bitfield.
+ public IValueArrayRef(@ByRef IValueVector vec) { super((Pointer)null); allocate(vec); }
+ private native void allocate(@ByRef IValueVector vec);

/** Construct an ArrayRef from a std::array */
2 changes: 2 additions & 0 deletions pytorch/src/gen/java/org/bytedeco/pytorch/LongArrayRef.java
@@ -67,6 +67,8 @@ public class LongArrayRef extends Pointer {
// The enable_if stuff here makes sure that this isn't used for
// std::vector<bool>, because ArrayRef can't work on a std::vector<bool>
// bitfield.
+ public LongArrayRef(@ByRef LongVector vec) { super((Pointer)null); allocate(vec); }
+ private native void allocate(@ByRef LongVector vec);

/** Construct an ArrayRef from a std::array */
2 changes: 2 additions & 0 deletions pytorch/src/gen/java/org/bytedeco/pytorch/SavedVariableArrayRef.java
@@ -59,6 +59,8 @@ public class SavedVariableArrayRef extends Pointer {
// The enable_if stuff here makes sure that this isn't used for
// std::vector<bool>, because ArrayRef can't work on a std::vector<bool>
// bitfield.
+ public SavedVariableArrayRef(@ByRef SavedVariableVector vec) { super((Pointer)null); allocate(vec); }
+ private native void allocate(@ByRef SavedVariableVector vec);

/** Construct an ArrayRef from a std::array */
2 changes: 2 additions & 0 deletions pytorch/src/gen/java/org/bytedeco/pytorch/ScalarTypeArrayRef.java
@@ -59,6 +59,8 @@ public class ScalarTypeArrayRef extends Pointer {
// The enable_if stuff here makes sure that this isn't used for
// std::vector<bool>, because ArrayRef can't work on a std::vector<bool>
// bitfield.
+ public ScalarTypeArrayRef(@ByRef ScalarTypeVector vec) { super((Pointer)null); allocate(vec); }
+ private native void allocate(@ByRef ScalarTypeVector vec);

/** Construct an ArrayRef from a std::array */
2 changes: 2 additions & 0 deletions pytorch/src/gen/java/org/bytedeco/pytorch/SizeTArrayRef.java
@@ -59,6 +59,8 @@ public class SizeTArrayRef extends Pointer {
// The enable_if stuff here makes sure that this isn't used for
// std::vector<bool>, because ArrayRef can't work on a std::vector<bool>
// bitfield.
+ public SizeTArrayRef(@ByRef SizeTVector vec) { super((Pointer)null); allocate(vec); }
+ private native void allocate(@ByRef SizeTVector vec);

/** Construct an ArrayRef from a std::array */
2 changes: 2 additions & 0 deletions pytorch/src/gen/java/org/bytedeco/pytorch/StrideArrayRef.java
@@ -59,6 +59,8 @@ public class StrideArrayRef extends Pointer {
// The enable_if stuff here makes sure that this isn't used for
// std::vector<bool>, because ArrayRef can't work on a std::vector<bool>
// bitfield.
+ public StrideArrayRef(@ByRef StrideVector vec) { super((Pointer)null); allocate(vec); }
+ private native void allocate(@ByRef StrideVector vec);

/** Construct an ArrayRef from a std::array */
2 changes: 2 additions & 0 deletions pytorch/src/gen/java/org/bytedeco/pytorch/StringArrayRef.java
@@ -59,6 +59,8 @@ public class StringArrayRef extends Pointer {
// The enable_if stuff here makes sure that this isn't used for
// std::vector<bool>, because ArrayRef can't work on a std::vector<bool>
// bitfield.
+ public StringArrayRef(@ByRef StringVector vec) { super((Pointer)null); allocate(vec); }
+ private native void allocate(@ByRef StringVector vec);

/** Construct an ArrayRef from a std::array */
2 changes: 2 additions & 0 deletions pytorch/src/gen/java/org/bytedeco/pytorch/SymIntArrayRef.java
@@ -59,6 +59,8 @@ public class SymIntArrayRef extends Pointer {
// The enable_if stuff here makes sure that this isn't used for
// std::vector<bool>, because ArrayRef can't work on a std::vector<bool>
// bitfield.
+ public SymIntArrayRef(@ByRef SymIntVector vec) { super((Pointer)null); allocate(vec); }
+ private native void allocate(@ByRef SymIntVector vec);

/** Construct an ArrayRef from a std::array */
2 changes: 2 additions & 0 deletions pytorch/src/gen/java/org/bytedeco/pytorch/SymbolArrayRef.java
@@ -59,6 +59,8 @@ public class SymbolArrayRef extends Pointer {
// The enable_if stuff here makes sure that this isn't used for
// std::vector<bool>, because ArrayRef can't work on a std::vector<bool>
// bitfield.
+ public SymbolArrayRef(@ByRef SymbolVector vec) { super((Pointer)null); allocate(vec); }
+ private native void allocate(@ByRef SymbolVector vec);

/** Construct an ArrayRef from a std::array */
2 changes: 2 additions & 0 deletions pytorch/src/gen/java/org/bytedeco/pytorch/TensorArrayRef.java
@@ -59,6 +59,8 @@ public class TensorArrayRef extends Pointer {
// The enable_if stuff here makes sure that this isn't used for
// std::vector<bool>, because ArrayRef can't work on a std::vector<bool>
// bitfield.
+ public TensorArrayRef(@ByRef TensorVector vec) { super((Pointer)null); allocate(vec); }
+ private native void allocate(@ByRef TensorVector vec);

/** Construct an ArrayRef from a std::array */
4 changes: 2 additions & 2 deletions pytorch/src/gen/java/org/bytedeco/pytorch/TensorIndexArrayRef.java
@@ -59,12 +59,12 @@ public class TensorIndexArrayRef extends Pointer {
// The enable_if stuff here makes sure that this isn't used for
// std::vector<bool>, because ArrayRef can't work on a std::vector<bool>
// bitfield.
- public TensorIndexArrayRef(@ByRef TensorIndexVector Vec) { super((Pointer)null); allocate(Vec); }
- private native void allocate(@ByRef TensorIndexVector Vec);

/** Construct an ArrayRef from a std::array */

/** Construct an ArrayRef from a C array. */
+ public TensorIndexArrayRef(@ByRef TensorIndexVector vec) { super((Pointer)null); allocate(vec); }
+ private native void allocate(@ByRef TensorIndexVector vec);

/** Construct an ArrayRef from a std::initializer_list. */
/* implicit */
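Usage sketch for this constructor (hypothetical: assumes torch.eye, the TensorIndex(long) constructor, and a Tensor.index overload taking a TensorIndexArrayRef, all from the generated bindings):

import org.bytedeco.pytorch.*;
import org.bytedeco.pytorch.global.torch;

public class IndexingSketch {
    public static void main(String[] args) {
        Tensor t = torch.eye(4);                             // 4x4 identity
        TensorIndexVector idx = new TensorIndexVector(
                new TensorIndex(1), new TensorIndex(2));     // indices for t[1, 2]
        Tensor elem = t.index(new TensorIndexArrayRef(idx)); // view backed by idx
        System.out.println(elem.dim());                      // 0 (scalar result)
    }
}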
2 changes: 2 additions & 0 deletions pytorch/src/gen/java/org/bytedeco/pytorch/TypeArrayRef.java
@@ -59,6 +59,8 @@ public class TypeArrayRef extends Pointer {
// The enable_if stuff here makes sure that this isn't used for
// std::vector<bool>, because ArrayRef can't work on a std::vector<bool>
// bitfield.
+ public TypeArrayRef(@ByRef TypeVector vec) { super((Pointer)null); allocate(vec); }
+ private native void allocate(@ByRef TypeVector vec);

/** Construct an ArrayRef from a std::array */
5 changes: 3 additions & 2 deletions pytorch/src/gen/java/org/bytedeco/pytorch/ValueArrayRef.java
@@ -41,8 +41,7 @@ public class ValueArrayRef extends Pointer {

/** Construct an ArrayRef from a single element. */
// TODO Make this explicit
- public ValueArrayRef(@ByPtrRef Value OneElt) { super((Pointer)null); allocate(OneElt); }
- private native void allocate(@ByPtrRef Value OneElt);

/** Construct an ArrayRef from a pointer and length. */
public ValueArrayRef(@Cast("torch::jit::Value**") PointerPointer data, @Cast("size_t") long length) { super((Pointer)null); allocate(data, length); }
@@ -64,6 +63,8 @@ public class ValueArrayRef extends Pointer {
// The enable_if stuff here makes sure that this isn't used for
// std::vector<bool>, because ArrayRef can't work on a std::vector<bool>
// bitfield.
+ public ValueArrayRef(@ByRef ValueVector vec) { super((Pointer)null); allocate(vec); }
+ private native void allocate(@ByRef ValueVector vec);

/** Construct an ArrayRef from a std::array */
pytorch/src/gen/java/org/bytedeco/pytorch/global/torch.java
@@ -5672,7 +5672,7 @@ public class torch extends org.bytedeco.pytorch.presets.torch {
@Namespace("ska::detailv3") @MemberGetter public static native byte min_lookups();
public static final byte min_lookups = min_lookups();

+ @Namespace("ska::detailv3") public static native byte log2(@Cast("uint64_t") long value);

@Namespace("ska::detailv3") public static native @Cast("uint64_t") long next_power_of_two(@Cast("uint64_t") long i);
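The restored binding computes the integer floor of log2 as used by ska::flat_hash_map's bucket sizing. A small sketch against the signature shown above:

import org.bytedeco.pytorch.global.torch;

public class Log2Sketch {
    public static void main(String[] args) {
        byte bits = torch.log2(1024L); // int8_t in C++; expected 10 = floor(log2(1024))
        System.out.println(bits);
    }
}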
26 changes: 13 additions & 13 deletions pytorch/src/main/java/org/bytedeco/pytorch/presets/torch.java
@@ -749,16 +749,6 @@ public void map(InfoMap infoMap) {
)).put(new Info("c10::ArrayRef<at::Tag>::vec()").skip() // Is there any way to make this work ?
);

- infoMap
-     .put(new Info("c10::ArrayRef<torch::Tensor>(std::vector<torch::Tensor,A>&)").javaText(
-         "public TensorArrayRef(@ByRef TensorVector Vec) { super((Pointer)null); allocate(Vec); }\n"
-         + "private native void allocate(@ByRef TensorVector Vec);"))
-
-     .put(new Info("c10::ArrayRef<at::indexing::TensorIndex>(std::vector<at::indexing::TensorIndex,A>&)").javaText(
-         "public TensorIndexArrayRef(@ByRef TensorIndexVector Vec) { super((Pointer)null); allocate(Vec); }\n"
-         + "private native void allocate(@ByRef TensorIndexVector Vec);"))
-     ;
-

//// c10::List
@@ -948,7 +938,7 @@ public void map(InfoMap infoMap) {
}) {
    infoMap.put(new Info(template("torch::jit::List", t[1])).pointerTypes(t[0]))
        .put(new Info(template("torch::jit::ListIterator", t[1])).pointerTypes(t[0] + "Iterator"))
-       .put(new Info(template("torch::jit::List", t[1]) + "::map").skip()); // Could map if needed
+       .put(new Info(template("torch::jit::List", t[1]) + "::map").skip()) // Could map if needed
    ;
}
infoMap.put(new Info("torch::jit::TreeList::const_iterator").cast().pointerTypes("TreeRef"));
@@ -1843,7 +1833,8 @@ We need either to put an annotation info on each member, or javaName("@NoOffset
// are parsed after complex_math.h and Parser would set the qualified names to the first
// matching cppName it finds in infoMap.
}
- infoMap.put(new Info("c10_complex_math::pow(c10::complex<T>&, c10::complex<U>&)").javaText(
+ infoMap.put(new Info("ska::detailv3::log2").javaNames("log2")) // Same reason
+     .put(new Info("c10_complex_math::pow(c10::complex<T>&, c10::complex<U>&)").javaText(
    "@Namespace(\"c10_complex_math\") public static native @ByVal @Name(\"pow<double,float>\") DoubleComplex pow(@Const @ByRef DoubleComplex x, @Const @ByRef FloatComplex y);\n"
    + "@Namespace(\"c10_complex_math\") public static native @ByVal @Name(\"pow<float,double>\") DoubleComplex pow(@Const @ByRef FloatComplex x, @Const @ByRef DoubleComplex y);\n"
))
@@ -2327,7 +2318,8 @@ void mapArrayRef(InfoMap infoMap) {
String mainName = cppNames[n++] = template("c10::ArrayRef", vt);
cppNames[n++] = template("at::ArrayRef", vt);
cppNames[n++] = template("torch::ArrayRef", vt);
- infoMap.put(new Info(mainName + "(const " + vt + "&)").skip());// Causes SIGSEGV since it just make a pointer to the value
+ infoMap.put(new Info(mainName + "(const " + vt + "&)").skip())// Causes SIGSEGV since it just make a pointer to the value
+     .put(new Info(mainName + "(" + vt + "&)").skip());// Parser removes const for non-investigated reasons for some elementTypes (eg Block*)
// With the following info, any operator<<
//infoMap.put(new Info(template("c10::operator <<", vt)).javaNames("shiftLeft"));
}
@@ -2359,6 +2351,14 @@ void mapArrayRef(InfoMap infoMap) {
infoMap.put(info);
infoMap.put(new Info(cppNamesRIterator).skip());

+ // Add templated constructor taking a std::vector, if the vector class has been mapped.
+ // Relies on the fact that std::vector info are created before.
+ Info vectorInfo = infoMap.getFirst(template("std::vector", elementTypes[0]), false);
+ if (vectorInfo != null && !elementTypes[0].equals("bool"))
+     infoMap.put(new Info(template(cppNames[0], template("std::allocator", elementTypes[0])) + "(" + elementTypes[0] + "*)")
+         .javaText(
+             "public " + baseJavaName + "ArrayRef(@ByRef " + baseJavaName + "Vector vec) { super((Pointer)null); allocate(vec); }\n"
+             + "private native void allocate(@ByRef " + baseJavaName + "Vector vec);"));
}

void mapList(InfoMap infoMap) {
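For traceability: with elementTypes[0] = "int64_t" and baseJavaName = "Long" (the presumed mapping behind LongArrayRef), the javaText injected above expands to exactly the constructor pair visible in LongArrayRef.java earlier in this diff:

public LongArrayRef(@ByRef LongVector vec) { super((Pointer)null); allocate(vec); }
private native void allocate(@ByRef LongVector vec);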