
Extending dpctl to support CUDA #1124

Answered by diptorupd
diptorupd asked this question in Ideas

@oleksandr-pavlyk As expected, running a kernel on a CUDA device with a dpctl built using the default settings does not work as-is:

>>> a = dpt.arange(30, device=dev); b = dpt.roll(dpt.concat((dpt.ones(15, dtype=dpt.bool, device=dev), dpt.zeros(15, dtype=dpt.bool, device=dev))), 8); c = a[b]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/diptorupd/Desktop/devel/dpctl/dpctl/tensor/_ctors.py", line 642, in arange
    hev, _ = ti._linspace_step(_start, _step, res, sycl_queue)
RuntimeError: Native API failed. Native API returns: -42 (PI_ERROR_INVALID_BINARY) -42 (PI_ERROR_INVALID_BINARY)
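
For reference, the transcript above is a boolean-mask selection written with the `dpctl.tensor` (`dpt`) API; the failure happens in the very first call (`dpt.arange`), before the mask is ever applied. As a sketch of what the snippet was meant to compute, here is a NumPy stand-in (NumPy is used here only to show the semantics; it does not exercise the SYCL/CUDA code path that is failing):

```python
import numpy as np

# NumPy stand-in for the dpt.* calls in the transcript above.
# Build a mask of 15 Trues followed by 15 Falses, rotate it by 8,
# and use it to select from arange(30).
a = np.arange(30)
mask = np.roll(np.concatenate((np.ones(15, dtype=bool),
                               np.zeros(15, dtype=bool))), 8)
c = a[mask]
print(c)  # the 15 indices whose mask slot is True: 8 through 22
```

The `PI_ERROR_INVALID_BINARY` (-42) return is consistent with the device program not containing a binary the CUDA backend can consume, which is why a rebuild with CUDA support (the patch below) is needed.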

However, it does work after the following small patch:

Author: Diptorup Deb <diptorup.deb@intel.com>  2023-03-14 …

Replies: 3 comments 2 replies

diptorupd
Mar 15, 2023
Maintainer Author

@diptorupd
@ogrisel

Answer selected by diptorupd