This is just an idea and I'd love to hear other opinions: I propose adding `einsum` to einops with additional arguments.

`reduce()` has an argument `reduction` that allows you to specify how a dimension is reduced to a singleton. I think its usefulness is clear to everyone here. However, the native `einsum` implementations in various libraries don't allow this. I propose to go even a step further, which I'll elaborate here:
A matrix-vector multiplication like
```python
einsum('i j, j -> i', A, x)
```
has two parts: on the one hand the reduction via `sum`, and on the other hand a binary operation (the product) between the entries of the two operands. As mentioned above, while `einops.reduce` allows for custom reductions, none (AFAIK) of the native `einsum` functions do, and it would be very nice to have them in `einsum`.
But my main point is that the operation (by default the product) could also be customized.
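To make the decomposition concrete, here is the same matrix-vector product spelled out as those two separate steps. This is a plain NumPy sketch of the semantics, not the proposed API:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
x = np.array([10.0, 1.0])

combined = A * x[None, :]         # binary operation: elementwise product, shape (i, j)
result = combined.sum(axis=-1)    # reduction: sum over j, shape (i,)

assert np.allclose(result, A @ x)
print(result)  # [12. 34.]
```

The proposal is essentially to let both lines be swapped out independently.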
It is maybe a far-fetched example, but let's consider tropical algebra:
In tropical algebra we replace the usual `+` operation by `max` (or equivalently `min`) and the `*` operation by `+`.
So I'd imagine a "tropical matrix multiplication" as something like

```python
einsum('i j, j -> i', A, x, reduction='max', operation='add')
```
(Note that tropical algebra may look quite esoteric, but it is not: if you take a regular convolution and look at its tropical counterpart,
you get the morphological operations.)
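For illustration, the hypothetical tropical call above can already be emulated today with plain broadcasting. The function name and the manual decomposition below are mine, just a sketch of the intended semantics:

```python
import numpy as np

def tropical_matvec(A, x):
    # operation='add': combine entries elementwise, shape (i, j)
    combined = A + x[None, :]
    # reduction='max': collapse the contracted axis j, shape (i,)
    return combined.max(axis=-1)

A = np.array([[0.0, 3.0], [2.0, 1.0]])
x = np.array([1.0, 1.0])
print(tropical_matvec(A, x))  # [4. 3.]
```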
Another example: let's say we have a vector `primes` and a matrix of `exponents`, and we would like to get the actual numbers these exponents represent:

```python
einsum('i, j i -> j', primes, exponents, reduction='prod', operation='pow')
```
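Again, a hand-rolled NumPy sketch of what this call would mean (the helper name is hypothetical): `operation='pow'` pairs `primes[i]` with `exponents[j, i]`, and `reduction='prod'` collapses axis `i`.

```python
import numpy as np

def numbers_from_exponents(primes, exponents):
    # operation='pow': primes[i] ** exponents[j, i], shape (j, i)
    powered = primes[None, :] ** exponents
    # reduction='prod': multiply the prime powers over i, shape (j,)
    return powered.prod(axis=-1)

primes = np.array([2, 3, 5])
exponents = np.array([[2, 1, 0],   # 2**2 * 3**1 = 12
                      [0, 0, 2]])  # 5**2 = 25
print(numbers_from_exponents(primes, exponents))  # [12 25]
```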
Now we could also apply this to boolean values:

```python
accuracy = einsum('i, i ->', predictions, ground_truth, reduction='mean', operation='and')
```
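Emulating this one by hand, again just a sketch of the intended semantics. One caveat worth noting: elementwise `and` counts only the positions where both vectors are True, so for accuracy in the usual classification sense the operation would be equality instead.

```python
import numpy as np

def and_mean(predictions, ground_truth):
    # operation='and': elementwise logical and, shape (i,)
    matches = predictions & ground_truth
    # reduction='mean': collapse to a scalar
    return matches.mean()

predictions = np.array([True, True, False, True])
ground_truth = np.array([True, False, False, True])
print(and_mean(predictions, ground_truth))  # 0.5
```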
So this hypothetical `einsum` would combine the power of custom reductions, which we already know and love, with the power of custom operations, all executed in one go. And we would have all the advantages of the extended einops notation that the native einsums lack, too. (This has already been suggested in #73, but I wanted to make an argument for the custom operation.)
If anyone is still interested in this, I created a package to do this with PyTorch. It's a little rough around the edges and currently only supports Python >= 3.11 and torch >= 2.0. I will work to expand the project to more versions of Python soon. https://github.com/Hprairie/einfunc