liteeth linux driver performance #1924
For reference, the NuttX liteeth driver also had a similar pattern and very low performance. We (@g2gps) made some similar changes and got a big improvement, but I'm not sure those changes are upstream yet.
cc-ing @shenki (original author of the linux liteeth driver) for situational awareness. For my part, I'm happy to help develop and/or review any improvements to the Linux driver, but won't have any "spare cycles" for that until after mid-May of this year.
Yes, we are limited by memcpy_io throughput. On microwatt (ppc64) a patch to unroll that operation gave us much better performance. This change never made it upstream. @antonblanchard might be interested in doing something similar for riscv these days.
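For illustration, here is a minimal user-space sketch of the kind of unrolling being discussed: copying from an MMIO RX slot one 32-bit word at a time, with the word loop unrolled 4x, rather than byte-at-a-time. The function name and buffer layout are hypothetical, not the actual driver code; inside the kernel this would go through the readl/memcpy_fromio accessors rather than raw volatile pointers.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical sketch: copy `len` bytes out of a word-addressable
 * RX slot. Each word access moves 4 bytes per bus transaction,
 * and the main loop is unrolled 4x to cut loop overhead. */
static void liteeth_copy_from_slot(void *dst, const volatile uint32_t *slot,
                                   size_t len)
{
    uint32_t *d = dst;
    size_t words = len / 4;
    size_t i;

    /* 4x-unrolled word copy */
    for (i = 0; i + 4 <= words; i += 4) {
        d[i + 0] = slot[i + 0];
        d[i + 1] = slot[i + 1];
        d[i + 2] = slot[i + 2];
        d[i + 3] = slot[i + 3];
    }
    /* Remaining whole words */
    for (; i < words; i++)
        d[i] = slot[i];

    /* Tail bytes when the frame length is not word-aligned */
    if (len & 3) {
        uint32_t last = slot[words];
        uint8_t *db = (uint8_t *)&d[words];
        const uint8_t *sb = (const uint8_t *)&last;
        for (size_t k = 0; k < (len & 3); k++)
            db[k] = sb[k];
    }
}
```

The win comes from two places: 4 bytes per MMIO transaction instead of 1, and fewer branches per byte. A DMA engine would remove the CPU from the copy entirely, which is why the discussion below moves in that direction.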
Adding DMA capability to liteeth would be ideal, maybe something similar to litesdcard using litedma. This is something we have discussed internally and with @enjoy-digital. We could provide some funding for the liteeth gateware DMA integration.
Thanks everyone. It seems there is now quite some interest in getting DMA capabilities into LiteEth, so no reason not to make it happen :) We already got some contributions on this but also want to make sure the approach stays close to the LiteSDCard/LiteSATA DMA. I'll try to provide a first implementation from the different discussions/PRs. Initial PRs/implementations that need to be reviewed / could be used as a basis:
Thanks ^^
Hi,
I was looking at the maximum bandwidth reachable with liteeth in Linux (+ vexii)
https://github.com/litex-hub/linux/blob/master/drivers/net/ethernet/litex/litex_liteeth.c
And it seems there are some things that can be improved in the Linux driver. Currently it caps at around 20-25 Mbps (via iperf3). The bottleneck seems to be the copy path:
On the scope I see ~20 cycles per byte transferred (100 MHz CPU), meaning the current implementation bottlenecks at 40 Mbps from this alone.
Running iperf3 from localhost to localhost gets around 230 Mbps (so userspace isn't the bottleneck).
So I'm not really sure, but with some optimization we might be able to saturate the 100 Mbps Ethernet <3