
Add loopback test for SLINK #95

Merged: 2 commits into stnolting:master on Jul 1, 2021
Conversation

@LarsAsplund (Collaborator)

This PR shows how SLINK can be tested from the testbench. It assumes that there is a SW loopback in the CPU, which has yet to be implemented. @stnolting, I could use some help with that. Once that is done, the verification components need to be connected to the CPU instead of directly to each other as is done now. See the TODO in the testbench.

Review comment on sim/neorv32_tb.vhd (outdated, resolved)
@stnolting (Owner)

This is great! Especially the random traffic generation, since that verifies the correct stream handshaking and also the stream handling in software.

Just one question: what is this? (Could you provide a link to the relevant documentation?)

stall_config => new_stall_config(0.05, 1, 10)

Right now, the simple testbench as well as the advanced one do the same thing. This is good for understanding VUnit (this is good for me! 😄) but I think we should actually use the VUnit features to do advanced verification.

So I think we should have a new test program (processor_check_xyz?) only for this testbench setup. Then we could focus more on verification of things like streaming and external bus accesses.

@umarcor (Collaborator) commented Jul 1, 2021

Just one question: what is this? (Could you provide a link to the relevant documentation?)

stall_config => new_stall_config(0.05, 1, 10)

It means "has a 5% probability of generating a stall that lasts between 1 and 10 cycles". See https://github.com/VUnit/vunit/blob/master/vunit/vhdl/verification_components/src/axi_stream_pkg.vhd#L22-L26.

This is the array_axis_vcs example from VUnit with the stall probability of the slave set to 0% or 50%:

[Image: axis_stall waveform comparison]

It is exactly the same test; it's also a loopback (but implemented in hardware instead of in the CPU). You can see that the execution time is 9 us without stalls (probability 0%) and 16 us otherwise.
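For context, this is roughly how such a stall configuration is passed in when the AXI-Stream VC handles are created. A minimal sketch, assuming a 32-bit stream; the handle names are illustrative and not copied from the neorv32 testbench:

library vunit_lib;
use vunit_lib.axi_stream_pkg.all;

-- Master VC: drives the stream and randomly deasserts tvalid. With the settings
-- below it stalls with 5% probability, each stall lasting 1 to 10 clock cycles.
constant tx_axis : axi_stream_master_t := new_axi_stream_master(
  data_length  => 32,
  stall_config => new_stall_config(0.05, 1, 10));

-- Slave VC: consumes the stream and applies the same random back-pressure by
-- deasserting tready.
constant rx_axis : axi_stream_slave_t := new_axi_stream_slave(
  data_length  => 32,
  stall_config => new_stall_config(0.05, 1, 10));

Setting the probability to 0.0 disables stalling entirely, which corresponds to the 0% run in the waveform above.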

@umarcor (Collaborator) commented Jul 1, 2021

Right now, the simple testbench as well as the advanced one do the same thing. This is good for understanding VUnit (this is good for me! 😄) but I think we should actually use the VUnit features to do advanced verification.

So I think we should have a new test program (processor_check_xyz?) only for this testbench setup. Then we could focus more on verification of things like streaming and external bus accesses.

Absolutely agree. That was the purpose of maintaining both a VUnit testbench and a simple one. After this PR is merged, we should use Wishbone and AXI-Lite VCs for testing the external interface and the bridge. Then, we can remove all the interface-related code from the simple testbench.
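Just to sketch what that could look like on the testbench side (a fragment only; the bus handle name, address and data are made up, and the Wishbone or AXI-Lite VC instance still has to be wired to the processor's external bus):

library ieee;
use ieee.std_logic_1164.all;

library vunit_lib;
context vunit_lib.vunit_context;
context vunit_lib.com_context;    -- provides the "net" signal used by the VC calls
use vunit_lib.bus_master_pkg.all;

-- One bus-master handle; the same read_bus/write_bus calls work whether the VC
-- driving the pins is VUnit's Wishbone master or its AXI-Lite master.
constant ext_bus : bus_master_t := new_bus(data_length => 32, address_length => 32);

-- ...and a test process issuing transactions through the handle:
test_proc : process
  variable rdata : std_logic_vector(31 downto 0);
begin
  write_bus(net, ext_bus, x"90000000", x"CAFE1234");
  read_bus(net, ext_bus, x"90000000", rdata);
  check_equal(rdata, std_logic_vector'(x"CAFE1234"), "external bus readback");
  wait;
end process;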

@stnolting (Owner)

It means "has a 5% probability of generating a stall that lasts between 1 and 10 cycles". See https://github.com/VUnit/vunit/blob/master/vunit/vhdl/verification_components/src/axi_stream_pkg.vhd#L22-L26.

Thanks for clearing that up!

After this PR is merged, we should use Wishbone and AXI-Lite VCs for testing the external interface and the bridge. Then, we can remove all the interface-related code from the simple testbench.

👍 😄

stnolting marked this pull request as ready for review on July 1, 2021 at 17:54
stnolting merged commit 9bb3f49 into stnolting:master on Jul 1, 2021
@LarsAsplund (Collaborator, Author)

@stnolting Note that this addition doesn't do anything useful yet. We need a SW test that creates that loopback, and once that is done the verification components need to be connected to the SLINK of the CPU. Right now they are only talking directly to each other.

@stnolting (Owner)

Yeah, I know, and that's OK for now. We need to agree on a concept for how to actually utilize the verification features provided by this testbench from a CPU point of view.

Just some ideas 🤔

  • create something like sw/example/processor_check2 to have one "large" program that handles all the bus/stream-related testing
  • create several simple programs to test only one thing at a time and execute them all in a CI batch

@umarcor (Collaborator) commented Jul 2, 2021

I like the idea of creating several simple programs instead of a large one. That allows some of the tests to fail without breaking all of them. We want to be able to know whether some bug affects all the peripherals/features or just a few of them.

BTW, tests on Windows are failing. I think we forgot to update https://github.com/stnolting/neorv32/blob/master/.github/workflows/Windows.yml#L104-L113 in some of the latest PRs. @stnolting, @LarsAsplund mind having a look at it?

@stnolting (Owner)

I like the idea of creating several simple programs instead of a large one. That allows some of the tests to fail without breaking all of them. We want to be able to know whether some bug affects all the peripherals/features or just a few of them.

Me too! We could keep everything in sw/example/processor_check, create one unified main.c, and select the corresponding test via a compile switch. Or we could have several sub-folders: one for each test.

I would also like to keep running the current processor check with the simple testbench and have the new tests target only the VUnit-based testbench.

BTW, tests on Windows are failing. I think we forgot to update https://github.com/stnolting/neorv32/blob/master/.github/workflows/Windows.yml#L104-L113 in some of the latest PRs. @stnolting, @LarsAsplund mind having a look at it?

I think there is just a flag missing. I will fix that.

@LarsAsplund (Collaborator, Author)

Agreed, better to start with several small tests that verify individual features. Once we have that, we can start thinking about stress tests where we run many things at the same time.

@stnolting (Owner)

I am not sure yet how to separate the tests. Different files? Different folders? Different flags? 🤔
As soon as we have a clear approach, we can start with testing the stream interface in a new feature branch.

@umarcor (Collaborator) commented Jul 4, 2021

@stnolting, I think the point is to have different tests as different main functions in separate files. It's ok if they share headers and some common utils. Then, one step in the job can be used for building all the hex files. Last, the run.py in VUnit can be used for assigning one hex to each run. In order to do that, we should address #35, so we can reuse the same compilation of the RTL sources.

Optionally, we could run multiple VUnit jobs in parallel. However, VUnit can internally parallelize the tests, so I think we won't need more than one job for now. That is, the two cores in the machine are enough to get some time reduction.

@umarcor (Collaborator) commented Jul 4, 2021

With regard to this specific issue, the test is a loopback, so you only need to implement a software function that checks whether something was received and pushes it back. The VUnit testbench can take care of finalising the simulation after all the data has been sent, received and checked. Therefore, the software can run forever. That is, first it executes the fixed tests, and then it goes into an infinite loop waiting for AXI Stream data.

That is not the best solution, but it's the simplest way to have the VUnit VCs do something useful. After that, we can focus on #35 and revisit the software and tests.
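To make the testbench side of that concrete, here is a sketch of a checker process. Names like rx_axis and num_words are illustrative, the data pattern is simplified to an incrementing counter (the actual testbench uses random traffic), and it assumes ieee.numeric_std and the VUnit contexts are in scope:

-- The testbench, not the CPU software, decides when the test is done: once all
-- expected words have come back through the loopback, test_runner_cleanup ends
-- the simulation, so the software side could keep echoing forever.
check_loopback : process
  variable tdata : std_logic_vector(31 downto 0);
  variable tlast : std_logic;
begin
  test_runner_setup(runner, runner_cfg);
  for i in 0 to num_words - 1 loop
    pop_axi_stream(net, rx_axis, tdata, tlast);  -- rx_axis: AXI-Stream slave VC handle
    check_equal(unsigned(tdata), to_unsigned(i, 32), "loopback word " & to_string(i));
  end loop;
  test_runner_cleanup(runner);  -- stops the simulation
end process;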

@stnolting (Owner)

I think the point is to have different tests as different main functions in separate files. It's ok if they share headers and some common utils. Then, one step in the job can be used for building all the hex files. Last, the run.py in VUnit can be used for assigning one hex to each run. In order to do that, we should address #35, so we can reuse the same compilation of the RTL sources.

👍

With regard to this specific issue, the test is a loopback, so you only need to implement a software function that checks whether something was received and pushes it back. The VUnit testbench can take care of finalising the simulation after all the data has been sent, received and checked. Therefore, the software can run forever. That is, first it executes the fixed tests, and then it goes into an infinite loop waiting for AXI Stream data.

Good idea, but the current testbench waits for a final report, which is printed via UART after the main function has returned. So we cannot use an eternal stream echo loop here. Anyway, I am splitting all tests into separate chunks right now. I'm still not sure how to manage all that, but soon we will have a more flexible test program (hopefully 😉), also for the stream echo.

@umarcor (Collaborator) commented Jul 5, 2021

Note that the number of data elements that the VCs will send (and expect to receive) through the loopback is defined in the testbench. Hence, you don't need an infinite software procedure; you can use a for loop with n iterations (IIRC, it was set to 100).

EDIT: https://github.com/stnolting/neorv32/blob/master/sim/neorv32_tb.vhd#L229

@LarsAsplund (Collaborator, Author) commented Jul 5, 2021

We can set that number from the command line to get a single source.
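For instance (the generic name here is made up and the real entity declaration differs), the count could become a top-level generic with a default, used by both the sending and the checking side:

entity neorv32_tb is
  generic (
    runner_cfg       : string;           -- required by VUnit
    num_transactions : natural := 100    -- hypothetical single source for the loopback length
  );
end entity neorv32_tb;

VUnit's run.py can then override the generic per test configuration (e.g. via set_generic), and run.py itself can expose that value as a command-line option, so the count has a single source and changing it needs no VHDL edits.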
