
Adoption of pydoit as the main task management solution #110

Closed
wants to merge 11 commits from the pydoit branch

Conversation

@umarcor (Collaborator) commented Jul 9, 2021

This is a rather disruptive PR that showcases the potential benefits of using a Python-based orchestration/management solution at the project level, instead of only makefiles and/or shell scripts.

Context

This topic was discussed both in the context of NEORV32 and in the wider context of any HDL project including software sources and using both open source and vendor EDA tooling.

In NEORV32, we currently use at least the following entrypoints:

  • .github/generate-job-matrix.py
  • docs/Makefile
  • sw/example/processor_check/check.sh
  • sw/example/processor_check/Makefile
  • sim/ghdl_sim.sh
  • sim/run.py
  • riscv-arch-test/run_riscv_arch_test.sh
  • setups/examples/Makefile
  • setups/osflow/common.mk
  • setups/quartus/**/*.tcl
  • setups/radiant/**/*.rdf
  • setups/vivado/**/*.tcl

I read my own notes from Open Source Verification Bundle: API | Tool again, and I remembered @Fatsie and @ktbarrett dropping some nice words in gitter.im/hdl/community about pydoit. So, I read about the main features:

NO API: Tasks are described by a python dict (can also be easily customized)

Tasks can execute external process (shell commands) or python code

doit command allows you to list and obtain help/documentation for tasks

Traditional build-tools were created mainly to deal with compile/link process of source code. doit was designed to solve a broader range of workflows.

results from a task can be used by another task without resorting to the creation of intermediate files

DAG Visualisation
create task's dependency-graph image using graphviz

doit core features are quite stable. If there is no recent development, it does NOT mean the project is not being maintained... The project has 100% unit-test code coverage.

That might be exactly what we need to put all the entrypoints above under some umbrella. I read the pydoit: Success Stories too:

I did a survey of what DAG-based workflow execution frameworks were out there—restricting my search to python, active maintenance, good documentation etc. Besides doit I evaluated bonobo, Luigi, and airflow. bonobo didn’t fit my needs for dependency-based processing. Luigi and airflow are very nice, but I didn’t have any particular need for distributed workflows and the heavier weight feel of these platforms. My favorite experience was airflow but it didn’t have (obvious) support for re-entrant processing: running a pipeline from an intermediate stage.

I knew that build-system based frameworks would do exactly what I wanted and not hand me too much cruft, and on that note I found doit. It’s worked perfectly for my needs:

  • The order of tasks is derived from dependencies
  • Tasks can be executed in parallel if they don’t depend on each other
  • Plenty of bells and whistles to relieve the pipeline developer from writing boilerplate code (easy task documentation, discovery of the pipeline state, easy process for re-doing tasks)
  • Very light-weight. In this sense doit is the perfect example of a UNIX-style tool: do one thing and do it well. Luigi and airflow are attractive, but they also suffer from kitchen-sink bloat. If you simply want a pythonic alternative to Makefiles or bash scripts, doit is great solution.
  • It’s easy to build up a library of common tasks that can be reused by multiple doit pipelines.

The motivations for developing and using pydoit are very similar to why I wrote dbhi/run.
Moreover, while reading the docs I found more similarities between the implementation of pydoit and how I had conceived a possible solution.

Therefore, I decided to give pydoit a try using NEORV32 as a proof of concept.

Requirements

Python

doit is a Python tool. Therefore, using it for managing project-level tasks implies adding Python as an explicit requirement for "using" NEORV32. For now, it is still possible to use most of the makefiles or shell scripts directly, but the proposal is to slowly replace "all" the plumbing with pythonic alternatives. More on what "pythonic alternative" means below.

Having Python as a requirement is not a big problem in CI, because GitHub Actions environments have it pre-installed, and doit is a Python-only package (it does not rely on any shared library or compiled artifact). It is also rather common to have Python 3 installed on the system nowadays, and it is available through most package managers (apt, dnf, yum, pacman, etc.). Therefore, I would consider this an acceptable requirement, especially given that NEORV32 already uses VUnit for advanced simulation/testing.

Regardless, we should probably add a requirements.txt file and a note to the README, in order to make the requirement clear to users.
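For instance, a minimal requirements.txt might contain just the doit package (the version pin below is illustrative, not taken from this PR):

```
doit>=0.33
```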

ghcr.io

We are currently executing implementation and VUnit simulation tasks inside containers which do have Python, but which don't have doit preinstalled. In this PR, a single Dockerfile is added, which installs doit on top of the container from hdl/containers used for implementation. Currently, that image is built right before being used, in the same workflow. However, I would suggest using ghcr.io/stnolting/neorv32 for that purpose.

The GitHub Container Registry is accessible with the default github.token available in CI, so no additional setup is needed for using it. Moreover, images can be pulled without authentication. Therefore, both users and developers can use them locally or in their own CI. This is desirable, because it provides consistency and makes it easier to find bugs/mismatches by decoupling them from the environment.

Nevertheless, note that we keep some redundancy: the scripts/entrypoints are tested locally too (e.g. on MSYS2). That helps us ensure that the tasks do not depend on the containers, even though the containers can be used.

@stnolting, if you are ok with this approach, I would create a separate PR to contribute a CI workflow where the "development containers" for NEORV32 are built and pushed to ghcr.io/stnolting/neorv32. Having that merged separately will keep this PR slightly cleaner.

Wrapping, CLI and pythonism

pydoit is a very interesting project for anyone evaluating alternatives for a seamless migration from make/shell only to some Python-based solution. I recommend everyone have a look at the documentation: pydoit.org/contents. It has lots of examples, and every feature is introduced very directly.

The three core concepts/features of pydoit can be summarised using the titles of the sections in the Guide:

  • Tasks
  • More on dependencies
  • Command line interface

Tasks

The main concept in pydoit is that tasks can be either a Python function or a shell command: https://pydoit.org/tasks.html#actions. Python functions can be plain functions, or they can be written explicitly against the pydoit API. Similarly, shell commands can be executed raw, or can be partially auto-generated using pydoit's features. That is, pydoit wraps calling commands through Popen and handles the repetitive code for different platforms, while optionally allowing the Popen options to be overridden.
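As a minimal sketch (the task name and commands below are hypothetical, not from this PR), both kinds of action can be mixed in a single task:

```python
# dodo.py — minimal sketch mixing a Python action and a shell command.

def task_greet():
    """Say hello from Python, then from the shell."""

    def greet():
        print("hello from a Python action")

    return {
        "actions": [
            greet,                                # plain Python callable
            "echo 'hello from a shell command'",  # executed through Popen
        ],
        "verbosity": 2,  # show the actions' stdout on the console
    }
```

Running doit greet would execute both actions in order.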

Therefore, the single most important idea to understand is that the main use case for pydoit in the context of HDL projects is to be a wrapper around existing entrypoints. Those can be make, cmake, shell scripts, other Python scripts, nodejs, vendor tools... whatever.

With regard to implementation details, I'm not completely convinced by the NO API design principle. I see the benefits of a dict/string based solution, particularly when composing/manipulating objects by merging/updating them. In that regard, dicts reduce verbosity compared to classes. However, classes and type hints do provide a more robust solution when things go wrong. For instance, it seems possible to define a single command as a string and multiple commands as a list of strings; in certain cases, providing one command as a list of strings works too, but in other cases it fails.
This is not relevant in practice, especially in comparison to make/bash, but it's worth noting for experienced Python developers.

Dependencies

pydoit does optionally support specifying dependencies and targets for each task/action: https://pydoit.org/tasks.html#dependencies-targets. By using files as both dependencies and targets, pydoit can be used as a direct replacement for makefiles. However, that is pointless per se (you could just use a makefile in the first place).
The reason it is useful is the second relevant concept, explained in https://pydoit.org/tasks.html#file-dep-file-dependency:

Different from most build-tools dependencies are on tasks, not on targets. So doit can take advantage of the “execute only if not up-to-date” feature even for tasks that don’t define targets.

That is precisely the main feature that HDL projects are missing in make-like solutions, which require defining dummy targets as a workaround. E.g., we want to say "first do synthesis, then PnR, then generate a bitstream". We do not want to care about the specific filenames involved in each stage. That's something we want to have defined somewhere, and we want meaningful errors if something is wrong, but we do not want to call make my_very_long_and_complex_bitstream_name.bit.
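As a sketch of that idea (task names and commands below are hypothetical, not from this PR), the ordering can be expressed with task_dep, without naming any intermediate file on the command line:

```python
# Hypothetical bitstream pipeline: dependencies are declared between tasks,
# so users run 'doit Bitstream' instead of 'make <very_long_name>.bit'.

def task_Synthesis():
    return {"actions": ["yosys -p 'synth_ice40 -json neorv32.json' neorv32.v"]}

def task_PnR():
    return {
        "actions": ["nextpnr-ice40 --json neorv32.json --asc neorv32.asc"],
        "task_dep": ["Synthesis"],  # run synthesis first; no filenames needed here
    }

def task_Bitstream():
    return {
        "actions": ["icepack neorv32.asc neorv32.bit"],
        "task_dep": ["PnR"],
    }
```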

Tasks in pydoit can save computed values (https://pydoit.org/dependencies.html#saving-computed-values) or can get them from previous tasks (https://pydoit.org/dependencies.html#getargs), which are complementary to having dependency files (or tasks) or artifact files (targets).
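A short sketch of that mechanism (hypothetical task names): a Python action that returns a dict saves those values, and getargs lets a later task receive them as keyword arguments, with no intermediate file involved:

```python
def task_version():
    def compute():
        return {"version": "1.5.7"}  # hypothetical value; might be read from a file

    return {"actions": [compute]}  # dict entries returned by the action are saved


def task_tag():
    def tag(version):
        print("tagging release {}".format(version))

    return {
        "actions": [tag],
        "getargs": {"version": ("version", "version")},  # (source task, key)
        "verbosity": 2,
    }
```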

I did not go deeper into this feature yet, because I wanted this PR to be mostly about wrapping existing entrypoints. However, I think that "redistributing reusable pydoit tasks" is something we might propose to the community in gitter.im/hdl/community. That is: maintain a set of reusable tasks, then allow users to define their own pipelines by composing those tasks, and have doit generate a DAG. Note that I did not yet evaluate the flexibility of pydoit for overriding and dynamically generating the dependencies and, thus, the DAG. However, according to the Success Stories, it's achievable. Neither did I seriously consider whether those reusable tasks would be better maintained by each project or all in the same place. That place would obviously not be NEORV32, but maybe edalize, PyFPGA or some other project could do it (probably wrapping their own interfaces).

CLI

pydoit seems to be conceived so that all tasks are visible and callable through a single CLI entrypoint. That is coherent with using it as the main orchestrator/wrapper around other build/run entrypoints. As a result, there is built-in functionality for specifying the CLI arguments that each task might accept: https://pydoit.org/task_args.html.

It supports short and long argument names, specifying the type, having a choice (enumeration), a help string, a default value, capturing positional arguments, etc. Overall, it's quite complete: it resembles argparse, and the information shown by doit help is helpful.
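For reference, a sketch of such a parameter specification (a hypothetical task; the actual ones used in this PR are shown further below):

```python
def task_Example():
    """Build an example design for a board (hypothetical sketch)."""
    return {
        "actions": ["make BOARD=%(board)s"],  # '%(board)s' is filled from the param
        "params": [
            {
                "name": "board",
                "short": "b",
                "long": "board",
                "type": str,
                "choices": (("Fomu", ""), ("OrangeCrab", "")),  # (value, help) pairs
                "default": "Fomu",
                "help": "Name of the board",
            }
        ],
    }
```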

However, my perception of this area is bittersweet. I am very much conditioned, because I'm a user of spf13/cobra along with spf13/viper. Those provide a very nice solution for handling CLI options, configuration files and environment variables, which are automatically resolved into a single set of parameters for the developer. However, those are written in golang. Other than that, I am used to pyAttributes, a project by @Paebbels for building CLIs through decorators, which is built on top of argparse. Therefore, I would have liked tasks in pydoit not to be public by default, and decorators to be used for specifying the CLI parameters. Nevertheless, I believe this is mostly a syntactic issue. The three features I am concerned about are composability of subcommands, help output and forwarding arguments.

Composability of subcommands

pydoit has a very nice solution for defining multiple tasks and/or sub-tasks in a single function: https://pydoit.org/tasks.html#sub-tasks. It uses the same syntax as a regular simple task, but it is defined with yield and needs a name (and, optionally, a basename) field.
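A minimal sketch of that pattern (the board list is the one used in this PR; the make command is hypothetical):

```python
BOARDS = ["Fomu", "iCESugar", "OrangeCrab", "UPduino"]

def task_ex():
    """Yield one sub-task per board; plain 'doit ex' runs all of them."""
    for board in BOARDS:
        yield {
            "name": board,  # produces sub-tasks 'ex:Fomu', 'ex:iCESugar', ...
            "actions": ["make BOARD={}".format(board)],  # hypothetical action
            "doc": "Build an example design for board {}".format(board),
        }
```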

In this PR, the sub-task feature is used for defining the ex and sim tasks. So, instead of doit Example -b Fomu ... one can execute doit ex:Fomu .... While implementing it, I found two issues:

  • The "base" of sub-tasks will execute all the subtasks by default. So, ex will execute all of ex:Fomu, ex:OrangeCrab, ex:UPduino and ex:iCESugar. I did not find how to override ex with Example, so that executing all of them by mistake is not possible. I think there might be other use cases where overriding the "base" is desirable.
  • I could create one level of sub-tasks, but I did not find how to define further levels of hierarchy. With cobra or pyAttributes, that is natural.

Help output

Related to the previous point, the output of doit list and doit help is limited. It is useful enough for knowing which tasks exist, but it could be significantly improved, compared to cobra or pyAttributes. For instance, when printing the help of ex, it would be desirable to know that it is related to all the ex:* sub-tasks.

Forwarding arguments

As a VUnit user and co-maintainer, I use VUnit scripts as an entrypoint for all my simulations and co-simulation testbenches. Therefore, a major concern I have with edalize and similar solutions is that they have some integration, but they don't honor the user's scripts or CLI commands. As a result, I need to explicitly adapt my standalone VUnit scripts in order to reuse them with those tools for synthesis, implementation, etc.

When using pydoit, it is possible to wrap existing entrypoints almost transparently: https://pydoit.org/task_args.html#arguments. That is used in this PR as doit sim:VUnit -a '--ci-mode -v', instead of path/to/vunit/run.py --ci-mode -v. That works but it is not ideal. Instead, it would be desirable to support -- for separating the args for doit from the args for the task. That's common in many tools and it avoids the issues derived from having to quote arguments inside the string.
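For reference, a sketch of how such an -a parameter can be wired (hypothetical; the actual task in this PR may differ in details):

```python
def task_VUnit():
    """Run the VUnit testbench, forwarding user args as a single string."""
    return {
        "actions": ["python3 sim/run.py %(args)s"],
        "params": [
            {
                "name": "args",
                "short": "a",
                "long": "args",
                "type": str,
                "default": "",
                "help": "Arguments to forward to the VUnit run script",
            }
        ],
    }
```

With this, doit VUnit -a '--ci-mode -v' expands to python3 sim/run.py --ci-mode -v; the quoting issue described above is precisely the need to pack several VUnit options into one string.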

Implementation

The main file where the doit tasks are defined is dodo.py, in the root of the repo. I suggest opening it while reading the following list:

  • .github/generate-job-matrix.py: removed and implemented as a python function, wrapped in task GenerateExamplesJobMatrix.
  • docs/Makefile: wrapped as-is in task Documentation.
    • Task DeployToGitHubPages is added, which includes commands previously written in the Documentation workflow.
  • sw/example/processor_check/check.sh: replaced with task BuildAndInstallSoftwareFrameworkTests which calls the same 4 make commands from a list of actions.
  • sw/example/processor_check/Makefile: task BuildAndInstallCheckSoftware is added for calling this makefile, instead of calling the command written in the workflow YAML files.
    • Task SetupRISCVGCC is added, instead of repeating those commands in several workflows/jobs.
  • sim/ghdl_sim.sh: wrapped as-is in sub-task sim:Simple.
  • sim/run.py: wrapped in sub-task sim:VUnit, forwarding args through -a.
  • riscv-arch-test/run_riscv_arch_test.sh: wrapped as-is in task RunRISCVArchitectureTests.
  • setups/examples/Makefile: removed and replaced by task Example and sub-tasks ex:*. More on this below.
  • setups/osflow/common.mk: untouched, wrapped in function Run (not a task).
  • setups/quartus/**/*.tcl: untouched.
  • setups/radiant/**/*.rdf: untouched.
  • setups/vivado/**/*.tcl: untouched.

This is how the CLI looks:

# doit list --all
BuildAndInstallCheckSoftware            Build and install Processor Check software
BuildAndInstallSoftwareFrameworkTests   Build all sw/example/*; install bootloader and processor check
DeployToGitHubPages                     Create a clean branch in subdir 'public' and push to branch 'gh-pages'
Documentation                           Run a target in subdir 'doc'
Example                                 Build an example design for a board
GenerateExamplesJobMatrix               Generate JSON of the examples, and print it as 'set-output' (for CI)
RunRISCVArchitectureTests               Run RISC-V Architecture Tests
SetupRISCVGCC                           Download and extract stnolting/riscv-gcc-prebuilt to subdir 'riscv'
ex                                      Build an example design for all the supported boards
ex:Fomu                                 Build an example design for board Fomu
ex:OrangeCrab                           Build an example design for board OrangeCrab
ex:UPduino                              Build an example design for board UPduino
ex:iCESugar                             Build an example design for board iCESugar
sim
sim:Simple                              Run simple testbench with GHDL
sim:VUnit                               Run VUnit testbench

# doit help Example
Example  Build an example design for a board
  -b ARG, --board=ARG       Name of the board 
                            choices: Fomu, OrangeCrab, UPduino, iCESugar (config: board)
  -d ARG, --design=ARG      Name of the design  (config: design)

# doit help ex:Fomu
ex:Fomu  Build an example design for board Fomu
  -d ARG, --design=ARG      Name of the design  (config: design)

Hence, except for setups/examples/Makefile, almost all other existing entrypoints were wrapped as-is. As commented above, the purpose of this PR is not to reimplement all of those entrypoints, but to showcase the flexibility of pydoit for preserving some, slightly improving others and completely rewriting a few.

As discussed in #96, my main motivation for trying this with NEORV32 now was the complexity of setups/examples/Makefile and the difficulty for potential contributors to use it or to try adding support for new boards. Therefore, I spent some time addressing this in a more pythonic way. I created the subdir tasks/:

  • project.py: three Python classes (Project, Filesets and Design), which contain the HDL sources for each design and board. This is a proof of concept. Looking into the future, I would expect this file to be blended with pyCAPI, pyIPCMI, FuseSoC, PyFPGA or any other tool which already provides "fileset management and orchestration" features. For now, it allows users who want to add some design or support some new board in NEORV32 to mostly focus on this file only.
  • examples.py: three helper functions, which behave as a middleware between the Project class and the pydoit features/API.
    • Run: a wrapper around setups/osflow/common.mk. This illustrates that users might still decide to call the makefiles from osflow, ignoring the Python helpers. It also explains why the now removed setups/examples/Makefile had several recursive calls for setting all the parameters.
    • Example: while Run is expected to be generic for any design using NEORV32 and osflow, function Example is a wrapper around it, which generates most of the arguments from the data in the Project class, given the names of the target board and example design.
    • GenerateExamplesJobMatrix: same as the now removed .github/generate-job-matrix.py. The content it returns is hardcoded for now. However, it can be enhanced to generate the matrix dynamically from the PRJ object.

The pydoit tasks and sub-tasks corresponding to those sources are defined in task_Example of dodo.py. There are several yield commands, all of them wrapping function Example from tasks/examples.py. The doit Example expects arguments --board and --design, while doit ex:BOARDNAME expects --design only. In both cases, positional arguments are the actual osflow make targets (e.g. doit ex:OrangeCrab -d MinimalBoot clean bit svf).

Therefore, compared to the other entrypoints wrapped as-is, task_Example and tasks/*.py illustrate how to remove one makefile (setups/examples/Makefile) and go all-in with using Python while still using makefiles under the hood.

Pythonism

Although I'm not the most proficient Python coder, I tried applying some good practices such as:

  • Using Path for dealing with locations and filenames (doit does indeed support strings and/or Paths).
  • Using type hints in functions.
  • Adding docstrings to the functions.
  • Checking for errors and raising exceptions.
  • Using Python classes.
  • Using black for formatting (with line-width set to 120).

Together with the string/struct/dictionary manipulation inherent to Python, I hope this makes the point about the benefits that Python-based plumbing can provide.

Conclusion

Overall, my perception is that the Python requirement itself is the only relevant concern for adopting pydoit in the context of NEORV32 or other HDL projects involving software and open source EDA tooling. Other than that, not all the features are ideal, but it does provide a seamless solution for wrapping all the existing entrypoints non-intrusively and without giving away any capability. In fact, just wrapping entrypoints as-is does already provide benefits such as doit list, doit help, allowing moving makefiles/scripts without users noticing, type checking, error handling...
pydoit feels especially well suited for potentially using FuseSoC/edalize, PyFPGA, tsfpga, Xeda... in this project with minimal burden. While adopting any of those tools as the main solution would constrain the supported backends and the types of tasks they can handle, pydoit allows either very lightweight or tightly coupled integrations.

With regard to remote execution and/or orchestration/distribution of multiple workers in a pool, that seems not to be in the scope of pydoit. That is better supported by Apache Airflow or Google's Cloud Composer. However, that's not in the scope of NEORV32 either. Therefore, although evaluating those might be interesting for the HDL community, we will leave it for another day 😄.

By the same token, NetworkX might provide better DAG analysis, sorting and reduction than pydoit. That might allow more fine-grained control of the tasks to be executed, similar to the >FNODE, FNODE> or >FNODE> syntax in dbhi/run (implemented using gonum/graph). However, since pydoit can generate graphviz diagrams, it should be possible to import those into NetworkX for analysis. Yet, this is again interesting for the HDL community, but out of scope for NEORV32, for now.
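For instance (a sketch, assuming the task graph has been exported to tasks.dot, e.g. with the doit-graph plugin, and that pydot is installed for reading it):

```python
import networkx as nx

# Load the task DAG exported by doit and analyse it with NetworkX.
g = nx.DiGraph(nx.nx_pydot.read_dot("tasks.dot"))

print(nx.is_directed_acyclic_graph(g))   # sanity check: it should be a DAG
print(list(nx.topological_sort(g)))      # one valid execution order
print(nx.transitive_reduction(g).edges)  # minimal set of dependency edges
```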

Future work

If this proposal is accepted and pydoit is used in NEORV32, these are some of the possible enhancements I would like to try (not necessarily in order, and not necessarily in the short term):

  • Extend the Example and ex:* tasks for users submoduling or forking NEORV32, to add some external peripherals and then build one of the Examples with them.
  • Remove setups/osflow/fileset.mk and replace it with hardcoding the same data in tasks/project.py or, maybe, using a *.core along with reusing it in sim/run.py.
  • Create tasks GHDLAnalyse, YosysSynth, NextpnrImpl and Project*Pack, and remove setups/osflow/synthesis.mk, setups/osflow/PnR_Bit.mk and setups/osflow/tools.mk.
    • Then, consider replacing setups/osflow/boards/* and setups/osflow/constraints/* with submoduling hdl/constraints (after adding some minimal python plumbing there).
    • Then, remove common.mk.
  • Wrap batch calls to vendor tools (quartus, radiant and vivado) as pydoit tasks.
  • Remove sim/ghdl_sim.sh and replace it with a sequence of pydoit actions (or a sequence of tasks if GHDLImport, GHDLMake and GHDLSim are implemented).
  • Replace docs/Makefile with a pydoit task.
  • Add other tasks, such as:
    • "Generate a diagram with Yosys and netlistsvg"
    • "Use GHDLSynth for generating VHDL or Verilog output and then use it in Quartus/Radiant/Vivado"
    • "Formal verification with PSL and SymbiYosys"

@stnolting (Owner) commented:

Wow! This is amazing! ❤️
I will need some time to step through this 😅

@umarcor (Collaborator, Author) commented Jul 9, 2021

> I will need some time to step through this 😅

Not surprisingly, I believe that! 😄

Feel free to go step by step, drop random comments without analysing everything, or ask seemingly "stupid" questions. The modifications here are easy and obvious on their own, but at the same time it can be difficult to see where each one is coming from. Moreover, handling #109 first, and maybe the containers, will significantly simplify this PR.

@stnolting added the 'enhancement' label on Jul 9, 2021
@umarcor force-pushed the pydoit branch 3 times, most recently from a700c21 to 988ab8f, on July 15, 2021 06:33
@umarcor (Collaborator, Author) commented Jul 15, 2021

I rebased on top of master, after the recent enhancements.

CI is failing in this PR because several steps expect pydoit to be available in the containers. That will be fixed as soon as this PR is merged and the 'Containers' job is updated once. Alternatively, the first two commits can be cherry-picked first.

umarcor added a commit to umarcor/osvb that referenced this pull request Jul 16, 2021
@stnolting (Owner) commented:

Thank you so much!
I'm still digging through this... 😅

However, I would like to isolate "core" and "setups" into separate repositories (I think you have proposed that already some time ago 😉). So this PR would be (mainly) something for neorv32-setups, right?

@umarcor (Collaborator, Author) commented Jul 21, 2021

> Thank you so much!
> I'm still digging through this... 😅

Note that each of the commits in this PR is usable. If you want, we can discuss and merge them one by one, rather than picking everything at once.

> However, I would like to isolate "core" and "setups" into separate repositories (I think you have proposed that already some time ago 😉). So this PR would be (mainly) something for neorv32-setups, right?

The usage of doit is independent of having sources in one or two repositories.
doit can be used as an alternative to makefiles, particularly when you need to deal with structs (group multiple parameters together). The main benefit is providing a single entrypoint (doit list --all) for users to see all the tasks that they might execute.
In this PR, doit is used for replacing scripts/makefiles that would fall in both repos if they were split. task_SetupRISCVGCC, task_BuildAndInstallCheckSoftware, task_BuildAndInstallSoftwareFrameworkTests, task_RunRISCVArchitectureTests, task_Documentation and task_DeployToGitHubPages would belong to "core", regardless of "setups" being moved.
For instance, task_BuildAndInstallSoftwareFrameworkTests is a replacement of https://github.com/stnolting/neorv32/blob/master/sw/example/processor_check/check.sh.

In most tasks, I tried not to make substantial modifications, so it's easier for you to see the equivalency. However, if/when this is merged, I would like to do further enhancements, such as dissolving docs/Makefile into doit; the pdf, html, ug-pdf and ug-html targets should be a single task with arguments. By the same token, it would be handy to show the list of software examples to the user:

$ doit list --all Doc
Doc:pdf
Doc:html
Doc:ug-pdf
Doc:ug-html
Doc:doxygen

$ doit list --all SWExample
SWExample:blink
SWExample:coremark
SWExample:freeRTOS
SWExample:pwm
...

@stnolting (Owner) commented:

> Note that each of the commits in this PR is usable. If you want, we can discuss and merge them one by one, rather than picking everything at once.

As far as I can see, there is nothing substantial deleted in this PR (except for the osflow makefile), so doit is just a unified wrapper for now, right? I am ok with merging all of this PR.

> In most tasks, I tried not to make substantial modifications, so it's easier for you to see the equivalency.

Thanks 😅 👍

> However, if/when this is merged, I would like to do further enhancements, such as dissolving docs/Makefile into doit; the pdf, html, ug-pdf and ug-html targets should be a single task with arguments. By the same token, it would be handy to show the list of software examples to the user.

I have checked out the help target and that would be very helpful, indeed.

> CI is failing in this PR because several steps expect pydoit to be available in the containers. That will be fixed as soon as this PR is merged and the 'Containers' job is updated once. Alternatively, the first two commits can be cherry-picked first.

That's ok. As mentioned above, I am fine to merge all at once. I need to get familiar with all the new features anyway 😉

Thanks again for putting so much effort in this! 👍 ❤️

@ktbarrett left a comment:

I'm glad to see that you haven't used environment setup tasks. Should sim:VUnit and sim:Simple really be subtasks? I'm not so sure. The idea is that doit sim will run all subtasks. Does that make sense here, since they seem (from what I can tell) to be the same test in two different frameworks?

Overall it looks good. Great job @umarcor!

Review comment on dodo.py (outdated), lines 88 to 89:
# FIXME: It should be possible to use '--' for separating the args to be passed raw to the action, instead of
# requiring a param and wrapping all the args in a single string

@ktbarrett:

If you wrapped doit into a custom script to pull out trailing args and forward everything to DoItMain this is easily achievable (and I've seen it done before). All tasks would have to take that "extra_args" param.
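A sketch of that suggestion (file and variable names are hypothetical, though EXTRA_ARGS matches the name mentioned in the follow-up commit): split sys.argv at the first --, stash the trailing args where tasks can read them, and hand the rest to DoitMain:

```python
#!/usr/bin/env python3
# run.py — hypothetical wrapper around doit implementing '--' forwarding.
import sys

from doit.cmd_base import ModuleTaskLoader
from doit.doit_cmd import DoitMain

import dodo  # the module defining the tasks

if "--" in sys.argv:
    idx = sys.argv.index("--")
    # Tasks read this module-level variable when building their actions.
    dodo.EXTRA_ARGS = sys.argv[idx + 1 :]
    args = sys.argv[1:idx]
else:
    args = sys.argv[1:]

sys.exit(DoitMain(ModuleTaskLoader(dodo)).run(args))
```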

@umarcor (Collaborator, Author) replied:

Thanks! I guess that extra_args might be handled as a list and not just as a string?

@umarcor (Collaborator, Author):

Based on your hint and on https://github.com/pydoit/doit/blob/master/doit/api.py, I implemented umarcor@558ca09. That is not perfect because EXTRA_ARGS is passed as a list to the single string shell commands. However, that is a different issue. Handling -- does work as you suggested. Thanks!

design: str,
top: str,
id: str,
board_srcs: list,

@ktbarrett:

List of what? List[str] maybe.

@umarcor (Collaborator, Author):

Yeah... I was too lazy to import the typing module, so I just wrote dumb type hints as a placeholder.
The evolution of this source will depend on the timing of the pyIPCMI/pyCLIAbstraction split. This is the reason I want to help implement GHDLAnalyze, GHDLSynth, YosysSynth and NextpnrImpl in that stack.
It will also depend on #131. If this repo is split, half of these pydoit targets would belong to the setups repo.

Review comment on lines +4 to +8 (tasks/project.py):
class Design:
def __init__(self, name: str, vhdl: list, verilog: list = None):
self.Name = name
self.VHDL = vhdl
self.Verilog = verilog

@ktbarrett:

This could be a dataclass.
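A sketch of that suggestion, mirroring the fields above (the @dataclass decorator generates the __init__ automatically):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Design:
    Name: str
    VHDL: List[str]
    Verilog: Optional[List[str]] = None
```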

@umarcor (Collaborator, Author):

Yes. However, I do not want to implement yet another dataclass myself. In this PR, I tried to implement very few and rather obvious classes, so that non-Python users can most easily understand the benefits of using a language that supports object-oriented programming, rather than bash/makefiles.
In practice, Design, Fileset and Project should be imported from the pyEDAProject split from pyIPCMI, from pyCAPI, from edalize, from PyFPGA... that is, from anywhere, rather than reinventing the wheel (again).

@ktbarrett:

Some common models for EDA tool flows are a good idea. I was investigating something similar for cocotb.

Review comment on dodo.py (outdated), lines 26 to 49:
yield {
"basename": "ex",
"name": None,
"doc": "Build an example design for all the supported boards",
}
for board in BOARDS:
yield {
"basename": "ex",
"name": board,
"actions": [CmdAction((Example, [], {"board": board}))],
"doc": "Build an example design for board {}".format(board),
"uptodate": [False],
"pos_arg": "posargs",
"params": [
{
"name": "design",
"short": "d",
"long": "design",
"type": str,
"default": environ.get("DESIGN", "MinimalBoot"),
"help": "Name of the design",
},
],
}

@ktbarrett:

Is there a benefit to using basename instead of moving ex into task_ex?

@umarcor (Collaborator, Author):

As commented below, I was experimenting with the flexibility and constraints of pydoit tasks and subtasks.

I wanted the body of basename 'Example' to be the one in basename 'ex'. That is, I wanted to override the default behaviour of "execute all the subtasks". That's why there are three yields in the same task_Example.

In practice, you are correct: it is equivalent to move the 'ex' task and its subtasks to a different function, avoiding two of the three yields.

@ktbarrett left a comment:

whoops, double posted...

@umarcor (Collaborator, Author) commented Aug 1, 2021

> Should sim:VUnit and sim:Simple really be subtasks? I'm not so sure. The idea is that doit sim will run all subtasks. Does that make sense here, since they seem (from what I can tell) to be the same test in two different frameworks?

That comes from my testing/learning of how pydoit works (and how it is expected to be used). In the Example command, I wanted the root/base to have a different behaviour than "execute all the subtasks", so that ex:Board would be equivalent to ex -b Board. However, that seems not to be possible, so I implemented Example -b Board, and ex builds "an example for all/each of the boards".

Conversely, in the sim task I wanted to test how to disable the root/base and show the subtasks only. As you say, it does not make much sense to run sim:VUnit and sim:Simple at the same time. However, I was hoping we could have:

  • sim (hidden)
  • sim:VUnit -- vunit_args
  • sim:Simple: run all simple tests
    • sim:Simple:testname: run one simple test (i.e. one specific software).
    • sim:Simple:anothertestname
    • ...

@ktbarrett, do you know if pydoit supports more than one level of subtasks? I could not find any example about it.

@ktbarrett commented:
@umarcor pydoit does not support more than one level of sub-tasks: pydoit/doit#170 (@leftink is a coworker).

To me, the idea of sub-tasks is for things like regressions, where there are multiple tests and you might want to run one or all of them. Of course, VUnit handles that for you already, so it may not be worthwhile. I'm not sure about the capabilities of VUnit, but if it can collect and return the test list, you could dynamically create sub-tasks based on that. That fits, to me.

@umarcor (Collaborator, Author) commented Aug 1, 2021

> @umarcor pydoit does not support more than one level of sub-tasks: pydoit/doit#170 (@leftink is a coworker).

Thanks!

> To me, the idea of sub-tasks is for things like regressions, where there are multiple tests and you might want to run one or all of them. Of course, VUnit handles that for you already, so it may not be worthwhile. I'm not sure about the capabilities of VUnit, but if it can collect and return the test list, you could dynamically create sub-tasks based on that. That fits, to me.

Agreed. VUnit does have a runner, and it implements those functionalities internally; we don't even need to wrap the VUnit entrypoint in pydoit. In fact, that's why I wanted to support --. I just wrapped it for consistency (having all the tasks in the project listed with doit list --all).

However, as @leftink commented, I find there are other use cases where having multiple levels of hierarchy makes sense. For instance, the riscv-arch-tests have multiple test suites and each test suite has multiple tests. Similarly, in GHDL we have several levels in the test suites. Overall, it's interesting to know what grouping patterns are supported by pydoit.

@stnolting (Owner) commented:

I am really looking forward to merging this. 👍
We should wait until #137 is resolved, then rebase and merge. It shouldn't interfere with the "classic" setup using the provided script, right? So it would be ok if certain parts are still WIP.

@umarcor (Collaborator, Author) commented Sep 18, 2021

So, this PR is now all green. However, instead of merging it all at once, I'm thinking it might be desirable to handle it one commit/change at a time. @stnolting, if you agree I can start with that procedure.

@stnolting (Owner) commented:

Great to hear! I am really looking forward to this.

> @stnolting, if you agree I can start with that procedure.

That would be nice 😉
