add draft ros-security feedback #1
base: master
## [ROS-Security] Feedback to REP-2004 Package quality categories
### General remarks
- How does a user consume the fact that a given package is in a given category without undue burden? Forcing the user to check a list maintained on a wiki requires them to have 100% knowledge of every dependency of every package that they may have just `apt install`ed. This could (and should) be scripted, but that still puts the onus of checking on the user. Could something similar to Ubuntu's repository components (main, restricted, universe…) be used? Users would then actually need to opt in before installing software of a given quality level.
> *(mikaelarguedas marked this conversation as resolved.)*
>
> **Review comment:** I already kinda raised that point on my own, so let's rephrase a little, assuming we like the addition above (or something like it):
>
> Suggested change
- Lack of granularity in 'quality levels'.
  While having a few quality levels is great for getting a quick overview of a package's quality, it may also be misleading by masking the granularity of reality. A package that excels in all metrics but performs miserably in a single one (for any reason) may be ranked in one of the lower grades, thus discouraging its use, when in reality the package may only need a little external help to get a full score.
  An example of this is a package that is not available on one of the Tier 1 platforms and thus cannot qualify for a quality level below 4, despite being production ready and complying with every other criterion for level 1.
  We think adding a full 'report card' could be useful to assess how a package fails to meet the criteria for a higher quality level. For this 'report card' to be useful, it should be quickly and easily readable. We could consider a 'spider chart' (or [radar chart, as per Wikipedia](https://en.wikipedia.org/wiki/Radar_chart)), or a table like [this one](https://github.com/ros-infrastructure/rep/blob/c8bd456f531acc4863d0bb8888c28b10f9271492/rep-2004.rst#quality-level-comparison-chart).
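The scripted check suggested in the first remark could look roughly like this. This is only a sketch under stated assumptions: the quality-level registry (here a hard-coded dict with made-up package names and levels) would in practice be pulled from wherever the list is maintained, and the dependency list would come from the package manager.

```python
# Hypothetical quality-level registry; in practice this would be pulled
# from wherever the per-package quality levels are published.
QUALITY_LEVELS = {
    "rclcpp": 1,
    "rclpy": 2,
    "example_interfaces": 3,
    "some_experimental_pkg": 5,
}


def check_dependencies(packages, max_allowed_level=2):
    """Return the dependencies whose quality level is worse (numerically
    higher) than allowed, or unknown (not listed in the registry)."""
    offenders = {}
    for pkg in packages:
        level = QUALITY_LEVELS.get(pkg)  # None means unlisted/unknown
        if level is None or level > max_allowed_level:
            offenders[pkg] = level
    return offenders


# Example: warn about everything worse than quality level 2.
for pkg, level in check_dependencies(
        ["rclcpp", "some_experimental_pkg", "unlisted_pkg"]).items():
    print(f"WARNING: {pkg} is quality level {level if level else 'unknown'}")
```

Even with such a script, something like Ubuntu's repository components would move the opt-in decision to install time rather than relying on users remembering to run it.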
### Security-specific context and feedback
#### Quality Levels and the Vulnerability Disclosure Policy
The VDP tells the public we'd like a flat 90-day grace period between when we receive a vulnerability report and when it's made public. In the interest of keeping the VDP simple, I recommend keeping code quality out of this public-facing document. We can--and should--strive to do better for high-quality code, but I don't believe that needs to be part of this document.
> **Review comment:** This section doesn't directly involve REP 2004, right? I mean, no one reading this on the PR will do anything as a result, so is it useful information to include in our feedback?
#### Quality Levels and Vulnerability Remediation
Internally, however, we should set a higher standard for higher quality code.
Level 1 and 2 code will be widely deployed in the field. Fixes for this code should be made available quickly to protect production robots.
Level 3 and some level 4 code may not be deployed on robots, but will be used daily by firms that build robots. High-risk vulnerabilities should be handled quickly to protect development environments.
CVSS is the de facto standard for risk-assessing vulnerabilities. Although flawed, it's better than trying to create a new standard.
> **Review comment:** Some improvements on top of CVSS were proposed by our group a while ago. They built on top of CVSS 3.0. See https://arxiv.org/pdf/1807.10357.pdf for more context (open-source implementation of the scoring mechanism here).
>
> Suggested change

> **Review comment:** Suggested change

> **Review comment:** @vmayoral, really appreciate the work you've done with RVSS, but I'm not sure it should be referenced here--in fact, this probably should not even admit CVSS is flawed, because it begs for a solution. This doc and the VDP need a triage workflow, which is where I believe this should be captured.

> **Review comment:** Hello @SidFaber. Well, CVSS is flawed for robotics (though I believe your suggestion on relaxing the wording is fair); we should leave some reference to that if we want to give fair feedback.
>
> I'm clearly biased here, but could you please elaborate on why you think it's not appropriate? Without further argumentation, I respectfully disagree. RVSS was created exactly for this purpose, for robotics, and to give FIRST some input on what we need. Though many of the metrics are (for sure) improvable, we should favour the discussion on better metrics. My team and I put effort into opening this up precisely to involve others in the importance of flaw severity. In addition, it feels we're going in that direction by proposing disclosures according to severity categories. Also, kindly note that both the article and the source code of RVSS are open for anyone to tinker with and/or improve; it should (and hopefully will) evolve with this working group over time. CVSS is clearly being favoured in the text (and should be!), so I'd insist we consider these few bits in addition. Can we maybe reach a compromise by emphasizing CVSS more? Let me know if I can help by adding something else to my proposal.

> **Review comment:** Because the effort needs to succeed. The health care industry has tried something similar through MITRE and the FDA. The Industrial Control Systems community created IVSS. Neither has gained significant adoption because they're not mandated. Organizations remain entrenched in CVSS not because it's great, but because they have no choice--for instance, it's written into PCI and it's required for US Government agencies. Let's use the RVSS work to drive more widespread change and partner with other communities like healthcare and ICS.

> **Review comment:** Absolutely, I don't think we have a choice. We'll need to use CVSS for the foreseeable future, particularly as robots become tightly integrated with corporate networks. We can use this doc and the REP to start making the broader ROS community familiar with vulnerability handling. That's why I didn't want the first introduction to CVSS to be a negative impression. Here's something a bit stronger, feel free to add:
>
> Suggested change
>
> At the same time, I think it's wise for the Security WG to use RVSS to influence broader change in how vulnerabilities are prioritized and patched.

> **Review comment:** Happy to hear that!
>
> Interesting thoughts. I agree that it takes lots of effort to influence FIRST (I have certainly done my share of bizdev with them already); however, for that very reason we should inform the ROS community that we're working on improvements on top of it, RVSS being a first iteration. I do not agree that we should remove RVSS from our original recommendations completely. I do agree, though, that CVSS needs to be prioritized to favour compatibility and interoperability with other security communities. From your text above, we seem to agree that RVSS should be used in robotics. Let's please reflect it that way, encouraging the robotics community to be aware of the limitations of CVSS and the need to take additional matters into account (e.g. safety, time-until-mitigation, etc.).

> **Review comment:** I've gone ahead and opened a ticket for RVSS's inclusion in our "supported projects": ros-security/community#5

> **Review comment:** That's a good discussion. As one of the authors of the original paper and idea, I'd be happy to be personally involved in leveraging the community here. A very first round could be requesting feedback from the roboticist community and having an agenda/discussion on potential improvements. This effort should lead us to an RVSS version 2.0, which is precisely how CVSS historically evolved (it is now in version 3): covering progressively more aspects, becoming better and more consistent, and serving as a widely used generalist scoring system for all (though it is highly criticizable from each domain). RVSS should be maintained as a robotics contextualization of CVSS, IMO.

> **Review comment:** We're trying to provide actionable feedback here using industry standards. Today, that's CVSS.

> **Review comment:** I agree, the edit proposed in @SidFaber's earliest comment seems the best path forward for now.

> **Review comment:** Hi @kyrofa, I'm sorry, but I can't help getting the feeling that you are again blocking our contributions. We got the same feeling with the VDP, where much of our input has been discarded and which currently reflects a vendor-biased view (yours). Very problematic IMHO, and we will need to deal with this a posteriori. Not the best way to start, IMHO. CVSS is not the only industry standard; in fact, it's not even a standard. Moreover, speaking of "CVSS" by itself isn't enough. Which version? Why, and what are the problems behind it? Have you/we considered this? Your proposal is to discard prior work that tries to tackle those problems and reasons about them.
>
> And why exactly isn't RVSS actionable? It's been researched and adapted from CVSS 3.0 into robotics by experienced roboticists and security engineers who together tackled the problem landscape and proposed a better scoring system. It's fully open, actionable and ready for use (it's actually already been put to use!). Moreover, there's research showing how CVSS 3.0 provides fun output. If this is to represent the complete Security WG, we kindly ask that a reference to RVSS be included. Very happy to play with the wording if that's your concern. If our input isn't being considered and you plan on resolving this without taking us into consideration, I'm more than happy to refrain from similar contributions in the future, disengage and head to the source directly.

> **Review comment:** I'm not trying to block anything, and I'm not trying to provide feedback regarding RVSS either. I'm merely talking about the purpose of what we're writing here. The feedback we're putting together is less useful when we immediately undermine it in the following phrase, either by saying that the thing we're suggesting isn't ideal, or by mentioning various alternatives. Nothing is perfect, but it seems no one is actually arguing against using CVSS in the table below, so I suggest we simply provide that recommendation.

> **Review comment:** @kyrofa, no, we're not doing that. I reckon that's what we're trying to do, though, and I've already committed my bits in that direction: first by providing input directly on that REP on quality-related aspects (influenced by security practices), and second by reviewing the PR @mikaelarguedas put together and pointing out more security-related matters. Security-wise (which is what this PR is about), insisting on discarding improvements in an area where members of the Security WG have already dedicated resources and opened improvements is by no means undermining our feedback. We disagree on this, and I understand your comment as a hurdle for collaboration. If, as @mikaelarguedas says, we decide we want to align better with REP-2004 and not focus on giving a security viewpoint but rather on what that REP focuses on and demands, we should probably proceed in the direction @mikaelarguedas suggests. That said, and without disagreeing with @mikaelarguedas, I would encourage all of us to consider that we do represent the Security WG and should try to provide a security viewpoint. There's an inherent connection between quality and security, especially where testing is concerned. My line of thinking is best understood through the following text I'm writing at the moment, which touches on this thin line between quality and security:
#### Remediation Timeline
Here's an initial proposal on responsiveness based on the principles above. This would be a commitment by the ROS 2 developer community.
Triage happens within 7 calendar days (one week) for all reports; triage assigns a CVSS score and a maintainer to provide a fix.
The maintainer is expected to provide a fix according to this timeline; the maintainer may also change the CVSS score as they dig into the issue:
|                   | Critical/High Risk<br>CVSS 7.0+ | Medium Risk<br>CVSS 4.0-6.9 | Low Risk<br>CVSS ≤ 3.9 |
| ----------------- | ------------------------------- | --------------------------- | ---------------------- |
| Quality Level 1   | 7 days                          | 30 days                     | No timeline, use issue |
| Quality Level 2   | 7 days                          | 30 days                     | No timeline, use issue |
| Quality Level 3   | 30 days                         | No timeline, use issue      | No timeline, use issue |
| Quality Level 4-5 | 30 days                         | No timeline, use issue      | No timeline, use issue |
> **Review comment:** I doubt levels 4-5 would ever see a fix, and there's no lower category to fall to, which means specifying this doesn't really mean anything. Should we remove it?
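The proposed timeline can be expressed as a small lookup function. This is only a sketch: the function name and the `None` convention for "no timeline, use issue" are illustrative, and the score bands mirror the table above.

```python
def remediation_deadline_days(quality_level: int, cvss_score: float):
    """Return the proposed fix deadline in days for a vulnerability,
    or None when no timeline applies (file a regular issue instead).

    Bands mirror the table above: Critical/High is CVSS >= 7.0,
    Medium is 4.0-6.9, and Low is everything below 4.0.
    """
    if not 1 <= quality_level <= 5:
        raise ValueError("quality level must be between 1 and 5")
    if not 0.0 <= cvss_score <= 10.0:
        raise ValueError("CVSS score must be between 0.0 and 10.0")

    if cvss_score >= 7.0:  # Critical/High risk
        return 7 if quality_level <= 2 else 30
    if cvss_score >= 4.0 and quality_level <= 2:  # Medium risk, levels 1-2 only
        return 30
    return None  # Low risk, or medium risk at levels 3-5
```

A triage bot could use such a helper to automatically label incoming reports with their due date.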
Once the maintainer has approved the code update, the code is built and distributed according to the normal timeline (the next sync for that ROS distribution).
## Suggestions outside of REP but linked to it
- Consider providing tooling to help people evaluate / generate the quality level and the files associated with it (this could range from a simple list of Y/N questions to actually pulling info from all the dependencies, etc.).
> **Review comment:** What exactly do you have in mind here, something like https://github.com/aliasrobotics/RSF particularized for ROS software development? We should probably be a bit more specific.

> **Review comment:** We were (@artivis to confirm, as it was his suggestion) hoping to reduce the amount of work needed to achieve the quality declaration expected for various packages. While having methodologies similar to RSF tailored to ROS is definitely a great thing in the long term, it seems hard to fit into this REP, which is (purposefully) vague.

> **Review comment:** The overall idea is to provide small tools to help both a developer estimate their package's quality level and package consumers assert that the claimed quality level is true. The former could be a simple CLI-based y/n form, while the latter would probably be more difficult to set up, given the wide range of stuff covered by the quality level...

> **Review comment:** I don't think we have to; it won't make it into the REP (hence the "outside of REP" section). This is just to spark discussion on what type of tooling could make this REP easier to adopt.
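The "simple CLI-based y/n form" mentioned above could be sketched as follows. The questions and the level-assignment rule are placeholders for illustration, not the actual REP-2004 criteria.

```python
# Hypothetical checklist; the real criteria would come from REP-2004.
QUESTIONS = [
    "Does the package have a declared license?",
    "Is there a documented public API?",
    "Is code coverage tracked above the agreed threshold?",
    "Is CI run on all Tier 1 platforms?",
    "Is there a versioning and change-control policy?",
]


def estimate_quality_level(answers):
    """Map yes/no answers to a rough quality level (1 best, 5 worst).

    Illustrative rule: each unmet criterion drops the package one level.
    """
    misses = sum(1 for answer in answers if not answer)
    return min(1 + misses, 5)


def run_form():
    """Interactively ask each question and print the estimated level."""
    answers = [input(f"{q} [y/n] ").strip().lower() == "y" for q in QUESTIONS]
    print(f"Estimated quality level: {estimate_quality_level(answers)}")
```

The consumer-side counterpart (asserting that a claimed level is true) is, as noted above, harder: it would need to verify each criterion against the package's actual artifacts rather than trust self-reported answers.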
- How does this connect with REP-2005? How would one go about filing vulnerabilities in, e.g., Fast-RTPS?
- Provide recommendations on linters checking for quality / security issues (see https://github.com/ros-infrastructure/rep/pull/218#discussion_r370583550) (TODO by Security WG)