Tasks: Add support for task context switching #50
Conversation
The current X86Regs structure is only used by exception handlers and interrupt routines and as such contains the interrupt routine stack frame within the structure. This patch separates that out to break registers and other context into their respective categories to allow the structures to be reused in other situations. Signed-off-by: Roy Hopkins <rhopkins@suse.de>
Force-pushed from 2c10720 to f441d84
I've just pushed V2 of this draft PR, which makes some significant changes to the previous version. In particular I wanted to address the use of a self-implemented linked list, as commented on above, and also to add the ability to support more advanced scheduling from the outset. So I've made the following big changes:
The memory management/borrowing/reference counting in this implementation was particularly tricky to get right. I'd appreciate it if reviewers could pay particular attention to this area of task ownership.
In general this looks good to me. The task switching code is a good start and the beginnings of per-task kernel memory are also there. As one of the next steps we probably need an address allocator to manage the kernel task dynamic memory and the user-space address layout as well. I will look into this going forward.
I haven't started thinking about the user-space address layout yet, but I have used the current bitmap allocator to implement a per-task kernel address allocator in my WIP PR for module loading in my own fork: roy-hopkins#1.
src/task/task.rs (Outdated)

        Ok(task)
    }

    pub fn set_current(&mut self, previous_task: Option<*mut Task>) {
As before, Option<*mut Task> could perhaps be switched to Option<NonNull<Task>>, since we use None to signify null.
Also, does it need to be *mut? Could it be *const?
Yes. In fact, I've just been working on this and have changed it to use a raw pointer instead, as the null pointer is handled in the assembly code in set_current().
src/task/schedule.rs (Outdated)

    // Restrict the scope of the mutable borrow below otherwise when the task context
    // is switched via set_current() the borrow remains in scope.
    let task_ptr = {
        let mut task = task_node.task_mut();
        if task.state != TaskState::RUNNING {
            panic!("Attempt to launch a non-running initial task");
        }
        this_cpu_mut().current_task = Some(task_node.clone());
        task.state = TaskState::SCHEDULED;
        task.as_mut() as *mut Task
    };
    unsafe { (*task_ptr).set_current(None) };
Not a huge fan of this unsafe, I wonder if we could remove it. If Task::set_current() took &self instead of &mut self we could avoid it, I think. There are two things that prevent this:

1. This line in Task::set_current():

       let new_task_addr = (self as *mut Task) as u64;

   But perhaps this could be *const?

2. The fact that TaskRuntime::schedule_in() takes &mut self (and therefore CountRuntime::schedule_in() does too). If we could make the trait signature take &self that would be great for this. For CountRuntime this is trivial; we simply need to use an AtomicU64 instead of a u64:

       pub struct CountRuntime {
           count: AtomicU64,
       }

       impl TaskRuntime for CountRuntime {
           fn schedule_in(&self) {
               self.count.fetch_add(1, Ordering::SeqCst);
           }
           ...
       }

I'm unsure if we could use a more relaxed ordering though. For TscRuntime we could do the same via AtomicU64::store().
This is a tricky one. Certainly 2. is easy; in fact I've just moved TaskRuntime::schedule_in() from this function into the scheduler itself, so that solves that. 1. is a bit harder, as set_current() does indeed mutate the task, in that the contents will certainly have changed by the time the function returns, even if that happens in the context of another task.
The bigger problem, though, is that the task structure itself uses runtime borrow checking. Therefore calling it without a pointer means holding an active borrow which remains in scope until the task context is resumed. This prevents the scheduler (and the gdbstub) from being able to access the task.
Signed-off-by: Roy Hopkins <rhopkins@suse.de>
The svsm_paging module was referenced only in svsm.rs. This module will need to be used in the task management code which is about to be introduced so the module reference has been added into lib.rs. Signed-off-by: Roy Hopkins <rhopkins@suse.de>
Adds a new function that can be called to read the 64-bit tsc value from the current CPU. Signed-off-by: Roy Hopkins <rhopkins@suse.de>
Force-pushed from f441d84 to ab0f6a6
This adds basic support for creating tasks which equate to processes or threads that can be scheduled on a vCPU. Each task is represented by a structure that contains the relevant context, which at the moment consists of a task stack, pagetable and stack register. Tasks are added to a task list in TASKS which is implemented as an RB tree which is keyed on the priority of the task: the lower the key, the more likely the task will be switched to. There are two methods for setting the priority; one based on CPU time using rdtsc and the other (the default) the amount of times a task has been scheduled. This patch adds all the infrastructure to support this but does not yet integrate it into the boot processor or secondary APs, so current functionality is unchanged. Signed-off-by: Roy Hopkins <rhopkins@suse.de>
In order to bootstrap task support on the boot CPU, the entry point is created as the initial task on the CPU and added to the task list with an affinity that ensures it only ever runs on the boot CPU. This patch enables future tasks to be created that can be cooperatively switched using a call to schedule() on the boot CPU. Signed-off-by: Roy Hopkins <rhopkins@suse.de>
This change uses the task list to create a new initial task for each AP, bootstrapping task support on vCPUs by launching the entry point rather than jumping directly into the request loop. The initial task has an affinity for the specific AP to prevent it from being scheduled on any other CPU. Signed-off-by: Roy Hopkins <rhopkins@suse.de>
This patch makes the gdbstub aware of the task list in SVSM allowing the developer to inspect the state and call stack of any stopped threads regardless of which vCPU the task is running on. Each task has its own pagetable so the gdbstub now uses its own stack and switches to the task pagetable whenever a task context is queried or modified. Signed-off-by: Roy Hopkins <rhopkins@suse.de>
Force-pushed from ab0f6a6 to 8698c17
Signed-off-by: Roy Hopkins <rhopkins@suse.de>
Thanks to the people that looked at and reviewed this. I'll close this draft now and raise new PRs based on this work.
Draft PR for review and comments.
Implements a mechanism to switch the current CPU between different task contexts that include per-task pagetables and will in the future support per-task virtual memory mapping.
The current version creates a global task list and launches an initial task on each virtual CPU. Additional tasks can be created and added to the task list which will then be activated by a subsequent call to schedule(). This has been tested locally but is not currently called within this PR.