Wasmtime: refactor the pooling allocator for components
We used to have one index allocator, one index per instance, and we gave out N
tables and M memories to every instance regardless of how many tables and
memories it actually needed.

Now we have an index allocator for memories and another for tables. An instance
is no longer associated with a single index; instead, each of its memories and
tables has its own index. We allocate exactly as many tables and memories as the
instance actually needs.

Ultimately, this gives us better component support, where a component instance
might have varying numbers of internal tables and memories.

Additionally, you can now limit the number of tables, memories, and core
instances a single component instance can allocate from the pooling allocator,
even if the pool has capacity for more. This gives embedders tools to limit
individual component instances and prevent them from hogging too much of the
pooling allocator's resources.
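
To illustrate the shape of the change, here is a minimal sketch using
simplified, hypothetical types (not Wasmtime's actual internals): one
free-list index allocator per resource pool, with each instance claiming
exactly the indices it needs.

```rust
/// A minimal free-list index allocator; the pooling allocator keeps one
/// of these per resource pool (memories, tables) instead of a single
/// allocator handing out one slot per instance.
struct IndexAllocator {
    free: Vec<u32>,
}

impl IndexAllocator {
    fn new(capacity: u32) -> Self {
        // Reversed so that indices are handed out in ascending order.
        Self { free: (0..capacity).rev().collect() }
    }

    fn alloc(&mut self) -> Option<u32> {
        self.free.pop()
    }

    fn dealloc(&mut self, index: u32) {
        self.free.push(index);
    }
}

fn main() {
    let mut memories = IndexAllocator::new(1_000);
    let tables = IndexAllocator::new(1_000);

    // An instance with two memories and zero tables consumes exactly two
    // memory indices and no table indices -- no fixed N + M reservation.
    let m0 = memories.alloc().unwrap();
    let m1 = memories.alloc().unwrap();
    assert_eq!(tables.free.len(), 1_000);

    // On teardown the indices return to their pool for reuse.
    memories.dealloc(m0);
    memories.dealloc(m1);
    assert_eq!(memories.free.len(), 1_000);
}
```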
fitzgen committed Aug 11, 2023
1 parent b0e31f5 commit 7a3070e
Showing 31 changed files with 2,530 additions and 1,442 deletions.
45 changes: 45 additions & 0 deletions RELEASES.md
@@ -68,6 +68,51 @@ Unreleased.

### Changed

* The pooling allocator was significantly refactored and the
`PoolingAllocationConfig` has some minor breaking API changes that reflect
the new design.

Previously, the pooling allocator had `count` slots, and each slot had `N`
memories and `M` tables. Every allocated instance would reserve those `N`
memories and `M` tables regardless of whether it actually needed them all or
not. This could lead to waste and over-allocation when a module used fewer
memories and tables than the pooling allocator's configured maximums.

After the refactors in this release, the pooling allocator doesn't have
one-size-fits-all slots anymore. Instead, memories and tables are in separate
pools that can be allocated from independently, and we allocate exactly as
many memories and tables as are necessary for the instance being allocated.

To preserve your old configuration with the new methods you can do the following:

```rust
let mut config = PoolingAllocationConfig::default();

// If you used to have this old, no-longer-compiling configuration:
config.count(count);
config.instance_memories(n);
config.instance_tables(m);

// You can use these equivalent settings for the new config methods:
config.total_core_instances(count);
config.total_stacks(count); // If using the `async` feature.
config.total_memories(count * n);
config.max_memories_per_module(n);
config.total_tables(count * m);
config.max_tables_per_module(m);
```

There are additionally a variety of methods to limit the maximum amount of
resources a single core Wasm or component instance can take from the pool:

* `PoolingAllocationConfig::max_memories_per_module`
* `PoolingAllocationConfig::max_tables_per_module`
* `PoolingAllocationConfig::max_memories_per_component`
* `PoolingAllocationConfig::max_tables_per_component`
* `PoolingAllocationConfig::max_core_instances_per_component`

These methods do not affect the size of the pre-allocated pool.
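
For example, to cap what any single instance may draw from the pool (a
sketch using the methods listed above; the limit values are illustrative):

```rust
let mut config = PoolingAllocationConfig::default();

// Cap what a single core module instance may consume.
config.max_memories_per_module(1);
config.max_tables_per_module(1);

// Cap what a single component instance may consume, even when the pool
// itself still has spare capacity.
config.max_memories_per_component(4);
config.max_tables_per_component(4);
config.max_core_instances_per_component(8);
```

These per-instance caps only bound what one instantiation may claim; how
much the pool pre-allocates is determined by the `total_*` methods.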

* Options to the `wasmtime` CLI that configure Wasmtime itself must now come
before the WebAssembly module. For example `wasmtime run foo.wasm
--disable-cache` must now be specified as `wasmtime run --disable-cache
foo.wasm`. Any
2 changes: 1 addition & 1 deletion benches/instantiation.rs
@@ -219,7 +219,7 @@ fn strategies() -> impl Iterator<Item = InstanceAllocationStrategy> {
InstanceAllocationStrategy::OnDemand,
InstanceAllocationStrategy::Pooling({
let mut config = PoolingAllocationConfig::default();
- config.instance_memory_pages(10_000);
+ config.memory_pages(10_000);
config
}),
]
40 changes: 20 additions & 20 deletions crates/fuzzing/src/generators/config.rs
@@ -73,13 +73,13 @@ impl Config {
// If using the pooling allocator, update the instance limits too
if let InstanceAllocationStrategy::Pooling(pooling) = &mut self.wasmtime.strategy {
// One single-page memory
- pooling.instance_memories = config.max_memories as u32;
- pooling.instance_memory_pages = 10;
+ pooling.total_memories = config.max_memories as u32;
+ pooling.memory_pages = 10;

- pooling.instance_tables = config.max_tables as u32;
- pooling.instance_table_elements = 1_000;
+ pooling.total_tables = config.max_tables as u32;
+ pooling.table_elements = 1_000;

- pooling.instance_size = 1_000_000;
+ pooling.core_instance_size = 1_000_000;
}
}

@@ -126,12 +126,12 @@ impl Config {
if let InstanceAllocationStrategy::Pooling(pooling) = &self.wasmtime.strategy {
// Check to see if any item limit is less than the required
// threshold to execute the spec tests.
- if pooling.instance_memories < 1
-     || pooling.instance_tables < 5
-     || pooling.instance_table_elements < 1_000
-     || pooling.instance_memory_pages < 900
-     || pooling.instance_count < 500
-     || pooling.instance_size < 64 * 1024
+ if pooling.total_memories < 1
+     || pooling.total_tables < 5
+     || pooling.table_elements < 1_000
+     || pooling.memory_pages < 900
+     || pooling.total_core_instances < 500
+     || pooling.core_instance_size < 64 * 1024
{
return false;
}
@@ -333,23 +333,23 @@ impl<'a> Arbitrary<'a> for Config {

// Ensure the pooling allocator can support the maximal size of
// memory, picking the smaller of the two to win.
- if cfg.max_memory_pages < pooling.instance_memory_pages {
-     pooling.instance_memory_pages = cfg.max_memory_pages;
+ if cfg.max_memory_pages < pooling.memory_pages {
+     pooling.memory_pages = cfg.max_memory_pages;
} else {
-     cfg.max_memory_pages = pooling.instance_memory_pages;
+     cfg.max_memory_pages = pooling.memory_pages;
}

// If traps are disallowed then memories must have at least one page
// of memory so if we still are only allowing 0 pages of memory then
// increase that to one here.
if cfg.disallow_traps {
- if pooling.instance_memory_pages == 0 {
-     pooling.instance_memory_pages = 1;
+ if pooling.memory_pages == 0 {
+     pooling.memory_pages = 1;
cfg.max_memory_pages = 1;
}
// .. additionally update tables
- if pooling.instance_table_elements == 0 {
-     pooling.instance_table_elements = 1;
+ if pooling.table_elements == 0 {
+     pooling.table_elements = 1;
}
}

@@ -366,8 +366,8 @@

// Force this pooling allocator to always be able to accommodate the
// module that may be generated.
- pooling.instance_memories = cfg.max_memories as u32;
- pooling.instance_tables = cfg.max_tables as u32;
+ pooling.total_memories = cfg.max_memories as u32;
+ pooling.total_tables = cfg.max_tables as u32;
}

Ok(config)
104 changes: 73 additions & 31 deletions crates/fuzzing/src/generators/pooling_config.rs
@@ -6,62 +6,104 @@ use arbitrary::{Arbitrary, Unstructured};
#[derive(Debug, Clone, Eq, PartialEq, Hash)]
#[allow(missing_docs)]
pub struct PoolingAllocationConfig {
+ pub total_component_instances: u32,
+ pub total_core_instances: u32,
+ pub total_memories: u32,
+ pub total_tables: u32,
+ pub total_stacks: u32,
+
+ pub memory_pages: u64,
+ pub table_elements: u32,
+
+ pub component_instance_size: usize,
+ pub max_memories_per_component: u32,
+ pub max_tables_per_component: u32,
+
+ pub core_instance_size: usize,
+ pub max_memories_per_module: u32,
+ pub max_tables_per_module: u32,
+
+ pub table_keep_resident: usize,
+ pub linear_memory_keep_resident: usize,
+
pub max_unused_warm_slots: u32,
- pub instance_count: u32,
- pub instance_memories: u32,
- pub instance_tables: u32,
- pub instance_memory_pages: u64,
- pub instance_table_elements: u32,
- pub instance_size: usize,

pub async_stack_zeroing: bool,
pub async_stack_keep_resident: usize,
- pub linear_memory_keep_resident: usize,
- pub table_keep_resident: usize,
}

impl PoolingAllocationConfig {
/// Convert the generated limits to Wasmtime limits.
pub fn to_wasmtime(&self) -> wasmtime::PoolingAllocationConfig {
let mut cfg = wasmtime::PoolingAllocationConfig::default();

- cfg.max_unused_warm_slots(self.max_unused_warm_slots)
-     .instance_count(self.instance_count)
-     .instance_memories(self.instance_memories)
-     .instance_tables(self.instance_tables)
-     .instance_memory_pages(self.instance_memory_pages)
-     .instance_table_elements(self.instance_table_elements)
-     .instance_size(self.instance_size)
-     .async_stack_zeroing(self.async_stack_zeroing)
-     .async_stack_keep_resident(self.async_stack_keep_resident)
-     .linear_memory_keep_resident(self.linear_memory_keep_resident)
-     .table_keep_resident(self.table_keep_resident);
+ cfg.total_component_instances(self.total_component_instances);
+ cfg.total_core_instances(self.total_core_instances);
+ cfg.total_memories(self.total_memories);
+ cfg.total_tables(self.total_tables);
+ cfg.total_stacks(self.total_stacks);
+
+ cfg.memory_pages(self.memory_pages);
+ cfg.table_elements(self.table_elements);
+
+ cfg.component_instance_size(self.component_instance_size);
+ cfg.max_memories_per_component(self.max_memories_per_component);
+ cfg.max_tables_per_component(self.max_tables_per_component);
+
+ cfg.core_instance_size(self.core_instance_size);
+ cfg.max_memories_per_module(self.max_memories_per_module);
+ cfg.max_tables_per_module(self.max_tables_per_module);
+
+ cfg.table_keep_resident(self.table_keep_resident);
+ cfg.linear_memory_keep_resident(self.linear_memory_keep_resident);
+
+ cfg.max_unused_warm_slots(self.max_unused_warm_slots);
+
+ cfg.async_stack_zeroing(self.async_stack_zeroing);
+ cfg.async_stack_keep_resident(self.async_stack_keep_resident);

cfg
}
}

impl<'a> Arbitrary<'a> for PoolingAllocationConfig {
fn arbitrary(u: &mut Unstructured<'a>) -> arbitrary::Result<Self> {
const MAX_COUNT: u32 = 100;
- const MAX_TABLES: u32 = 10;
- const MAX_MEMORIES: u32 = 10;
+ const MAX_TABLES: u32 = 100;
+ const MAX_MEMORIES: u32 = 100;
const MAX_ELEMENTS: u32 = 1000;
const MAX_MEMORY_PAGES: u64 = 160; // 10 MiB
const MAX_SIZE: usize = 1 << 20; // 1 MiB
+ const MAX_INSTANCE_MEMORIES: u32 = 10;
+ const MAX_INSTANCE_TABLES: u32 = 10;

- let instance_count = u.int_in_range(1..=MAX_COUNT)?;
+ let total_memories = u.int_in_range(0..=MAX_MEMORIES)?;

Ok(Self {
- max_unused_warm_slots: u.int_in_range(0..=instance_count + 10)?,
- instance_tables: u.int_in_range(0..=MAX_TABLES)?,
- instance_memories: u.int_in_range(0..=MAX_MEMORIES)?,
- instance_table_elements: u.int_in_range(0..=MAX_ELEMENTS)?,
- instance_memory_pages: u.int_in_range(0..=MAX_MEMORY_PAGES)?,
- instance_count,
- instance_size: u.int_in_range(0..=MAX_SIZE)?,
+ total_component_instances: u.int_in_range(1..=MAX_COUNT)?,
+ total_core_instances: u.int_in_range(1..=MAX_COUNT)?,
+ total_memories,
+ total_tables: u.int_in_range(0..=MAX_TABLES)?,
+ total_stacks: u.int_in_range(0..=MAX_COUNT)?,
+
+ memory_pages: u.int_in_range(0..=MAX_MEMORY_PAGES)?,
+ table_elements: u.int_in_range(0..=MAX_ELEMENTS)?,
+
+ component_instance_size: u.int_in_range(0..=MAX_SIZE)?,
+ max_memories_per_component: u.int_in_range(0..=MAX_INSTANCE_MEMORIES)?,
+ max_tables_per_component: u.int_in_range(0..=MAX_INSTANCE_TABLES)?,
+
+ core_instance_size: u.int_in_range(0..=MAX_SIZE)?,
+ max_memories_per_module: u.int_in_range(0..=MAX_INSTANCE_MEMORIES)?,
+ max_tables_per_module: u.int_in_range(0..=MAX_INSTANCE_TABLES)?,
+
+ table_keep_resident: u.int_in_range(0..=1 << 20)?,
+ linear_memory_keep_resident: u.int_in_range(0..=1 << 20)?,
+
+ max_unused_warm_slots: u.int_in_range(0..=total_memories + 10)?,
+
async_stack_zeroing: u.arbitrary()?,
async_stack_keep_resident: u.int_in_range(0..=1 << 20)?,
- linear_memory_keep_resident: u.int_in_range(0..=1 << 20)?,
- table_keep_resident: u.int_in_range(0..=1 << 20)?,
})
}
}
2 changes: 1 addition & 1 deletion crates/jit/src/instantiate.rs
@@ -56,7 +56,7 @@ impl CompiledFunctionInfo {
#[derive(Serialize, Deserialize)]
pub struct CompiledModuleInfo {
/// Type information about the compiled WebAssembly module.
- module: Module,
+ pub module: Module,

/// Metadata about each compiled function.
funcs: PrimaryMap<DefinedFuncIndex, CompiledFunctionInfo>,