# VM Pools

Pre-warm a pool of identical VMs for instant access when you need them.
## Why VM Pools?

Booting a virtual machine takes seconds. For interactive use cases, or for test suites that spin up many VMs, this latency adds up quickly. VM pools solve this by pre-creating VMs that are already booted and waiting.
Common use cases:

- **Test suites**: Run hundreds of tests without waiting for boot each time
- **Batch processing**: Process jobs with minimal latency between items
- **Interactive tools**: Provide instant VM access to users
- **CI/CD pipelines**: Reduce overall pipeline duration
## Creating a Pool

Use `Capsa::pool()` instead of `Capsa::vm()`, then call `.build(size)` with the number of VMs to pre-warm:

```rust
use capsa::{Capsa, LinuxVmConfig};

let pool = Capsa::pool(LinuxVmConfig::new(kernel, disk))
    .cpus(2)
    .memory_mb(512)
    .console_enabled()
    .build(4) // Pre-create 4 identical VMs
    .await?;
```

All VMs in the pool share the same configuration. The pool begins creating VMs immediately and returns once all are ready.
## Reserving a VM

There are two methods for acquiring a VM from the pool:

### Blocking Reserve

`reserve()` waits until a VM becomes available:

```rust
let vm = pool.reserve().await?;
```

### Non-blocking Reserve

`try_reserve()` returns immediately and errors if no VMs are available:
```rust
match pool.try_reserve() {
    Ok(vm) => {
        // Got a VM
    }
    Err(_) => {
        // Pool is empty; handle accordingly
    }
}
```

Both methods return a `PooledVm` that dereferences to `VmHandle`.
## Using a Pooled VM

A `PooledVm` provides the same API as a regular `VmHandle`. The VM is already booted and ready to use:

```rust
let vm = pool.reserve().await?;

// Same API as VmHandle
let console = vm.console().await?;
console.send("echo hello\n").await?;
let status = vm.status().await?;
```

There is no need to wait for boot or check readiness; the VM is fully operational when you receive it.
## Releasing a VM

Simply drop the `PooledVm` when finished:

```rust
{
    let vm = pool.reserve().await?;
    // Use the VM...
} // VM is released here

// Or release explicitly:
let vm = pool.reserve().await?;
// ...
drop(vm);
```

When a VM is released, it is killed and the pool automatically starts a replacement. This maintains the pool size over time without manual intervention.
> **Tip:** Release VMs as soon as you are done with them. Holding onto reserved VMs blocks other tasks waiting on `reserve()`.
## Checking Pool Status

Query how many VMs are currently available:

```rust
let count = pool.available_count().await;
println!("{} VMs ready", count);
```

Note that this count may change between checking and reserving: another task might reserve a VM in the meantime. Use `try_reserve()` if you need to handle empty pools gracefully.
## Thread Safety

`VmPool` is `Send + Sync`, making it safe to share across threads and tasks. Wrap it in an `Arc` for multi-task usage:

```rust
use std::sync::Arc;

let pool = Arc::new(
    Capsa::pool(config)
        .cpus(2)
        .memory_mb(512)
        .build(4)
        .await?,
);

// Share across multiple tasks
for _ in 0..10 {
    let pool = pool.clone();
    tokio::spawn(async move {
        let _vm = pool.reserve().await?;
        // Each task gets its own VM
        Ok::<_, capsa::Error>(())
    });
}
```

Multiple tasks can call `reserve()` concurrently; the pool handles synchronization internally.
## Complete Example

A test runner that executes tests in parallel using a VM pool:

```rust
use capsa::{Capsa, LinuxVmConfig};
use std::sync::Arc;

async fn run_tests(tests: Vec<String>) -> Result<(), capsa::Error> {
    let config = LinuxVmConfig::new("./kernel", "./disk.img");

    // Create a pool sized for the expected parallelism
    let pool = Arc::new(
        Capsa::pool(config)
            .cpus(2)
            .memory_mb(512)
            .console_enabled()
            .build(4)
            .await?,
    );

    let mut handles = vec![];
    for test in tests {
        let pool = pool.clone();
        let handle = tokio::spawn(async move {
            // Wait for an available VM
            let vm = pool.reserve().await?;
            let console = vm.console().await?;

            // Run the test
            console.send(&format!("./run-test {}\n", test)).await?;
            let output = console.read_until("TEST_DONE").await?;

            // VM released automatically when dropped
            Ok::<_, capsa::Error>(output)
        });
        handles.push(handle);
    }

    // Collect results
    for handle in handles {
        let result = handle.await??;
        println!("{}", result);
    }

    Ok(())
}
```

## Best Practices
**Size pools based on expected concurrency.** If you run 8 parallel test workers, a pool of 8 VMs ensures no waiting. More VMs than workers wastes memory; fewer causes queuing.
**Account for memory usage.** Each VM in the pool consumes its configured memory. A pool of 8 VMs with 1 GB each uses 8 GB of host RAM. Size accordingly.
**Release promptly.** The longer you hold a `PooledVm`, the longer other tasks wait. Acquire late, release early.
**Handle pool exhaustion.** For latency-sensitive code, use `try_reserve()` and fall back gracefully rather than blocking indefinitely.
## Pool Sizing

A good starting point is to set the pool size equal to your task parallelism. Monitor `available_count()` during operation: if it stays at zero, increase the pool size; if it stays high, reduce it to save memory.