# VM Pools
Pre-warm a pool of identical VMs for instant access when you need them.
## Why VM Pools?
Booting a virtual machine takes seconds. For test suites or batch processing that spin up many VMs, that latency adds up. VM pools solve this by pre-creating booted VMs that are ready to use immediately.
Common use cases:
- Test suites - Run hundreds of tests without waiting for boot
- Batch processing - Process jobs with minimal latency
- Interactive tools - Provide instant VM access
- CI/CD pipelines - Reduce overall pipeline duration
## Creating a Pool
Use `capsa::vm_pool()` with the number of VMs to pre-warm:
```rust
let pool = capsa::vm_pool(LinuxDirectBoot::new("./kernel", "./initrd"))
    .cpus(2)
    .memory_mb(512)
    .console_enabled()
    .build(4) // Pre-create 4 identical VMs
    .await?;
```

All VMs in the pool share the same configuration. `build()` returns once all VMs are booted and ready.
## Reserving a VM

### Blocking Reserve
Waits until a VM becomes available:
```rust
let vm = pool.reserve().await?;
// VM is ready to use
```

### Non-blocking Reserve
Returns immediately, with an error if no VM is available:
```rust
match pool.try_reserve() {
    Ok(vm) => {
        // Got a VM
    }
    Err(_) => {
        // Pool is empty
    }
}
```

## Using a Pooled VM
A `PooledVm` provides the same API as `VmHandle`. The VM is already booted:
```rust
let vm = pool.reserve().await?;
// VM is ready - no boot wait needed
let console = vm.console().await?;
console.write_line("echo hello").await?;
```

## Releasing a VM
Drop the `PooledVm` when finished:
```rust
{
    let vm = pool.reserve().await?;
    // Use the VM...
} // VM is released here

// Or release explicitly:
let vm = pool.reserve().await?;
drop(vm);
```

When a VM is released, it is killed and the pool automatically starts a replacement.
## Checking Pool Status

Query the number of available VMs:
```rust
let count = pool.available_count().await;
println!("{} VMs ready", count);
```

## Thread Safety
`VmPool` is `Send + Sync`. Wrap it in an `Arc` for multi-task usage:
```rust
use std::sync::Arc;

let pool = Arc::new(
    capsa::vm_pool(LinuxDirectBoot::new("./kernel", "./initrd"))
        .cpus(2)
        .memory_mb(512)
        .build(4)
        .await?,
);

// Share across tasks
for _ in 0..10 {
    let pool = pool.clone();
    tokio::spawn(async move {
        let vm = pool.reserve().await?;
        // Each task gets its own VM
        Ok::<_, capsa::Error>(())
    });
}
```

## Complete Example
A test runner using a VM pool:
```rust
use capsa::boot::LinuxDirectBoot;
use std::sync::Arc;
use std::time::Duration;

async fn run_tests(tests: Vec<String>) -> Result<(), capsa::Error> {
    let pool = Arc::new(
        capsa::vm_pool(LinuxDirectBoot::new("./kernel", "./initrd"))
            .cpus(2)
            .memory_mb(512)
            .console_enabled()
            .build(4)
            .await?,
    );

    let mut handles = vec![];
    for test in tests {
        let pool = pool.clone();
        let handle = tokio::spawn(async move {
            let vm = pool.reserve().await?;
            let console = vm.console().await?;

            // Wait for shell
            console.wait_for("# ", Duration::from_secs(5)).await?;

            // Run test
            console.write_line(&format!("./run-test {}", test)).await?;

            // VM released on drop
            Ok::<_, capsa::Error>(())
        });
        handles.push(handle);
    }

    for handle in handles {
        handle.await.expect("task panicked")?;
    }
    Ok(())
}
```

## Sandbox Pools
For sandboxes, use `capsa::sandbox_pool()` instead of `capsa::vm_pool()`. Sandbox pools provide the same pooling behavior, but with the sandbox's guaranteed features (agent, auto-mounting, shared directories):
```rust
use std::sync::Arc;

let pool = Arc::new(
    capsa::sandbox_pool()
        .cpus(2)
        .memory_mb(512)
        .build(4)
        .await?,
);

// Run commands in parallel using the agent
let mut handles = vec![];
for arg in ["test1", "test2", "test3", "test4"] {
    let pool = pool.clone();
    let arg = arg.to_string();
    handles.push(tokio::spawn(async move {
        let sandbox = pool.reserve().await?;
        let agent = sandbox.agent().await?;
        let result = agent.exec("echo").arg(&arg).run().await?;
        Ok::<_, capsa::Error>(result.stdout)
    }));
}

for handle in handles {
    let output = handle.await.expect("task panicked")?;
    println!("{}", output);
}
```

`PooledSandbox` provides:

- All `VmHandle` methods via `Deref`
- `.agent()` to get a connected `AgentClient` for structured command execution
## Best Practices
- Size pools based on concurrency - Match pool size to the number of parallel workers
- Account for memory - Each VM uses its configured memory (for example, four 512 MB VMs reserve roughly 2 GB)
- Release promptly - Don't hold VMs longer than needed
- Handle exhaustion - Use `try_reserve()` for latency-sensitive code
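The exhaustion-handling advice above can be sketched as follows. This is a non-authoritative fragment assuming a `pool` built as in the earlier examples and a hypothetical `run_job(vm)` helper standing in for your actual work:

```rust
// Sketch only: `run_job` is a hypothetical helper, not part of the capsa API.
match pool.try_reserve() {
    Ok(vm) => {
        // Fast path: a pre-warmed VM was immediately available.
        run_job(vm).await?;
    }
    Err(_) => {
        // Pool exhausted: shed load here, or fall back to a
        // blocking reserve if the latency budget allows it.
        let vm = pool.reserve().await?;
        run_job(vm).await?;
    }
}
```

Whether to fall back or fail fast depends on the caller: interactive tools usually prefer an immediate error, while batch workers can afford to wait.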
## Next Steps
- Custom Kernels - Configure VMs for pools
- Console - Interact with pooled VMs