Hypervisor and Storage

Hypervisor

Oxide’s hardware virtual machine solution is built on bhyve, an open source Virtual Machine Monitor (VMM) on illumos. The underlying technologies for the software stack also include:

  • Helios: Oxide’s illumos distribution, as the operating system for the host CPU in server sleds

  • Propolis: Oxide’s homegrown Rust-based userspace for bhyve

Guest Workload Support

Oxide supports guest images that meet the following criteria:

  • Guest OS: major Linux distros, Windows

  • Boot mode: OS images enabled for UEFI booting

  • Device emulation: x86 images with VirtIO driver support

Network booting via the PXE protocol is also supported.

Note
Nested virtualization (i.e., VM within a VM) is unsupported. Windows versions that require the Trusted Platform Module (TPM), e.g., Windows 11, are also not supported.

Guest Facilities

Standard remote-access facilities are available to guests: SSH for Linux and Remote Desktop Protocol (RDP) for Windows. Serial console access is also available, allowing direct interaction with instances.

Storage

Physical Layer

Each server sled in the Oxide rack includes SSDs of different form factors:

  • Front-facing SSDs (10x) that store user data and system data such as software images, control plane metadata, and analytics. These form a shared storage pool across all sleds.

  • Internal SSDs (2x) that store a limited amount of system data, particularly system boot images.

Service Layer

Users can provision virtual disks that may be attached to instances. The operating system within an instance sees each disk as an NVMe device, accessible via its built-in driver.
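Inside a Linux guest, an attached virtual disk therefore appears under the kernel’s stock NVMe driver, just like a physical drive. The sketch below enumerates NVMe controllers via sysfs; the `/sys/class/nvme` layout is a standard Linux kernel interface, not anything Oxide-specific, and the function name is illustrative only.

```python
from pathlib import Path


def list_nvme_controllers(sysfs_root: str = "/sys/class/nvme") -> list[dict]:
    """Enumerate NVMe controllers as the guest kernel presents them.

    Returns an empty list on systems without NVMe devices or without
    sysfs, so it degrades gracefully outside a Linux guest.
    """
    root = Path(sysfs_root)
    controllers = []
    if not root.is_dir():
        return controllers
    for ctrl in sorted(root.iterdir()):
        # Each controller directory exposes a "model" attribute file.
        model_file = ctrl / "model"
        model = model_file.read_text().strip() if model_file.is_file() else "unknown"
        controllers.append({"name": ctrl.name, "model": model})
    return controllers


if __name__ == "__main__":
    for c in list_nvme_controllers():
        print(f"/dev/{c['name']}: {c['model']}")
```

On a guest with one attached disk this would typically report a single controller such as `nvme0`.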

Distributed disks store data redundantly on three different physical disks across three different server sleds, maintaining availability and integrity in the face of failures. Data is encrypted as it transits the Oxide rack’s internal network between the server hosting an instance and the servers hosting the three data copies. This type of disk is the best choice for most uses.

Local disks store data non-redundantly on the same server sled where the instance runs. There is only a single copy of the data; failure of the disk that stores it results in data loss. Local disks are optimized exclusively for performance, and are suitable only for use cases where data loss is acceptable and high performance is critical (e.g., caching), or for applications that provide their own data redundancy mechanisms (e.g., distributed data stores).

Comparison of the two disk types:

  • Workload suitability: distributed disks suit persistent data requiring a high reliability guarantee; local disks suit use cases where data loss is acceptable and performance is critical (e.g., cached data), or applications that provide their own data redundancy (e.g., distributed data stores).

  • Data redundancy: distributed disks keep three copies on three physical disks across different sleds; local disks keep one copy on a single sled (no redundancy), allocated on the same sled as the instance.

  • Durability: distributed disk data is lost only if all three backends fail or are corrupted; local disk data is lost if the only backend fails or is corrupted.

  • Availability: with a distributed disk, guest reads and writes continue uninterrupted when one backend is offline, and become read-only when two backends are offline; with a local disk, guest reads and writes are prohibited when the only backend is offline.

  • Snapshots: supported for distributed disks; not supported for local disks.

  • Encryption: supported for distributed disks; not supported for local disks.
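The availability behavior described above can be summarized as a small state model. This is an illustrative sketch of the documented semantics, not Oxide’s implementation; the function name is hypothetical.

```python
def disk_access_mode(disk_type: str, offline_backends: int) -> str:
    """Access mode a guest sees for a virtual disk, given how many
    of its storage backends are offline.

    Distributed disks have exactly three backends; local disks have one.
    """
    if disk_type == "distributed":
        if not 0 <= offline_backends <= 3:
            raise ValueError("distributed disks have exactly 3 backends")
        if offline_backends <= 1:
            # Reads and writes continue uninterrupted with one backend offline.
            return "read-write"
        if offline_backends == 2:
            return "read-only"
        # Data is lost only if all three backends fail or are corrupted.
        return "unavailable"
    if disk_type == "local":
        if not 0 <= offline_backends <= 1:
            raise ValueError("local disks have exactly 1 backend")
        return "read-write" if offline_backends == 0 else "unavailable"
    raise ValueError(f"unknown disk type: {disk_type}")
```

For example, a distributed disk with two backends offline still serves reads, while a local disk whose single backend is offline serves nothing.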