The Oxide Omni infrastructure provider provisions Talos Linux instances on Oxide and connects them to Omni, where they are configured as nodes in a Kubernetes cluster.
This guide describes how to use the Oxide Omni infrastructure provider to deploy a Kubernetes cluster on Oxide using Omni.
Requirements
To follow this guide, you’ll need the following.
- Access to Omni using an account with the `Admin` role. We'll create an infrastructure provider, machine class, and Kubernetes cluster within Omni. Creating the infrastructure provider requires the `Admin` role.
- A container runtime. We'll run the infrastructure provider as a container.
- An Oxide project. The infrastructure provider will create Oxide instances running Talos Linux within this project using the project's `default` VPC.
- Oxide API credentials. Refer to Authentication for instructions on generating and using API credentials.
Infrastructure Provider Design
The infrastructure provider is a dynamic provider, meaning it provisions and deprovisions Oxide instances on demand as Omni users create and scale Kubernetes clusters.
The infrastructure provider is designed to manage resources within a single Oxide silo. To support multiple Oxide silos, run a separate instance of the infrastructure provider with a different provider ID for each silo. The infrastructure provider uses machine classes to specify the Oxide project that instances are provisioned within.
This guide will run a single infrastructure provider with provider ID `oxide` that's connected to a single Oxide silo specified by `OXIDE_HOST` and `OXIDE_TOKEN`.
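For example, two silos could be served by two provider instances registered under different IDs. The following is a rough sketch that follows the steps described later in this guide; the `oxide-secondary` ID is a placeholder, and passing `--provider-id` as a trailing container argument is an assumption based on the flag that appears in the provider's startup log.

```sh
# Sketch only. Create a second infrastructure provider (and service account) in Omni.
omnictl --omniconfig omniconfig.yaml infraprovider create oxide-secondary

# Run a second provider instance pointed at the second silo. OMNI_* and OXIDE_*
# here would hold the service account and API credentials for that silo, and
# ${TAG} is the provider release tag described in Run the Infrastructure Provider.
docker run --rm \
  --env OMNI_ENDPOINT \
  --env OMNI_SERVICE_ACCOUNT_KEY \
  --env OXIDE_HOST \
  --env OXIDE_TOKEN \
  ghcr.io/oxidecomputer/omni-infra-provider-oxide:${TAG} \
  --provider-id=oxide-secondary
```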
Install and configure omnictl
1. Log into Omni and navigate to the home page.
2. Click the Download omnictl button to download the `omnictl` binary for your operating system.
3. Place the `omnictl` binary on your `PATH`.
4. Ensure the `omnictl` binary is executable. (A shell sketch at the end of this section shows one way to do both.)
5. Click the Download omniconfig button to download `omniconfig.yaml`. This is used to authenticate `omnictl` with Omni.
6. Place the `omniconfig.yaml` file in your current working directory.

Run the following command to begin the authentication flow between `omnictl` and Omni.

```
omnictl --omniconfig omniconfig.yaml user list
```

`omnictl` will prompt to log in via web browser. Log in and click Grant Access to complete authentication. Once finished, the `omnictl` output will look similar to the following.

```
Could not authenticate: open /home/example/.talos/keys/default-oxide@example.com.pgp: no such file or directory
Attempting to open URL: https://example.na-west-1.omni.siderolabs.io/authenticate?flow=cli&public-key-id=46936543e7a4beab36b344093dd67976b2420014
Opening in existing browser session.
Public key 46936543e7a4beab36b344093dd67976b2420014 is now registered for user example@oxidecomputer.com
PGP key saved to /home/example/.talos/keys/default-example@oxidecomputer.com.pgp
ID                                     EMAIL                       ROLE    LABELS
4e1aa1fc-0219-4cfe-9836-82213495cde6   example@oxidecomputer.com   Admin
```
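As a concrete sketch of placing `omnictl` on your `PATH` and making it executable on Linux or macOS, assuming the binary was downloaded to `~/Downloads/omnictl-linux-amd64` (the filename and the `/usr/local/bin` install directory are assumptions; adjust for your system):

```sh
# Make the downloaded binary executable and move it onto PATH.
chmod +x ~/Downloads/omnictl-linux-amd64
sudo mv ~/Downloads/omnictl-linux-amd64 /usr/local/bin/omnictl

# Confirm the shell can find it.
omnictl --help
```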
Create the Infrastructure Provider
Run the following command to create the infrastructure provider.

```
omnictl --omniconfig omniconfig.yaml infraprovider create oxide
```

A service account for the infrastructure provider will be created and its credentials shown as the `OMNI_ENDPOINT` and `OMNI_SERVICE_ACCOUNT_KEY` environment variables.

```
Your infra provider "oxide" is ready to use
Created infra provider service account "infra-provider:oxide" with public key ID "74869a01552c3f81a7c436966f6d95241663de57"

Set the following environment variables to use the service account:
OMNI_ENDPOINT=https://oxide.na-west-1.omni.siderolabs.io:443
OMNI_SERVICE_ACCOUNT_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX==

Note: Store the service account key securely, it will not be displayed again

Please use the endpoint and the service account key to set up the infra provider
```

Take note of the `OMNI_ENDPOINT` and `OMNI_SERVICE_ACCOUNT_KEY` environment variables. They will be used in the Run the Infrastructure Provider section.
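The service account key is only displayed once, so record it before moving on. A minimal sketch, assuming you keep the credentials in a local file named `oxide-omni.env` (the filename is an arbitrary choice for this guide; any secret store you already use works just as well):

```sh
# Hypothetical credentials file; substitute the values printed by
# `omnictl infraprovider create`.
cat > oxide-omni.env <<'EOF'
OMNI_ENDPOINT=https://oxide.na-west-1.omni.siderolabs.io:443
OMNI_SERVICE_ACCOUNT_KEY=XXXX
EOF

# Restrict access, since the file contains a secret.
chmod 600 oxide-omni.env
```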
Note: The `OMNI_ENDPOINT` and `OMNI_SERVICE_ACCOUNT_KEY` environment variables will conflict with `--omniconfig` and prevent `omnictl` from working. Export them in a new shell session that will be used during Run the Infrastructure Provider.

Run the Infrastructure Provider
Create a new shell session separate from the shell session where you were running `omnictl`. This is needed for the following reasons.

- The infrastructure provider will run as a long-running service.
- The `OMNI_*` environment variables conflict with `--omniconfig`.

Export the `OMNI_ENDPOINT` and `OMNI_SERVICE_ACCOUNT_KEY` environment variables.

```
export OMNI_ENDPOINT=https://oxide.na-west-1.omni.siderolabs.io:443
export OMNI_SERVICE_ACCOUNT_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX==
```
Export the `OXIDE_HOST` and `OXIDE_TOKEN` environment variables.

```
export OXIDE_HOST='https://oxide.sys.example.com'
export OXIDE_TOKEN='oxide-token-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
```

Run the infrastructure provider. Update `${TAG}` to use the latest stable Oxide infrastructure provider, without the `v` prefix, which can be found on GitHub releases.

```
docker run --rm \
  --env OMNI_ENDPOINT \
  --env OMNI_SERVICE_ACCOUNT_KEY \
  --env OXIDE_HOST \
  --env OXIDE_TOKEN \
  ghcr.io/oxidecomputer/omni-infra-provider-oxide:${TAG}
```
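If you recorded the credentials in a file (as in the earlier sketch), `docker run` can also read them with `--env-file` instead of individual `--env` flags. A sketch, assuming `oxide-omni.env` additionally contains `OXIDE_HOST` and `OXIDE_TOKEN` as `KEY=value` lines:

```sh
# Sketch only: --env-file passes each KEY=value line in the file to the container.
docker run --rm \
  --env-file oxide-omni.env \
  ghcr.io/oxidecomputer/omni-infra-provider-oxide:${TAG}
```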
{"level":"info","ts":1766026094.796517,"caller":"src/main.go:60","msg":"successfully parsed configuration","configuration":" --build=v0.1.0\n --desc=Oxide Omni infrastructure provider.\n --omni-endpoint=https://example.na-west-1.omni.siderolabs.io\n --omni-service-account-key=xxxxxx\n --oxide-host=https://oxide.sys.example.com\n --oxide-token=xxxxxx\n --provider-description=Oxide Omni infrastructure provider.\n --provider-id=oxide\n --provider-name=Oxide\n --provisioner-concurrency=1"}
{"level":"info","ts":1766026094.7965825,"caller":"src/main.go:98","msg":"starting..."}In the previous shell session where you were running
omnictlcommands, verify theoxideinfrastructure provider is listed, connected, and has no errors.$ omnictl --omniconfig omniconfig.yaml infraprovider list
ID NAME DESCRIPTION CONNECTED ERROR
oxide Oxide Oxide Omni infrastructure provider. true
Create the Machine Class
The machine class specifies the configuration for the Talos Linux instances that the infrastructure provider provisions.
Create a `machine-class.yaml` file with the following content. Change `${OXIDE_PROJECT}` to use your Oxide project.

```yaml
---
metadata:
  namespace: default
  type: MachineClasses.omni.sidero.dev
  id: oxide
spec:
  autoprovision:
    providerid: oxide
    providerdata: |
      vcpus: 2
      project: ${OXIDE_PROJECT}
      memory: 8
      disk_size: 10
      vpc: default
      subnet: default
```

Apply the manifest to create the machine class.
```
omnictl --omniconfig omniconfig.yaml apply --file machine-class.yaml
```

Verify the machine class was successfully created.

```
$ omnictl --omniconfig omniconfig.yaml get machineclass
NAMESPACE   TYPE           ID      VERSION
default     MachineClass   oxide   1
```
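Each machine class carries its own `providerdata`, so different instance shapes are expressed as separate machine classes pointing at the same provider. As a hedged illustration only, a hypothetical larger class might look like the following; the `oxide-large` ID and resource values are made up for this guide and reuse the fields from the example above.

```yaml
# Hypothetical second machine class with a larger instance shape.
---
metadata:
  namespace: default
  type: MachineClasses.omni.sidero.dev
  id: oxide-large
spec:
  autoprovision:
    providerid: oxide
    providerdata: |
      vcpus: 8
      project: ${OXIDE_PROJECT}
      memory: 32
      disk_size: 100
      vpc: default
      subnet: default
```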
Create the Kubernetes Cluster
Create a `kubernetes-cluster.yaml` file with the following content. This will create a single-node Kubernetes cluster named `example` using the `oxide` infrastructure provider.

```yaml
# Kubernetes cluster object.
---
metadata:
  namespace: default
  type: Clusters.omni.sidero.dev
  id: example
spec:
  talosversion: 1.11.6
  kubernetesversion: 1.34.3
# Untaint control plane nodes.
---
metadata:
  namespace: default
  type: ConfigPatches.omni.sidero.dev
  id: example-control-planes-untaint
  labels:
    omni.sidero.dev/system-patch:
    omni.sidero.dev/cluster: example
    omni.sidero.dev/machine-set: example-control-planes
spec:
  data: |
    cluster:
      allowSchedulingOnControlPlanes: true
# 0 worker nodes.
---
metadata:
  namespace: default
  type: MachineSets.omni.sidero.dev
  id: example-workers
  labels:
    omni.sidero.dev/cluster: example
    omni.sidero.dev/role-worker:
spec:
  updatestrategy: 1
# 1 control plane node using the `oxide` infrastructure provider.
---
metadata:
  namespace: default
  type: MachineSets.omni.sidero.dev
  id: example-control-planes
  labels:
    omni.sidero.dev/cluster: example
    omni.sidero.dev/role-controlplane:
spec:
  updatestrategy: 1
  machineallocation:
    name: oxide
    machinecount: 1
```

Apply the manifest to create the Kubernetes cluster.
```
omnictl --omniconfig omniconfig.yaml apply --file kubernetes-cluster.yaml
```

Verify the Kubernetes cluster was successfully created.

```
$ omnictl --omniconfig omniconfig.yaml cluster status example

Cluster "example" RUNNING Ready (1/1) (healthy/total)
├── Kubernetes Upgrade   Done
├── Talos Upgrade        Done
├── Control Plane "example-control-planes" Running Ready (1/1)
│   ├── Load Balancer   Ready
│   ├── Status Checks   OK
│   └── Machine "41a5f638-f96e-4647-851c-2ded00660d02" RUNNING Ready
└── Workers "example-workers" Running Ready (0/0)
```
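To add worker nodes later, the same allocation mechanism applies. Below is a hedged sketch (not part of the original manifest) of what updating the `example-workers` machine set could look like, reusing the fields shown above; the worker count of 2 is arbitrary.

```yaml
# Illustrative update to the example-workers machine set: allocate two worker
# machines from the `oxide` machine class.
---
metadata:
  namespace: default
  type: MachineSets.omni.sidero.dev
  id: example-workers
  labels:
    omni.sidero.dev/cluster: example
    omni.sidero.dev/role-worker:
spec:
  updatestrategy: 1
  machineallocation:
    name: oxide
    machinecount: 2
```

Applying this with `omnictl apply` updates the existing machine set, and the dynamic provider provisions the additional Oxide instances on demand, as described in Infrastructure Provider Design.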
Run a Workload on the Oxide Kubernetes Cluster
Follow Use `kubectl` with Omni to connect to the Kubernetes cluster created in Create the Kubernetes Cluster.

Run an example workload on the Kubernetes cluster.

```
$ kubectl apply -f https://k8s.io/examples/application/deployment.yaml
deployment.apps/nginx-deployment created
```

Verify the example workload returns ready.

```
$ kubectl get deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   2/2     2            2           85s
```
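Because this is a single-node cluster with scheduling allowed on control planes (the `example-control-planes-untaint` patch above), the workload pods run on the control-plane machine. One way to confirm where they were scheduled:

```sh
# The NODE column shows which machine each pod landed on.
kubectl get pods -o wide
```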
Delete the Kubernetes Cluster
Use `omnictl cluster delete` to delete the Kubernetes cluster.

```
$ omnictl --omniconfig omniconfig.yaml cluster delete example
* tearing down Clusters.omni.sidero.dev(example)
* destroyed Clusters.omni.sidero.dev(example)
```
Stop the Infrastructure Provider
Navigate to the shell session that’s running the infrastructure provider.
Stop the infrastructure provider using Ctrl+C.