Omni

The Oxide Omni infrastructure provider provisions Talos Linux instances on Oxide and connects those instances to Omni, where they are configured as nodes in a Kubernetes cluster.

This guide describes how to use the Oxide Omni infrastructure provider to deploy a Kubernetes cluster on Oxide using Omni.

Requirements

To follow this guide, you’ll need the following.

  • Access to Omni using an account with the Admin role. We’ll create an infrastructure provider, machine class, and Kubernetes cluster within Omni. Creating the infrastructure provider requires the Admin role.

  • A container runtime. We’ll run the infrastructure provider as a container.

  • An Oxide project. The infrastructure provider will create Oxide instances running Talos Linux within this project using the project’s default VPC.

  • Oxide API credentials. Refer to Authentication for instructions on generating and using API credentials.

Infrastructure Provider Design

The infrastructure provider is a dynamic provider, meaning it provisions and deprovisions Oxide instances on demand as Omni users create and scale Kubernetes clusters.

The infrastructure provider is designed to manage resources within a single Oxide silo. To support multiple Oxide silos, run a separate instance of the infrastructure provider for each silo, each with a different provider ID. The infrastructure provider uses machine classes to specify the Oxide project that instances are provisioned within.

This guide will run a single infrastructure provider with provider ID oxide that’s connected to a single Oxide silo specified by OXIDE_HOST and OXIDE_TOKEN.
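For reference, a second silo could be handled by a second provider instance along these lines. This is a sketch: the silo URL, token, and provider ID below are placeholders, the second provider must first be registered in Omni with omnictl infraprovider create, and it assumes the container entrypoint forwards flags such as --provider-id (which appears in the provider's logged configuration later in this guide).

```shell
# Sketch: a second provider instance pointed at a second silo.
# The host, token, and provider ID are placeholders.
export OXIDE_HOST='https://oxide.sys.second-silo.example.com'
export OXIDE_TOKEN='oxide-token-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'

docker run --rm \
  --env OMNI_ENDPOINT \
  --env OMNI_SERVICE_ACCOUNT_KEY \
  --env OXIDE_HOST \
  --env OXIDE_TOKEN \
  ghcr.io/oxidecomputer/omni-infra-provider-oxide:${TAG} \
  --provider-id=oxide-second-silo
```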

Install and configure omnictl

  1. Log into Omni and navigate to the home page.

  2. Click the Download omnictl button to download the omnictl binary for your operating system.

    1. Place the omnictl binary on your PATH.

    2. Ensure the omnictl binary is executable.

  3. Click the Download omniconfig button to download omniconfig.yaml. This is used to authenticate omnictl with Omni.

    1. Place the omniconfig.yaml file in your current working directory.

  4. Run the following command to begin the authentication flow between omnictl and Omni.

    omnictl --omniconfig omniconfig.yaml user list
  5. omnictl will prompt you to log in via a web browser. Log in and click Grant Access to complete authentication. Once finished, the omnictl output will look similar to the following.

    Could not authenticate: open /home/example/.talos/keys/default-example@oxidecomputer.com.pgp: no such file or directory
    Attempting to open URL: https://example.na-west-1.omni.siderolabs.io/authenticate?flow=cli&public-key-id=46936543e7a4beab36b344093dd67976b2420014
    Opening in existing browser session.
    Public key 46936543e7a4beab36b344093dd67976b2420014 is now registered for user example@oxidecomputer.com
    PGP key saved to /home/example/.talos/keys/default-example@oxidecomputer.com.pgp
    ID                                     EMAIL                       ROLE    LABELS
    4e1aa1fc-0219-4cfe-9836-82213495cde6   example@oxidecomputer.com   Admin
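For reference, placing omnictl on your PATH and making it executable (steps 2 and 3 above) might look like the following on Linux; the download path and binary name are examples that vary by operating system and architecture.

```shell
# Example paths; adjust the filename for your OS and architecture.
chmod +x ~/Downloads/omnictl-linux-amd64
sudo mv ~/Downloads/omnictl-linux-amd64 /usr/local/bin/omnictl

# Confirm the binary is found on PATH.
omnictl --help
```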

Create the Infrastructure Provider

  1. Run the following command to create the infrastructure provider.

    omnictl --omniconfig omniconfig.yaml infraprovider create oxide
  2. A service account for the infrastructure provider will be created and its credentials shown as the OMNI_ENDPOINT and OMNI_SERVICE_ACCOUNT_KEY environment variables.

    Your infra provider "oxide" is ready to use
    Created infra provider service account "infra-provider:oxide" with public key ID "74869a01552c3f81a7c436966f6d95241663de57"

    Set the following environment variables to use the service account:
    OMNI_ENDPOINT=https://oxide.na-west-1.omni.siderolabs.io:443
    OMNI_SERVICE_ACCOUNT_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX==

    Note: Store the service account key securely, it will not be displayed again
    Please use the endpoint and the service account key to set up the infra provider
  3. Take note of the OMNI_ENDPOINT and OMNI_SERVICE_ACCOUNT_KEY environment variables. They will be used in the Run the Infrastructure Provider section.

Warning
Exporting the OMNI_ENDPOINT and OMNI_SERVICE_ACCOUNT_KEY environment variables will conflict with --omniconfig and prevent omnictl from working. Export them only in the new shell session you'll use during Run the Infrastructure Provider.

Run the Infrastructure Provider

  1. Create a new shell session separate from the shell session where you were running omnictl. This is needed for the following reasons.

    1. The infrastructure provider will run as a long-running service.

    2. The OMNI_* environment variables conflict with --omniconfig.

  2. Export the OMNI_ENDPOINT and OMNI_SERVICE_ACCOUNT_KEY environment variables.

    export OMNI_ENDPOINT=https://oxide.na-west-1.omni.siderolabs.io:443
    export OMNI_SERVICE_ACCOUNT_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX==
  3. Export the OXIDE_HOST and OXIDE_TOKEN environment variables.

    export OXIDE_HOST='https://oxide.sys.example.com'
    export OXIDE_TOKEN='oxide-token-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
  4. Run the infrastructure provider. Update ${TAG} to the latest stable release of the Oxide infrastructure provider (without the v prefix), which can be found on the GitHub releases page.

    docker run --rm \
      --env OMNI_ENDPOINT \
      --env OMNI_SERVICE_ACCOUNT_KEY \
      --env OXIDE_HOST \
      --env OXIDE_TOKEN \
      ghcr.io/oxidecomputer/omni-infra-provider-oxide:${TAG}
  5. The infrastructure provider will log the following if it started successfully.

    {"level":"info","ts":1766026094.796517,"caller":"src/main.go:60","msg":"successfully parsed configuration","configuration":"    --build=v0.1.0\n    --desc=Oxide Omni infrastructure provider.\n    --omni-endpoint=https://example.na-west-1.omni.siderolabs.io\n    --omni-service-account-key=xxxxxx\n    --oxide-host=https://oxide.sys.example.com\n    --oxide-token=xxxxxx\n    --provider-description=Oxide Omni infrastructure provider.\n    --provider-id=oxide\n    --provider-name=Oxide\n    --provisioner-concurrency=1"}
    {"level":"info","ts":1766026094.7965825,"caller":"src/main.go:98","msg":"starting..."}
  6. In the previous shell session where you were running omnictl commands, verify the oxide infrastructure provider is listed, connected, and has no errors.

    $ omnictl --omniconfig omniconfig.yaml infraprovider list
    ID      NAME    DESCRIPTION                           CONNECTED   ERROR
    oxide   Oxide   Oxide Omni infrastructure provider.   true
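If you'd rather not dedicate a shell session to the provider, the same container can be run detached. This is a standard Docker pattern rather than anything specific to the provider; the container name below is an example.

```shell
# Run detached with a name so the container can be inspected and
# stopped later; logs remain available via `docker logs`.
docker run -d --name omni-infra-provider-oxide \
  --env OMNI_ENDPOINT \
  --env OMNI_SERVICE_ACCOUNT_KEY \
  --env OXIDE_HOST \
  --env OXIDE_TOKEN \
  ghcr.io/oxidecomputer/omni-infra-provider-oxide:${TAG}

# Follow the startup logs.
docker logs -f omni-infra-provider-oxide

# Stop the provider when finished.
docker stop omni-infra-provider-oxide
```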

Create the Machine Class

The machine class specifies the configuration for the Talos Linux instances that the infrastructure provider provisions.

  1. Create a machine-class.yaml file with the following content. Replace ${OXIDE_PROJECT} with the name of your Oxide project.

    ---
    metadata:
      namespace: default
      type: MachineClasses.omni.sidero.dev
      id: oxide
    spec:
      autoprovision:
        providerid: oxide
        providerdata: |
          vcpus: 2
          project: ${OXIDE_PROJECT}
          memory: 8
          disk_size: 10
          vpc: default
          subnet: default
  2. Apply the manifest to create the machine class.

    omnictl --omniconfig omniconfig.yaml apply --file machine-class.yaml
  3. Verify the machine class was successfully created.

    $ omnictl --omniconfig omniconfig.yaml get machineclass
    NAMESPACE   TYPE           ID      VERSION
    default     MachineClass   oxide   1
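If you keep ${OXIDE_PROJECT} as a literal placeholder in a template file, one way to substitute it is with sed. The template filename and project name below are examples.

```shell
# Substitute the project placeholder into a concrete manifest.
# `machine-class.yaml.tmpl` and `my-project` are example names.
export OXIDE_PROJECT=my-project
sed "s/\${OXIDE_PROJECT}/${OXIDE_PROJECT}/" machine-class.yaml.tmpl > machine-class.yaml
```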

Create the Kubernetes Cluster

  1. Create a kubernetes-cluster.yaml file with the following content. This will create a single-node Kubernetes cluster named example using the oxide infrastructure provider.

    # Kubernetes cluster object.
    ---
    metadata:
      namespace: default
      type: Clusters.omni.sidero.dev
      id: example
    spec:
      talosversion: 1.11.6
      kubernetesversion: 1.34.3

    # Untaint control plane nodes.
    ---
    metadata:
      namespace: default
      type: ConfigPatches.omni.sidero.dev
      id: example-control-planes-untaint
      labels:
        omni.sidero.dev/system-patch:
        omni.sidero.dev/cluster: example
        omni.sidero.dev/machine-set: example-control-planes
    spec:
      data: |
        cluster:
          allowSchedulingOnControlPlanes: true

    # 0 worker nodes.
    ---
    metadata:
      namespace: default
      type: MachineSets.omni.sidero.dev
      id: example-workers
      labels:
        omni.sidero.dev/cluster: example
        omni.sidero.dev/role-worker:
    spec:
      updatestrategy: 1

    # 1 control plane node using the `oxide` infrastructure provider.
    ---
    metadata:
      namespace: default
      type: MachineSets.omni.sidero.dev
      id: example-control-planes
      labels:
        omni.sidero.dev/cluster: example
        omni.sidero.dev/role-controlplane:
    spec:
      updatestrategy: 1
      machineallocation:
        name: oxide
        machinecount: 1
  2. Apply the manifest to create the Kubernetes cluster.

    omnictl --omniconfig omniconfig.yaml apply --file kubernetes-cluster.yaml
  3. Verify the Kubernetes cluster was successfully created.

    $ omnictl --omniconfig omniconfig.yaml cluster status example
    Cluster "example"   RUNNING   Ready (1/1) (healthy/total)
    ├── Kubernetes Upgrade   Done
    ├── Talos Upgrade   Done
    ├── Control Plane "example-control-planes"   Running   Ready (1/1)
    │   └── Machine "41a5f638-f96e-4647-851c-2ded00660d02"   RUNNING   Ready
    ├── Load Balancer   Ready
    ├── Status Checks   OK
    └── Workers "example-workers"   Running   Ready (0/0)
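To provision Oxide-backed workers as well, the same machineallocation block could, in principle, be added to the example-workers machine set with a non-zero count. The following is a sketch extrapolated from the control-plane machine set above, not a tested configuration; verify the field names against the Omni machine set documentation before applying.

```yaml
# Sketch: 2 worker nodes allocated from the `oxide` machine class.
---
metadata:
  namespace: default
  type: MachineSets.omni.sidero.dev
  id: example-workers
  labels:
    omni.sidero.dev/cluster: example
    omni.sidero.dev/role-worker:
spec:
  updatestrategy: 1
  machineallocation:
    name: oxide
    machinecount: 2
```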

Run a Workload on the Oxide Kubernetes Cluster

  1. Follow Use kubectl with Omni to connect to the Kubernetes cluster created in Create the Kubernetes Cluster.

  2. Run an example workload on the Kubernetes cluster.

    $ kubectl apply -f https://k8s.io/examples/application/deployment.yaml
    deployment.apps/nginx-deployment created
  3. Verify the example workload reports ready.

    $ kubectl get deployment
    NAME               READY   UP-TO-DATE   AVAILABLE   AGE
    nginx-deployment   2/2     2            2           85s
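From here, standard kubectl commands work against the cluster. For example, the deployment created above can be scaled up; nginx-deployment is the name created by the example manifest.

```shell
# Scale the example deployment to 3 replicas and wait for the
# rollout to complete.
kubectl scale deployment nginx-deployment --replicas=3
kubectl rollout status deployment nginx-deployment
```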

Delete the Kubernetes Cluster

  1. Use omnictl cluster delete to delete the Kubernetes cluster.

    $ omnictl --omniconfig omniconfig.yaml cluster delete example
    * tearing down Clusters.omni.sidero.dev(example)
    * destroyed Clusters.omni.sidero.dev(example)

Stop the Infrastructure Provider

  1. Navigate to the shell session that’s running the infrastructure provider.

  2. Stop the infrastructure provider using Ctrl+C.