Applications and workloads
An application is the unit you deploy with edgible stack deploy. It is a single canonical-v3 YAML document describing one or more workloads, optionally some storage, and one or more access entries that connect the workloads to the public internet.
```yaml
apiVersion: v3
kind: Application
metadata:
  name: my-app
  organization: <org-id>
spec:
  placement: { ... }
  workloads: [ ... ]
  storage: [ ... ]   # optional
  access: [ ... ]
  cloud: { ... }     # optional, only when placement.strategy is cloud or automatic
```

The full field-by-field reference is in Application YAML (v3). This page covers the model.
Workloads
A workload is something that runs. Edgible supports five workload types — pick the one that matches how your code actually runs today rather than rewriting it.
compose — Docker Compose stack
```yaml
workloads:
  - name: api
    type: compose
    composeFile: ./docker-compose.yml
    ports:
      - { name: http, containerPort: 3000, protocol: tcp }
```

The agent runs docker compose up -d against the file you supply. Multi-container stacks, named volumes, environment files — all work the way you’d expect from Compose. Best fit for most production workloads.
docker — single container
```yaml
workloads:
  - name: redis
    type: docker
    image: redis:7-alpine
    ports:
      - { name: redis, containerPort: 6379, protocol: tcp }
```

For a one-image, no-extras case where authoring a Compose file is overkill. Behind the scenes the agent generates a minimal Compose file and uses the same orchestrator.
managed-process — agent runs a binary
```yaml
workloads:
  - name: worker
    type: managed-process
    command: "./bin/worker --mode production"
    workingDir: /opt/my-app
    logFile: /var/log/my-app/worker.log
    ports:
      - { name: metrics, containerPort: 9090, protocol: tcp }
```

The agent supervises a long-running command directly — no container. Restart-on-failure is handled by the agent. Best for systems languages that already produce a single binary, or anything you don’t want to containerise.
vm — virtual machine
```yaml
spec:
  storage:
    - { name: vm-disk, type: persistent, size: 20Gi, mobility: movable }
  workloads:
    - name: legacy
      type: vm
      vmBackend: qemu
      memory: 2048
      cpus: 2
      storage:
        - { name: vm-disk, mountPath: /disk }
```

For workloads that genuinely need a kernel of their own. qemu, firecracker, and wsl backends are supported. Declaring the disk via a storage[] mount (rather than an inline diskImage path) is the recommended form — it’s required if you want to migrate the VM between devices.
pre-existing — point at something already running
```yaml
workloads:
  - name: existing
    type: pre-existing
    hostPort: 5432
    ports:
      - { name: postgres, containerPort: 5432, protocol: tcp }
```

Edgible doesn’t start or stop the process — it assumes a service is already listening on hostPort on the device’s loopback. Use this for systemd-managed daemons, processes you supervise yourself, or anything you’d rather Edgible not touch.
Multiple workloads in one application
A single application can contain several workloads. They are deployed in dependency order — declare dependsOn on a workload to make it wait for another to be ready before starting.
```yaml
workloads:
  - name: db
    type: compose
    composeFile: ./db.yml
  - name: api
    type: compose
    composeFile: ./api.yml
    dependsOn: [db]
```

This is the right shape for an app where the pieces are tightly coupled and always deployed together. For loosely-coupled apps that should still be deployed in order, prefer separate applications joined by metadata.dependsOn — see Stack with dependencies.
Where it runs
spec.placement chooses where the application is scheduled.
```yaml
# A device you own
placement:
  strategy: serving-device
  deviceSelector: { deviceName: my-first }
```
```yaml
# Edgible-operated cloud infrastructure
placement:
  strategy: cloud
  region: us-east-1
```

Cloud placement runs your application on a Firecracker microVM that Edgible operates in the region you choose. From the application’s perspective the model is the same as a device you own — same WireGuard mesh, same Caddy + workload model — but you don’t run any hardware. See Cloud hosting.
Storage
Persistent storage is declared at the application level and referenced by workloads:
```yaml
spec:
  storage:
    - { name: pgdata, type: persistent, size: 20Gi, mobility: movable }
  workloads:
    - name: db
      type: compose
      storage:
        - { name: pgdata, mountPath: /var/lib/postgresql/data }
```

The agent provisions the volume on the host and exposes it to the workload via an EDGIBLE_STORAGE_<NAME> environment variable. Storage is bound to the device the application is placed on; the mobility field controls whether it can move with the application. See Migrate between devices for the full migration flow.
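As an illustration of how a workload might consume that variable, here is a hypothetical docker-compose.yml for the db workload above. The variable name (PGDATA, upper-cased from pgdata) and its use as a bind-mount source are assumptions for the sketch, not a documented contract:

```yaml
# Hypothetical docker-compose.yml for the "db" workload above.
# Assumption: EDGIBLE_STORAGE_PGDATA holds the host path the agent
# provisioned for the "pgdata" volume, usable as a bind-mount source.
services:
  db:
    image: postgres:16
    volumes:
      - ${EDGIBLE_STORAGE_PGDATA}:/var/lib/postgresql/data
```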
Reconciliation and versions
When you stack deploy, the CLI POSTs the application document to the control plane. The control plane stores it as an immutable declaration version, then pushes an application_update to the agent on the placed device. The agent compares the new desired state to the current actual state and applies the diff:
- Workloads that don’t exist yet are created.
- Workloads whose definition has changed are recreated.
- Workloads that no longer appear are torn down.
- Caddy is reconfigured to match the new access entries.
- TLS certificates are requested for any new hostnames.
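The workload part of that diff amounts to set comparisons over workload names plus an equality check on definitions. A minimal sketch in Python (illustrative only, an assumed model rather than the agent's actual code):

```python
# Sketch of the reconcile diff: compare desired vs. actual workload state.
def diff_workloads(desired: dict, actual: dict):
    """Return (to_create, to_recreate, to_teardown) workload names.

    desired/actual map workload name -> definition (any comparable value).
    """
    to_create = [n for n in desired if n not in actual]
    to_recreate = [n for n in desired if n in actual and desired[n] != actual[n]]
    to_teardown = [n for n in actual if n not in desired]
    return to_create, to_recreate, to_teardown

# Hypothetical states: "db" is new, "api" changed, "old-cron" was removed.
desired = {"db": {"image": "postgres:16"}, "api": {"image": "api:v2"}}
actual = {"api": {"image": "api:v1"}, "old-cron": {"image": "cron:1"}}
print(diff_workloads(desired, actual))
# → (['db'], ['api'], ['old-cron'])
```

Because the diff is computed from the desired state each time, re-applying an unchanged document yields three empty sets, which is what makes the deploy path idempotent.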
Every deploy creates a new declaration version with a monotonically increasing number. The platform records the version with each operation, and edgible application rollback can revert the declaration to a prior version.
The deployment is declarative: re-applying the same YAML produces the same state. Re-applying a modified YAML moves the system toward the new state. There is no imperative “restart this workload” command in the deployment path — change the desired state and let the agent reconcile.