Job Orchestration

Submit jobs via the dashboard or API. BullMQ-backed dispatch, real-time log streaming over SSE or WebSocket, automatic artifact storage.

Running a Python script on one box is easy; running thousands across a shared fleet, enforcing timeouts, capturing artifacts, and preserving logs for audit is where ad-hoc scripts collapse. Stout orchestrates jobs with a BullMQ-backed queue that respects resource locks, capability requirements, and per-job timeouts. Submit a job through the dashboard, the REST API, or a CI webhook, and the control plane places it in a queue keyed to the right box or box group.
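A minimal sketch of what that routing could look like. The submission shape and queue-key scheme below are illustrative assumptions, not Stout's actual API: the point is that a job pinned to a box, targeted at a box group, or left unconstrained lands in a differently keyed queue.

```typescript
// Hypothetical job submission shape; field names are illustrative,
// not Stout's documented request body.
interface JobSubmission {
  script: string;          // storage key of the uploaded script bundle
  boxId?: string;          // pin to a specific box...
  boxGroup?: string;       // ...or target a box group
  capabilities?: string[]; // e.g. ["gpu", "python3.11"]
  timeoutMs?: number;      // overrides the one-hour default
}

// Sketch of keying a BullMQ-style queue so the job waits for the
// right box or box group (assumed naming convention).
function queueKeyFor(job: JobSubmission): string {
  if (job.boxId) return `jobs:box:${job.boxId}`;
  if (job.boxGroup) return `jobs:group:${job.boxGroup}`;
  return "jobs:any"; // any box that satisfies the capability list
}
```

With a scheme like this, the dispatcher only has to watch the queues for boxes it currently holds locks on.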

When a suitable box is free, the dispatcher opens an authenticated channel to the Lager runtime, uploads the script bundle, and begins streaming stdout, stderr, and the custom output channel back to the control plane. Clients — the dashboard, the CLI, or your own tooling — subscribe over Server-Sent Events or WebSocket and see lines appear in real time with sub-second latency. Long-running jobs send a keepalive every 20 seconds so reverse proxies do not time out.
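To make the streaming concrete, here is a small parser for an SSE-shaped log stream. The event names (`stdout`, `stderr`, `output`) and the use of SSE comment lines as keepalives are assumptions about the wire format, not Stout's documented protocol.

```typescript
interface LogEvent {
  channel: string; // assumed: "stdout" | "stderr" | "output"
  line: string;
}

// Parse one chunk of an SSE stream into log events. SSE events are
// blocks separated by a blank line; lines starting with ":" are
// comments, which servers commonly use as keepalives.
function parseSseChunk(chunk: string): LogEvent[] {
  const events: LogEvent[] = [];
  for (const block of chunk.split("\n\n")) {
    let channel = "stdout"; // assumed default when no event field is sent
    const data: string[] = [];
    for (const field of block.split("\n")) {
      if (field.startsWith(":")) continue; // keepalive/comment, ignore
      if (field.startsWith("event:")) channel = field.slice(6).trim();
      if (field.startsWith("data:")) data.push(field.slice(5).trim());
    }
    if (data.length > 0) events.push({ channel, line: data.join("\n") });
  }
  return events;
}
```

A browser client would feed `EventSource` messages through logic like this; the keepalive comments never surface as events, which is exactly what keeps intermediaries from closing the connection without polluting the log.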

Artifacts land in object storage. When the job generates output files, Stout creates a presigned S3 upload URL, the box streams the artifact directly to storage, and the control plane records the checksum, size, and storage key. Artifacts are scoped to the job, the organization, and the role-based access control policy that applied when the job ran, so a revoked team member cannot retrieve historical outputs.
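The bookkeeping half of that flow can be sketched as follows. The record shape is an assumption for illustration, and the presigned-URL step itself (typically an S3 SDK call) is omitted; what matters is that checksum and size are computed and persisted alongside the storage key.

```typescript
import { createHash } from "node:crypto";

// Hypothetical artifact record; field names are illustrative,
// not Stout's actual schema.
interface ArtifactRecord {
  jobId: string;
  storageKey: string;
  sizeBytes: number;
  sha256: string;
}

// Build the record the control plane would persist once the box
// reports a completed upload.
function recordArtifact(jobId: string, storageKey: string, body: Buffer): ArtifactRecord {
  return {
    jobId,
    storageKey,
    sizeBytes: body.length,
    sha256: createHash("sha256").update(body).digest("hex"),
  };
}
```

Storing the checksum at upload time is what lets a later download be verified end to end without trusting the box that produced it.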

Every job has a timeout (default one hour, configurable per submission). A background timeout checker marks jobs that exceed their limit as `timed_out`, frees the resource lock, and emits a webhook event. Failed jobs, cancelled jobs, and jobs that could not find a suitable box are all distinct terminal states, so dashboards and alerting rules can act on the right signal. The full job lifecycle — queued, waiting for box, dispatching, running, uploading artifacts, completed — is persisted and searchable.
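The timeout sweep described above can be sketched as a pure function. State names mirror the lifecycle and terminal states listed in this section; the job shape itself is an assumption, and lock release plus webhook emission would happen alongside the state flip.

```typescript
type JobState =
  | "queued" | "waiting_for_box" | "dispatching" | "running"
  | "uploading_artifacts" | "completed"
  | "failed" | "cancelled" | "no_suitable_box" | "timed_out";

interface RunningJob {
  id: string;
  state: JobState;
  startedAt: number; // epoch ms
  timeoutMs: number; // defaults to one hour (3_600_000)
}

// Mark running jobs past their limit as timed_out and return their
// ids so the caller can free resource locks and emit webhook events.
function sweepTimeouts(jobs: RunningJob[], now: number): string[] {
  const timedOut: string[] = [];
  for (const job of jobs) {
    if (job.state === "running" && now - job.startedAt > job.timeoutMs) {
      job.state = "timed_out";
      timedOut.push(job.id);
    }
  }
  return timedOut;
}
```

Keeping `timed_out`, `failed`, `cancelled`, and `no_suitable_box` as distinct terminal states is what lets an alerting rule page on infrastructure problems without firing on user cancellations.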

See it running on your fleet

Book a demo and we will walk through job orchestration against a live Lager box group.