Studio is the local web UI you get when you run `arkor dev`. It is not a separate service to sign into; it boots on your machine, talks to the same Arkor CLI process, and goes away when you stop the dev server.
What Studio is for
Three jobs:

- Start a training run. A “Run training” button submits the job to the managed backend by spawning `arkor start` under the hood. `arkor start` runs the existing `.arkor/build/index.mjs` artifact (and only auto-builds it when missing); see the dev-loop note below for how Studio keeps that artifact fresh.
- See training happen. A jobs list with live status, a loss chart that updates as the run streams in, and a tail of training events. You can leave it open in a tab while you work on other things.
- Try a finished model. A Playground page lets you pick the base model or the final adapter from any completed job and chat with it. The Playground does not load intermediate checkpoints; for mid-run inference, use `onCheckpoint` callbacks in your trainer.
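The `onCheckpoint` hook is the escape hatch for acting on intermediate checkpoints. A minimal sketch of what such a callback could look like — the `Checkpoint` shape and the `makeOnCheckpoint` helper are hypothetical illustrations, not the real Arkor trainer API:

```typescript
// Hypothetical checkpoint payload; the real Arkor trainer API may differ.
type Checkpoint = { step: number; adapterId: string };

// Returns a callback you could hand to a trainer's onCheckpoint option.
// `log` stands in for whatever you do with the intermediate adapter
// (e.g. kick off an eval or an inference smoke test).
export function makeOnCheckpoint(log: (msg: string) => void) {
  return (ckpt: Checkpoint): void => {
    log(`checkpoint at step ${ckpt.step}: adapter ${ckpt.adapterId}`);
  };
}
```

The point is only that mid-run work lives in your trainer code, not in Studio.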
Dev-loop note: the `/api/manifest` endpoint rebuilds and re-imports your trainer on every request (with a cache-bust query; see `packages/arkor/src/studio/manifest.ts`), but the UI only fetches it when the Run training page mounts. So if you edit `src/arkor/` and stay on the same Run training page, the next click reuses the existing `.arkor/build/index.mjs` and runs your old code. Refresh the page (or run the build script from the terminal: `pnpm build` / `npm run build` / `yarn build` / `bun run build`) between edits and clicks to pick up the new code reliably.
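The cache-bust re-import trick described above can be sketched in a few lines. This is an illustrative standalone version, not the actual code in `packages/arkor/src/studio/manifest.ts`:

```typescript
import { pathToFileURL } from "node:url";

// Node's ESM loader caches modules by full resolved URL, query string
// included. Appending a fresh query value forces the file on disk to be
// re-evaluated instead of served from the module cache.
let seq = 0;

export async function freshImport(modulePath: string): Promise<unknown> {
  const url = pathToFileURL(modulePath);
  url.searchParams.set("v", String(++seq));
  return import(url.href);
}
```

Note the trade-off: every stale copy stays in the cache for the life of the process, which is fine for a dev server but not a pattern for production code.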
Where Studio runs
When you start `arkor dev`, the CLI:

- Boots a Hono server on `127.0.0.1:4000` (use `-p` to change the port).
- Serves a Vite + React SPA from the same origin, so the UI talks to the CLI through `/api/*` on loopback.
- Issues a per-launch CSRF token (saved to `~/.arkor/studio-token` with mode `0600`) and requires every request to present it.
The server also validates the Host header on every request. There is no public URL, no inbound connection, and the token rotates every time you start `arkor dev`. You can run Studio safely on a shared dev machine without worrying about exposing your training data.
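From a client's side, presenting the token is just an extra request header. As a sketch — the header name `x-arkor-csrf` is an assumption here, not the documented wire format:

```typescript
// Build headers for a loopback /api/* request, attaching the per-launch
// CSRF token read from ~/.arkor/studio-token. The header name is
// hypothetical; check the Studio client for the real one.
export function csrfHeaders(
  token: string,
  extra: Record<string, string> = {},
): Record<string, string> {
  return {
    ...extra,
    // trim() tolerates a trailing newline in the token file.
    "x-arkor-csrf": token.trim(),
  };
}
```

Because the token file is mode `0600` and rotates per launch, only your own user on that machine can construct a valid request.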
How Studio fits with the managed backend
Studio is just a viewer for what your CLI is doing. The CLI talks to the managed backend over authenticated HTTPS; Studio asks the CLI (over loopback) what to render. Your backend credentials live in `~/.arkor/credentials.json`, and Studio inherits them by virtue of running locally.
If `~/.arkor/credentials.json` is missing, the entry point decides what to do. `arkor dev` bootstraps an anonymous session at launch and prints a one-line hint pointing at `arkor login --oauth` so you can upgrade to a real account whenever you want; it never auto-launches the OAuth flow. The one outage case where it does not bootstrap is when `/v1/auth/cli/config` itself is unreachable on first run: the same transport error is rethrown and `arkor dev` exits fast (see `arkor dev` for the exact recovery story). The Studio server’s lazy bootstrap (when an `/api/*` request arrives before credentials are on disk) does the same anonymous fallback. To use an account session, run `arkor login --oauth` separately before (or after) clicking around in Studio; the credentials file is shared, so Studio picks up the account session on its next request.
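The decision order above — use the on-disk credentials when present, otherwise fall back to an anonymous session — can be sketched like this. The file shape and function names are assumptions for illustration:

```typescript
import { readFileSync } from "node:fs";

// Hypothetical credential shape; the real file format is not documented here.
type Credentials = { token: string; kind: "account" | "anonymous" };

export function loadCredentials(
  path: string,
  bootstrapAnonymous: () => Credentials,
): Credentials {
  try {
    // An account or anonymous session persisted by a previous run
    // or by `arkor login --oauth`.
    return JSON.parse(readFileSync(path, "utf8")) as Credentials;
  } catch {
    // Missing or unreadable file: bootstrap an anonymous session,
    // mirroring the lazy fallback described above.
    return bootstrapAnonymous();
  }
}
```

Note that an error thrown inside `bootstrapAnonymous` itself propagates to the caller, which matches the fast-exit behavior when the auth config endpoint is unreachable.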
What you actually see
The current views are intentionally small:

- Jobs. Status, name, created time, and ID, polled every few seconds.
- Job detail. Loss chart, log tail (most recent events), and live status. Streamed via Server-Sent Events so it stays current without manual refresh.
- Playground. Adapter selector (base model or the final adapter from any completed job), a chat UI, and a streaming response. Calls flow through the CLI, then to the managed inference endpoint. The Playground only lists jobs that have finished. To run inference against an intermediate checkpoint while a run is still in flight, use `onCheckpoint` callbacks instead.
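The job detail view stays current via Server-Sent Events: each frame on the stream is a few `event:`/`data:` lines separated by a blank line. A minimal frame parser, purely to illustrate the wire format (the actual event names Studio emits are not specified here):

```typescript
// Parse one SSE frame into its event name and joined data payload.
// Per the SSE format, multiple data: lines are joined with "\n".
export function parseSseFrame(frame: string): { event?: string; data: string } {
  let event: string | undefined;
  const data: string[] = [];
  for (const line of frame.split("\n")) {
    if (line.startsWith("event:")) {
      event = line.slice("event:".length).trim();
    } else if (line.startsWith("data:")) {
      // Strip the single optional space after the colon.
      data.push(line.slice("data:".length).replace(/^ /, ""));
    }
  }
  return { event, data: data.join("\n") };
}
```

In the browser, `EventSource` does this parsing for you; the sketch only shows what travels over the loopback connection.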
When not to use Studio
Studio is a development tool. It runs on your machine, only on loopback, and only while `arkor dev` is up. For production usage of a fine-tuned model you would call `infer` from your own application code (or wire it into whatever serving layer you ship), not point users at Studio.