

Clawker containers are more than a simple docker run. Every container goes through a multi-phase initialization that sets up workspace mirroring, git integration, session persistence, credential forwarding, and security controls. This page explains what happens under the hood and why.

Container Lifecycle

When you run clawker run @ or clawker container create @, the container goes through four host-side phases before clawkerd takes over inside:
  1. Workspace — resolve the working directory, set up mounts, ensure volumes
  2. Config — initialize the Claude Code config volume (first run only)
  3. Environment — start the host proxy, forward git credentials, resolve environment variables
  4. Container — validate flags, build Docker configs, create the container, stream bootstrap material (mTLS leaf cert, CA cert, Hydra JWT) into the container’s writable layer, then start it
After docker start, the control plane attaches eBPF firewall programs from outside and dials into the container’s mTLS listener to drive a fifth phase inside the container — environment wiring, MCP setup, any agent.post_init script you’ve configured — before issuing the terminal AgentReady that tells clawkerd to fork the user CMD (Claude Code). Each host-side phase streams progress events to the terminal; clawkerd renders the in-container init steps on the same attached TTY so you see one unified boot log.
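The host-side phases above are a straightforward ordered pipeline with per-phase progress events. A minimal sketch of that shape (the phase names come from the docs; the `phase` type, `runPhases` helper, and event format are illustrative assumptions, not Clawker's actual API):

```go
package main

import "fmt"

// phase is one host-side init step; names mirror the four phases above.
type phase struct {
	name string
	run  func() error
}

// runPhases executes phases in order, emitting a progress event per phase,
// and stops at the first failure so later phases never run on broken state.
func runPhases(phases []phase, progress func(string)) error {
	for _, p := range phases {
		progress("▶ " + p.name)
		if err := p.run(); err != nil {
			progress("✗ " + p.name)
			return fmt.Errorf("%s: %w", p.name, err)
		}
		progress("✓ " + p.name)
	}
	return nil
}

func main() {
	ok := func() error { return nil }
	phases := []phase{
		{"workspace", ok}, {"config", ok}, {"environment", ok}, {"container", ok},
	}
	_ = runPhases(phases, func(ev string) { fmt.Println(ev) })
}
```

Fail-fast ordering matters here: if the workspace phase can't resolve mounts, there is no point building Docker configs or streaming bootstrap material.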

Create vs Start vs Run

| Command | What it does |
| --- | --- |
| clawker container create | Runs all four host-side init phases, produces a stopped container with bootstrap material staged |
| clawker container start | Starts an existing container — clawkerd boots, CP attaches eBPF and dispatches CP-driven init, then forks the user CMD |
| clawker run | Create + Start in one step |
The host-side init phases only run at creation time. Starting, stopping, and restarting a container does not re-run the host-side phases — your session data, config, and command history survive restarts. CP-driven init steps that need to re-run on every start (env wiring, etc.) are dispatched every time clawkerd boots; steps that should only run once (like your agent.post_init script) are gated by a marker file on the config volume.

Workspace Mounting

The most important mount is the workspace — your project source code made available inside the container. Clawker supports two workspace modes:

Bind Mode (default)

Your host directory is mounted directly into the container as a live bind mount. Changes on either side are visible immediately. This is the default because it gives Claude Code real-time access to your latest code.

Snapshot Mode

A one-time copy of your project is placed into a Docker volume. The container works on an isolated snapshot — changes inside the container don’t affect your host, and vice versa. Useful when you want Claude to experiment without risk.

Path Mirroring

Here’s something subtle but important: the workspace is not mounted at a generic path like /workspace. Instead, Clawker mirrors your host’s actual directory structure inside the container. If your project lives at /Users/schmitthub/Code/myapp on the host, it appears at /Users/schmitthub/Code/myapp inside the container too. The container’s working directory is set to match.

Why? Claude Code tracks sessions by the current working directory. When you use /resume to pick up a previous conversation, it discovers your project’s git worktrees and looks for session files that match those paths. If the container used a synthetic path like /workspace, those session lookups would fail — the paths wouldn’t match what git reports, and Claude Code would say “No conversations found.” By mirroring the real host path, everything lines up naturally: sessions created in the container are findable by /resume.

Git Integration

Clawker makes git work seamlessly inside containers, even for advanced setups like worktrees.

Standard Repositories

For a normal (non-worktree) project, the .git directory is part of your workspace and gets mounted along with everything else. Git commands inside the container work exactly as they do on your host.

Worktree Support

Git worktrees are more complex. A worktree is a separate checkout of your repository that shares the same .git metadata as the main repo. The worktree directory contains a .git file (not a directory) that points back to the main repository’s .git/worktrees/<name>/ metadata. The challenge: those .git file references use absolute host paths. If the main repo is at /Users/schmitthub/Code/myapp, the worktree’s .git file says something like:
gitdir: /Users/schmitthub/Code/myapp/.git/worktrees/feature-branch
For git to work inside the container, that path must resolve. Clawker handles this by mounting the main repository’s .git directory at its original absolute path inside the container. The mount source and target are identical — if the .git directory lives at /Users/schmitthub/Code/myapp/.git on the host, it appears at exactly that path in the container. Combined with path mirroring for the worktree directory itself, git commands work correctly: the .git file’s reference resolves, git finds the shared metadata, and operations like git log, git status, and git commit all behave normally.
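The resolution step is mechanical: read the worktree's .git file, take the path after `gitdir:`, and strip the `/worktrees/<name>` suffix to find the main repository's .git directory — the path that must be mounted at an identical target. A sketch under those assumptions (function names are illustrative, not Clawker's):

```go
package main

import (
	"fmt"
	"strings"
)

// parseGitdir extracts the gitdir path from the contents of a worktree's
// .git file, e.g. "gitdir: /Users/.../.git/worktrees/feature-branch".
func parseGitdir(contents string) (string, bool) {
	line := strings.TrimSpace(contents)
	if !strings.HasPrefix(line, "gitdir:") {
		return "", false
	}
	return strings.TrimSpace(strings.TrimPrefix(line, "gitdir:")), true
}

// mainRepoGitDir walks up from the worktree metadata directory
// (.git/worktrees/<name>) to the main repository's .git directory,
// which must be mounted at the same absolute path in the container.
func mainRepoGitDir(worktreeGitdir string) string {
	i := strings.LastIndex(worktreeGitdir, "/worktrees/")
	if i < 0 {
		return worktreeGitdir
	}
	return worktreeGitdir[:i]
}

func main() {
	dir, _ := parseGitdir("gitdir: /Users/schmitthub/Code/myapp/.git/worktrees/feature-branch\n")
	fmt.Println(mainRepoGitDir(dir)) // /Users/schmitthub/Code/myapp/.git
}
```

Because the mount source and target are identical, the absolute path embedded in the .git file resolves unchanged inside the container.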

Credential Forwarding

Git credentials are forwarded into the container automatically based on your project’s security settings:
  • HTTPS — Clawker runs a host proxy that the container’s git credential helper calls through. Your host’s git credentials are never copied into the container.
  • SSH — SSH agent forwarding is handled via a socket bridge (muxrpc over docker exec). Your SSH keys stay on the host.
  • GPG — GPG agent forwarding works the same way as SSH, via the socket bridge.
  • Git config — Your ~/.gitconfig is bind-mounted read-only at /tmp/host-gitconfig inside the container. During CP-driven init, clawkerd reads that file, filters out any [credential] sections, and writes the sanitized result to ~/.gitconfig inside the container (which then uses its own credential forwarding).

Session Persistence

Claude Code sessions are preserved across container restarts through persistent Docker volumes.

Config Volume

Each agent gets a dedicated config volume (named clawker.<project>.<agent>-config) mounted at ~/.claude inside the container. This volume stores:
  • Session transcripts — Full conversation history as JSONL files under projects/<mangled-cwd>/ (where the directory name is derived from the working directory path, with non-alphanumeric characters replaced by hyphens)
  • Config state — The .config.json file tracking the last session ID, startup count, and project metadata
  • Plugins and settings — Any Claude Code plugins or settings that persist across sessions
Because this is a Docker volume, it survives container removal and recreation. As long as you use the same project and agent name, your sessions are preserved.
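The mangled-cwd directory name described above can be sketched as a character map — every non-alphanumeric character becomes a hyphen. Illustrative only (Claude Code's exact rule may differ), but it shows why path mirroring makes session identifiers line up:

```go
package main

import (
	"fmt"
	"strings"
)

// mangleCwd derives a session directory name from a working-directory path:
// non-alphanumeric characters are replaced by hyphens.
func mangleCwd(cwd string) string {
	return strings.Map(func(r rune) rune {
		if r >= 'a' && r <= 'z' || r >= 'A' && r <= 'Z' || r >= '0' && r <= '9' {
			return r
		}
		return '-'
	}, cwd)
}

func main() {
	fmt.Println(mangleCwd("/Users/schmitthub/Code/myapp"))
	// -Users-schmitthub-Code-myapp
}
```

Since the identifier is derived from the cwd, a container whose cwd mirrors the host path produces exactly the identifier that host-path-based worktree discovery will later look up.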

History Volume

A second volume (clawker.<project>.<agent>-history) preserves shell command history at /commandhistory, so your bash history carries over between sessions.

How Resume Works

When you type /resume inside Claude Code, it needs to find previous sessions for the current project. Here’s the discovery process:
  1. Claude Code reads the .git metadata in the current working directory
  2. It discovers all git worktrees associated with the repository (the main checkout plus any worktrees)
  3. For each worktree path, it looks for session files stored under that path’s identifier
  4. It presents matching sessions for you to resume
This is why path mirroring matters. The working directory inside the container must match a real path that git worktree discovery returns. If the container used /workspace as its working directory, Claude Code would store sessions under a /workspace identifier, but the worktree discovery step would return host-absolute paths. The identifiers wouldn’t match, and resume would find nothing. With path mirroring:
  • Container cwd = /Users/schmitthub/Code/myapp (matches the host)
  • Sessions stored under the /Users/schmitthub/Code/myapp identifier
  • Git worktree discovery returns /Users/schmitthub/Code/myapp in its list
  • /resume finds the sessions
Multiple containers working on the same project (but different agents) each get their own sessions that are all discoverable via /resume.

First-Run Initialization

The first time a container is created for a given project+agent combination, Clawker initializes the config volume:
  1. Onboarding bypass — The container image includes a seed config that marks onboarding as complete, so Claude Code doesn’t show the first-run wizard
  2. Host config copy — If agent.claude_code.config.strategy is set to copy (the default), your host’s Claude Code plugin configuration, installed plugins, agents, skills, and custom commands are copied into the config volume
  3. Credential injection — If agent.claude_code.use_host_auth is enabled, your host’s Claude Code credentials are copied so the container can authenticate without re-login
On subsequent container recreations with the same agent name, the config volume already exists and this initialization is skipped. Your existing sessions and settings are preserved.

Post-Init Scripts

If your clawker.yaml includes a post_init script, it runs once during the in-container init phase — dispatched by the control plane to clawkerd as a ShellCommand before the user CMD is forked. A marker file on the config volume prevents it from running again on subsequent restarts. To re-run, delete the marker (~/.claude/post-initialized) or remove the config volume. This is useful for project-specific setup like installing dependencies or configuring MCP servers — anything that needs to run inside the container with agent.env available, before Claude Code starts.

Volumes and Naming

Clawker uses a consistent naming scheme for all Docker resources:
| Resource | Pattern | Example |
| --- | --- | --- |
| Container | clawker.&lt;project&gt;.&lt;agent&gt; | clawker.myapp.dev |
| Config volume | clawker.&lt;project&gt;.&lt;agent&gt;-config | clawker.myapp.dev-config |
| History volume | clawker.&lt;project&gt;.&lt;agent&gt;-history | clawker.myapp.dev-history |
| Workspace volume | clawker.&lt;project&gt;.&lt;agent&gt;-workspace | clawker.myapp.dev-workspace (snapshot mode only) |
All resources are tagged with labels (dev.clawker.project, dev.clawker.agent) for filtering and management. The clawker container ls command uses these labels to show only Clawker-managed containers.
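The naming scheme is simple enough to state as code — a sketch of builders following the documented pattern (the helper names are illustrative, not Clawker's API):

```go
package main

import "fmt"

// containerName follows the documented pattern clawker.<project>.<agent>.
func containerName(project, agent string) string {
	return fmt.Sprintf("clawker.%s.%s", project, agent)
}

// volumeName appends a per-volume suffix: config, history, or workspace.
func volumeName(project, agent, kind string) string {
	return containerName(project, agent) + "-" + kind
}

func main() {
	fmt.Println(containerName("myapp", "dev"))        // clawker.myapp.dev
	fmt.Println(volumeName("myapp", "dev", "config")) // clawker.myapp.dev-config
	fmt.Println(volumeName("myapp", "dev", "history")) // clawker.myapp.dev-history
}
```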

Volume Lifecycle

  • Config and history volumes persist independently of containers. Removing a container does not remove its volumes. Use clawker volume prune to clean up orphaned volumes.
  • Workspace volumes (snapshot mode only) are ephemeral and tied to the container lifecycle.
  • Volume cleanup on failure — If container creation fails partway through, only volumes created during that attempt are cleaned up. Pre-existing volumes with your session data are never touched.

Container Image

Clawker builds custom Docker images tailored to your project. The image includes:
  • A base image (default: buildpack-deps:bookworm-scm)
  • System packages you’ve specified (build.packages)
  • Claude Code (installed via npm)
  • An agent awareness prompt at /etc/claude-code/CLAUDE.md
  • A baked-in copy of clawkerd at /usr/local/bin/clawkerd — the per-container daemon that runs as PID 1
  • Credential helper binaries (git-credential-clawker, socket-bridge server)
  • A non-root user (claude, UID 1001) with sudo access
The container’s ENTRYPOINT is clawkerd. See Custom Images for build customization.

Agent Awareness Prompt

Every Clawker image includes a prompt file baked in at /etc/claude-code/CLAUDE.md. Claude Code automatically loads this file, giving the agent awareness of its containerized environment without any user configuration. The prompt tells the agent:
  • What it can do — read/write workspace files, run commands, install packages, use git (credentials forwarded from host)
  • What it cannot do — modify firewall rules, access the host filesystem outside the workspace, manage other containers
  • How the firewall works — DNS queries for unlisted domains return NXDOMAIN, connection failures mean the domain isn’t allowlisted
  • How to help the user — when the agent hits a blocked domain, it explains the problem and suggests the correct clawker firewall add, clawker firewall bypass, or clawker firewall disable command for the user to run on the host
  • Environment diagnostics — lists environment variables (CLAWKER_PROJECT, CLAWKER_AGENT, CLAWKER_WORKSPACE_MODE, CLAWKER_FIREWALL_ENABLED, etc.) the agent can inspect for troubleshooting
This creates a self-service experience: when a network connection fails, the agent diagnoses it and tells the user exactly what to do — no manual troubleshooting required.

clawkerd: The Container’s PID 1

Every agent container runs clawkerd as PID 1. clawkerd is the per-container supervisor — it owns the container’s lifecycle, terminates correctly under docker stop, signs into the clawker control plane over mTLS, and only then forks the user CMD (Claude Code, by default) with kernel-side privilege drop. clawkerd itself runs as root inside the container. It does not drop its own privileges — privilege drop happens kernel-side, between fork and exec, on the child it spawns. The supervisor stays root to: write /var/log/clawker/clawkerd.log (rotated), read its mTLS bootstrap material, reap reparented orphan zombies via Wait4(-1), and hold open the mTLS listener.
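The kernel-side privilege drop described above is the standard Go mechanism: setting SysProcAttr.Credential on the child command, so the kernel performs setgroups → setgid → setuid between fork and exec. A minimal sketch (the `userCmd` helper and exact fields are assumptions, not clawkerd's code):

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
)

// userCmd builds the user CMD so the kernel drops privileges between fork
// and exec — no user code ever runs with clawkerd's root credentials.
func userCmd(argv []string, uid, gid uint32) *exec.Cmd {
	cmd := exec.Command(argv[0], argv[1:]...)
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Credential: &syscall.Credential{
			Uid:    uid,
			Gid:    gid,
			Groups: []uint32{}, // explicit empty supplementary-group list
		},
		Setpgid: true, // own process group, so signals can target the whole pgroup
	}
	return cmd
}

func main() {
	cmd := userCmd([]string{"claude"}, 1001, 1001)
	fmt.Println(cmd.SysProcAttr.Credential.Uid) // 1001
}
```

Because the credential switch is declared on the spawn rather than executed as user-mode setuid calls, there is no window in which the child runs code as root.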

Why PID 1 Matters

A PID 1 process in Linux has special responsibilities the kernel does not delegate elsewhere:
  • Signal handling. The kernel applies no default signal dispositions to PID 1, so SIGTERM, SIGINT, and SIGHUP are silently ignored unless explicit handlers are installed. A naive claude process running as PID 1 would ignore docker stop until the 10-second grace period expired and Docker escalated to SIGKILL — leaving no time for clean teardown of Claude Code’s session state. clawkerd installs explicit signal handlers and forwards forwardable signals to the user CMD’s process group.
  • Zombie reaping. Any orphaned process whose parent dies is reparented to PID 1. If PID 1 doesn’t call Wait4(-1), those zombies accumulate forever. clawkerd runs a two-phase reaper: phase 1 reaps only the user CMD (so concurrent exec pipelines from CP don’t get their child stolen); phase 2 drains orphans after the user CMD has exited.
  • Exit code propagation. Docker reads PID 1’s exit code as the container’s exit code, which is what restart: on-failure switches on. clawkerd maps the user CMD’s wait status to the bash convention (WEXITSTATUS for normal exit, 128 + signum for signaled) so Docker’s restart policy interprets it correctly.
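The exit-code mapping in the last bullet can be sketched as a pure function over the child's wait status — plain exit code for a normal exit, 128 + signal number for a signal death (using raw values here instead of a real syscall.WaitStatus):

```go
package main

import "fmt"

// bashExitCode maps a child's wait status to the shell convention Docker
// expects from PID 1: WEXITSTATUS for a normal exit, 128+signum otherwise.
func bashExitCode(exited bool, code int, signum int) int {
	if exited {
		return code
	}
	return 128 + signum // e.g. SIGKILL(9) -> 137, SIGTERM(15) -> 143
}

func main() {
	fmt.Println(bashExitCode(true, 0, 0))   // 0: clean exit
	fmt.Println(bashExitCode(false, 0, 9))  // 137: killed by SIGKILL
	fmt.Println(bashExitCode(false, 0, 15)) // 143: killed by SIGTERM
}
```

This is the same convention bash uses for its own children, which is why restart: on-failure policies can distinguish a clean /exit (0) from a crash or kill (non-zero).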

Boot Sequence

When the container starts, clawkerd executes in this order:
  1. Read bootstrap material. Four files in /run/clawker/bootstrap (per-agent mTLS leaf cert + key, the clawker CA cert, and a single-use Hydra JWT). The CLI streams these into the container as a tar archive between docker create and docker start — they live on the container’s writable layer, not on tmpfs or a bind mount.
  2. Resolve environment. CLAWKER_AGENT is required; CLAWKER_PROJECT may be empty (for orphan-project containers). CLAWKER_USER (defaults to claude) is resolved against /etc/passwd to populate the kernel-side privilege-drop credentials for the eventual spawn.
  3. Start the mTLS listener. clawkerd serves a gRPC ClawkerdService.Session endpoint on :7700, reachable only over the clawker-net network. The listener enforces RequireAndVerifyClientCert plus a CN pin: the only authorized peer is the clawker control plane’s cert (CN = clawker-controlplane). Any other peer is rejected at the TLS layer, before any RPC handler runs.
  4. Wait for CP-driven init. The control plane dials in over mTLS, registers the agent’s identity (binding the container ID to the captured cert thumbprint in CP’s sqlite agent registry), and then dispatches one or more ShellCommand steps — environment setup, MCP wiring, your agent.post_init script if configured. Each step’s exit code, stdout, and stderr flow back over the same Session stream.
  5. Fork the user CMD on AgentReady. When CP issues the terminal AgentReady command, clawkerd forks the user CMD (default: claude) using SysProcAttr.Credential so the kernel performs setgroups → setgid → setuid between fork and exec. The user CMD never runs as root; clawkerd does, but it never executes user-controlled code in its own process.
  6. Supervise. clawkerd holds open the Session for the container’s lifetime — CP can dispatch additional ShellCommands, signal-forward into the child pgroup, or trigger a clean shutdown.
  7. Reap and exit. On docker stop (SIGTERM) or when the user CMD exits, clawkerd runs an ordered teardown: stop the listener, run phase-2 orphan drain, then os.Exit with the bash-convention exit code so Docker’s restart policy reads the right value.
If the user CMD exits cleanly (e.g. you /exit from Claude Code), clawkerd exits with the same code and the container stops. If you used --rm, Docker removes the container. If you didn’t, you can start it again later with clawker start -a -i --agent <agent> — the config and history volumes still hold your settings and shell history.

Privilege Model

| Process | UID | Why |
| --- | --- | --- |
| clawkerd (PID 1) | 0 (root) | mTLS listener, log writes, bootstrap reads, Wait4(-1) orphan reap |
| User CMD (claude by default) | 1001 (claude) | Forked with SysProcAttr.Credential; kernel performs the privilege drop |
| Anything the user CMD spawns | 1001 (claude) | Inherits from the user CMD; never reaches clawkerd’s privilege |
The kernel performs the credential switch atomically between fork and exec, so there is no user-mode code path that holds root and then drops it. clawkerd’s ShellCommand surface (CP-dispatched, root-capable) is the entire trust boundary. The CN-pinned mTLS listener (CP is the sole authorized caller) is what protects it. There is no per-command argv allow-list — anything that can mint a clawker-controlplane-CN cert chained to the clawker CA gets root-equivalent code execution inside the container. Bootstrap material lives only on the container’s writable layer, which dies with --rm or docker rm.

What clawkerd Does Not Do

  • No proactive outbound dial outside the one-time CP-triggered Register handshake. clawkerd is a server — CP dials in, not the other way around.
  • No heartbeat or keepalive. CP knows liveness via Docker events plus the Session stream’s TCP keepalives.
  • No init-script execution as part of clawkerd’s own logic. Setup steps (env wiring, MCP install, agent.post_init) are dispatched by CP over the Session as ShellCommands — clawkerd just runs them and reports the result.
  • No firewall configuration. eBPF cgroup programs are attached to the container’s cgroup from outside by the control plane. clawkerd has no role in firewall enforcement.
  • No reconnect logic. clawkerd serves; if CP loses the connection, CP reconnects with backoff. clawkerd just waits for the next dial.

Boot Progress on the Terminal

When you attach an interactive terminal (-it), clawkerd renders per-step boot progress in plain text: an “Active” status line for the current step, then a check mark (✓) or a cross (✗) when the step finishes. Steps typically run in milliseconds, so there’s no spinner animation — just a clean log of what CP dispatched and how it resolved. On a non-TTY (e.g. --detach), the same events go to clawkerd’s structured log at /var/log/clawker/clawkerd.log (rotated, 50MB × 3 backups, 7-day retention). After the final AgentReady step completes, clawkerd transfers TTY foreground to the user CMD and Claude Code takes over the terminal.

Firewall Decoupling

The firewall is intentionally not configured inside the container. eBPF cgroup programs are attached from outside by the control plane before the user CMD starts, and they remain attached for the container’s lifetime. This means:
  • The container image has no baked-in firewall addresses or capabilities — it is portable across different Docker network configurations.
  • Agent containers run with no Linux capabilities (cap_add: []). No NET_ADMIN, no NET_RAW. The firewall enforcement happens kernel-side, outside the container’s privilege scope.
  • A compromised user CMD cannot modify, weaken, or bypass the firewall from inside — the eBPF programs, the BPF maps, and the route table are all in kernel space owned by the host, not the container.
If you need extra capabilities for your workflow (e.g. SYS_PTRACE for debugging), add them to security.cap_add in your .clawker.yaml. The firewall does not require any.

Shared Directory

When agent.enable_shared_dir is set to true, Clawker mounts a shared directory from ~/.local/share/clawker/share/ on the host into the container at ~/.clawker-share (read-only). This lets you share files across all agents without including them in the workspace.

Security Controls

Every container runs with a deny-by-default network firewall. See Security for the full details on:
  • The bare-bones hardcoded allowlist and why you must configure your own domains
  • Docker socket access
  • Linux capabilities
  • Agent awareness prompt
  • Credential isolation