## CLI Flags

All flags can be passed to `deno run -A jsr:@cdaringe/ralphmania`.
| Flag | Short | Type | Description |
|---|---|---|---|
| `--agent` | `-a` | string | AI agent to use (default: `claude`) |
| `--iterations` | `-i` | number | Maximum number of agent iterations to run (required) |
| `--plugin` | `-p` | string | Path to a plugin file exporting a `RalphPlugin` default |
| `--reset-worktrees` | — | boolean | Remove and re-create all worker git worktrees on start |
| `--gui` | — | boolean | Open the live GUI in the browser while the orchestrator runs |
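For example, a run capped at ten iterations with the GUI enabled might look like the following (the flag values are illustrative, not recommendations):

```sh
deno run -A jsr:@cdaringe/ralphmania \
  --agent claude \
  --iterations 10 \
  --gui
```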
## Plugin Hooks

Plugins export a named `plugin` object conforming to the `Plugin` type. All hooks are optional and may be async.
| Hook | Called when | Can modify |
|---|---|---|
| `onConfigResolved` | Once, before the loop starts | `coder`, `verifier`, `escalated`, `iterations`, `level`, `parallel`, `gui`, `guiPort`, `resetWorktrees`, `specFile`, `progressFile` |
| `onModelSelected` | Each iteration, after model resolution | `ModelSelection` (`model`, `provider`) |
| `onPromptBuilt` | Each iteration, after prompt construction | prompt string |
| `onSessionConfigBuilt` | Each iteration, after session config assembly | `AgentSessionConfig` (`provider`, `model`, `workingDir`) |
| `onIterationEnd` | After the agent subprocess exits | observe only |
| `onValidationComplete` | After the validation script runs | `ValidationResult` (pass/fail/messages) |
| `onRectify` | After validation fails post-merge | `RectifyAction` (`agent`, `skip`, `abort`) |
| `onLoopEnd` | Once, after the loop exits, regardless of outcome | observe only |
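Taken together, the hooks above suggest a shape like the following. This is a hypothetical sketch inferred from the table, not the package's actual type declaration; consult the exported `Plugin` type for the real context objects and signatures:

```ts
// Hypothetical sketch of the hook surface; field names in the ctx
// objects are assumptions, not the package's actual API.
interface PluginSketch {
  onConfigResolved?(ctx: { config: Record<string, unknown> }): unknown;
  onModelSelected?(ctx: { model: string; provider: string }): unknown;
  onPromptBuilt?(ctx: { prompt: string }): string | void | Promise<string | void>;
  onSessionConfigBuilt?(ctx: { config: Record<string, unknown> }): unknown;
  onIterationEnd?(ctx: { iterationNum: number }): void | Promise<void>;
  onValidationComplete?(ctx: { pass: boolean; messages: string[] }): unknown;
  onRectify?(ctx: { failure: unknown }): "agent" | "skip" | "abort" | void;
  onLoopEnd?(ctx: { iterationNum: number }): void | Promise<void>;
}

// Every hook is optional, so the empty object is a valid plugin:
const noop: PluginSketch = {};
```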
### Example plugin

```ts
import type { Plugin } from "jsr:@cdaringe/ralphmania";

export const plugin: Plugin = {
  onPromptBuilt({ prompt }) {
    // Append extra instructions to every agent prompt
    return prompt + "\nAlways use TypeScript strict mode.";
  },
  onLoopEnd({ iterationNum, log }) {
    log({ tags: ["info"], message: `Loop ended after ${iterationNum} iterations` });
  },
};
```

### Local models
For Ollama, set `customModel` via `onSessionConfigBuilt`. Ralphmania queries the `/api/show` endpoint to discover the model's context window and capabilities:
```ts
export const plugin: Plugin = {
  onSessionConfigBuilt: ({ config }) => ({
    ...config,
    customModel: { kind: "ollama", model: "gemma4:e2b" },
  }),
};
```

For other OpenAI-compatible servers (llama.cpp, vLLM, LM Studio), provide `contextWindow`, since there is no discovery API:
```ts
export const plugin: Plugin = {
  onSessionConfigBuilt: ({ config }) => ({
    ...config,
    customModel: {
      kind: "openai-compatible",
      baseUrl: "http://localhost:8080/v1",
      model: "my-model",
      contextWindow: 8192,
    },
  }),
};
```

## Progress Statuses
Each scenario in `progress.md` carries one of five statuses:
| Status | Meaning |
|---|---|
| WIP | Work in progress — agent has not yet submitted |
| WORK_COMPLETE | Agent submitted a result, awaiting validation by the orchestrator |
| VERIFIED | Validation passed — scenario is done |
| NEEDS_REWORK | Validation failed — scenario will be retried in the next iteration |
| OBSOLETE | The scenario was removed from `specification.md` |
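If you consume `progress.md` programmatically (for example, from a plugin), the five statuses can be modeled as a string-literal union. This helper is illustrative only; ralphmania does not export it:

```ts
// Hypothetical helper types for the five statuses documented above.
type ProgressStatus =
  | "WIP"
  | "WORK_COMPLETE"
  | "VERIFIED"
  | "NEEDS_REWORK"
  | "OBSOLETE";

const STATUSES: readonly ProgressStatus[] = [
  "WIP", "WORK_COMPLETE", "VERIFIED", "NEEDS_REWORK", "OBSOLETE",
];

// Type guard for validating raw strings parsed out of progress.md.
function isProgressStatus(s: string): s is ProgressStatus {
  return (STATUSES as readonly string[]).includes(s);
}
```

A narrowing guard like this lets the compiler catch typos when you branch on a scenario's status.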
## Environment Variables

| Variable | Purpose |
|---|---|
| `ANTHROPIC_API_KEY` | Required when using the `claude` agent |
| `RALPH_LOG` | Set to `debug` for verbose orchestrator logging |
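Combining both variables with the CLI, a debug-logged run might look like this (the API key value is a placeholder):

```sh
export ANTHROPIC_API_KEY="sk-ant-..."   # placeholder, use your own key
RALPH_LOG=debug deno run -A jsr:@cdaringe/ralphmania --iterations 5
```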