 ██████╗ ██╗     ██╗████████╗ ██████╗██╗  ██╗
██╔════╝ ██║     ██║╚══██╔══╝██╔════╝██║  ██║
██║  ███╗██║     ██║   ██║   ██║     ███████║
██║   ██║██║     ██║   ██║   ██║     ██╔══██║
╚██████╔╝███████╗██║   ██║   ╚██████╗██║  ██║
 ╚═════╝ ╚══════╝╚═╝   ╚═╝    ╚═════╝╚═╝  ╚═╝
// the pipeline memory system //
Brain fields give your pipelines persistent, accumulated context across steps. A step can write insights to the brain store and later steps can read from it — turning a linear pipeline into a knowledge-building loop.
use_brain: When set on a pipeline or step, this injects the accumulated brain context into the step's prompt preamble as a <brain> XML block. The AI sees insights from earlier steps (or from past runs, if brain notes were saved). Set it at the pipeline level to enable it for all steps, or override it per step.
write_brain: When set on a step, the step's output is appended to the brain context store for the current run. Subsequent steps with use_brain: true see this output in their preamble. Multiple write_brain steps accumulate; context is concatenated in order.
use_brain: false (per-step override): Even when use_brain: true is set at the pipeline level, you can suppress injection for a specific step with use_brain: false. This is useful for data-fetch steps, where brain context wastes tokens and adds noise.
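The three placements can be sketched in one fragment. Field names are from this page; the pipeline name, step ids, and prompts are illustrative:

```yaml
name: placement-demo
version: "1"
use_brain: true            # pipeline level: every step gets the <brain> preamble
steps:
  - id: gather
    write_brain: true      # this step's output is appended to the brain store
    prompt: |
      Summarize the repo layout. Start with "INSIGHT:"
  - id: fetch
    use_brain: false       # per-step override: suppress injection for this step
    prompt: |
      Fetch the raw data only.
```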
// annotated YAML examples //
>> Example 1: Brain Feedback Loop — two steps, first writes, second reads
name: brain-feedback-loop
version: "1"
steps:
  # Step 1: local model writes an insight to brain
  - id: analyze
    plugin: ollama
    model: qwen2.5-coder:latest
    write_brain: true        # <- output saved to brain context
    prompt: |
      Analyze the code in ./src and write one key insight
      about its architecture. Start with "INSIGHT:"
  # Step 2: cloud model uses that insight
  - id: summarize
    executor: claude
    model: claude-haiku-4-5-20251001
    use_brain: true          # <- receives step 1's INSIGHT in preamble
    prompt: |
      Based on the architecture insight above, suggest
      one high-impact improvement.

>> Example 2: Selective brain injection — pipeline-level on, one step suppressed
name: selective-brain
version: "1"
use_brain: true # <- all steps receive brain context by default
steps:
  - id: fetch-data
    plugin: gh
    use_brain: false         # <- suppressed: no brain context for data fetch
    vars:
      args: "issue list --json number,title --limit 10"
  - id: analyze
    executor: claude
    model: claude-haiku-4-5-20251001
    # use_brain: true inherited from pipeline level
    prompt: |
      Given the brain context above and these issues:
      {{steps.fetch-data.output}}
      What patterns do you see?

// making your AI workspace smarter //
Start writes early, reads late — Put write_brain: true on early exploratory steps (local/fast models), then use use_brain: true on later synthesis steps (cloud/expensive models). You get deep context without burning tokens on every step.
Suppress on pure data-fetch steps — If a step just fetches JSON from an API or runs gh pr list, add use_brain: false. Brain context won't help and wastes the context window.
Name your insights explicitly — Prompt your write_brain steps to start their output with “INSIGHT:” or “FINDING:” so the brain context is scannable. The AI reading it later will extract the signal faster.
Chain local → cloud for cost control — Use a cheap local Ollama model (qwen2.5-coder, llama3.2) to write brain notes, then a cloud model to synthesize. You get the best of both worlds: local privacy + cloud reasoning.
Brain context accumulates within a run — Multiple write_brain steps in the same pipeline run all accumulate into the same context block. Step 3 sees what steps 1 and 2 wrote. Design your pipeline like a conversation that builds knowledge progressively.
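The accumulation behavior in the last tip can be sketched as a three-step run with two writers and one reader. Pipeline name, step ids, and prompts are illustrative; field names are from this page:

```yaml
name: progressive-notes
version: "1"
steps:
  - id: scan-deps            # writer 1: its note goes into the brain store
    plugin: ollama
    model: qwen2.5-coder:latest
    write_brain: true
    prompt: |
      List the project's main dependencies. Start with "FINDING:"
  - id: scan-tests           # writer 2: appended after writer 1's note
    plugin: ollama
    model: qwen2.5-coder:latest
    write_brain: true
    prompt: |
      Describe the test coverage. Start with "FINDING:"
  - id: synthesize           # reader: sees both FINDINGs, in write order
    executor: claude
    model: claude-haiku-4-5-20251001
    use_brain: true
    prompt: |
      Combine the findings above into one prioritized plan.
```

Assuming the semantics described above, the synthesize step's brain context contains scan-deps' note followed by scan-tests' note, concatenated in step order.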
// compact field reference //
name                        string   Pipeline name, used in glitch pipeline run <name>
version                     string   Schema version, currently "1"
use_brain                   bool     Inject brain context into all steps (default: false)
write_brain                 bool     Append pipeline output to brain store (default: false)
steps[].id                  string   Unique step identifier, used in {{steps.<id>.output}}
steps[].executor / plugin   string   Agent provider (claude, ollama, gh, jq, …)
steps[].model               string   Model ID (e.g. claude-haiku-4-5-20251001, qwen2.5:latest)
steps[].prompt              string   Prompt text; supports {{steps.<id>.output}} interpolation
steps[].use_brain           bool     Per-step override of pipeline-level use_brain
steps[].write_brain         bool     Per-step: save this step's output to brain context