Tools
Aura includes built-in tools that the LLM can invoke during conversations. Most are always present; Skill is conditional (registered only when skills are configured); Done and Ask are added dynamically per session. Tools execute locally and return results to the LLM for reasoning.
Built-in Tools
| Tool | Description | Sandboxable | Parallel |
|---|---|---|---|
| Bash | Shell execution via mvdan/sh interpreter (15s default timeout) | Yes | Yes |
| Ask | Prompt the user with a question and wait for input | No | No |
| Read | File reading with optional line ranges | Yes | Yes |
| Write | Write complete content to a file (new files or full rewrites) | Yes | Yes |
| Glob | File pattern matching with ** recursive support (doublestar) | Yes | Yes |
| Ls | Directory listing with depth control | Yes | Yes |
| Mkdir | Create directories | Yes | Yes |
| Patch | Context-aware diff patching (Add/Update/Delete operations) | Yes | Yes |
| Rg | Regex search via ripgrep with line numbers | Yes | Yes |
| WebFetch | Fetch a URL and convert to markdown, text, or raw HTML | No | Yes |
| WebSearch | Search the web via DuckDuckGo (titles, URLs, snippets) | No | Yes |
| TodoCreate | Create or update a todo list with items | No | No |
| TodoList | Show current todo list with status counts | No | Yes |
| TodoProgress | Update todo item status (auto-promotes next pending) | No | No |
| Vision | Image/PDF analysis via vision-capable LLM | No | Yes |
| Transcribe | Speech-to-text transcription via whisper-compatible server | No | Yes |
| Speak | Text-to-speech synthesis via OpenAI-compatible TTS server | No | Yes |
| Batch | Execute multiple independent tool calls concurrently (1-25 per batch) | No | Yes |
| Task | Delegate work to a subagent with isolated context (opt-in; requires agents with subagent: true) | No | Yes |
| LoadTools | Load deferred tool schemas on demand so they become available for use (auto-injected when deferred tools exist) | No | No |
| MemoryRead | Read, list, or search persistent memory entries | No | Yes |
| MemoryWrite | Persist notes, decisions, and context to disk | No | Yes |
| Diagnostics | LSP diagnostics for a file or workspace (opt-in) | No | Yes |
| LspRestart | Restart all running LSP servers (opt-in) | No | No |
| Query | Embedding-based codebase search | No | Yes |
| Done | Explicit task completion signal | No | No |
| Skill | Invoke LLM-callable skills by name (registered only when skills exist) | No | Yes |
Skills
Skills are LLM-invocable capabilities defined as Markdown files in .aura/skills/. Unlike custom slash commands (which the user types), skills are invoked by the LLM via the Skill tool during a conversation.
Progressive disclosure: Only skill names and one-line descriptions are visible in the tool schema. The full skill body is returned only when the LLM invokes a skill — keeping token overhead flat regardless of how many skills exist.
Format
---
name: commit
description: Review staged changes and create a git commit with a meaningful message
---
Review all staged and unstaged changes using git status and git diff.
Draft a concise commit message that summarizes the changes.
Create the commit.
Behavior
- The Skill tool is only registered when at least one skill file exists
- On /reload, if all skill files are removed, the tool is cleanly deregistered
- Error responses include the list of available skill names for LLM self-correction
- Skills are loaded from .aura/skills/**/*.md
Memory
The MemoryRead and MemoryWrite tools provide persistent key-value storage backed by markdown files. Memory survives across sessions and compactions.
Two scopes:
| Scope | Path | Purpose |
|---|---|---|
| local (default) | .aura/memory/ | Project-specific notes — architecture decisions, codebase patterns |
| global | ~/.aura/memory/ | Cross-project notes — user preferences, workflow patterns |
MemoryWrite — create or overwrite a memory entry:
| Param | Required | Description |
|---|---|---|
| key | yes | Filename/topic (e.g. architecture, user-preferences) |
| content | yes | Markdown content to persist |
| scope | no | local (default) or global |
MemoryRead — three modes:
| Mode | Params | Description |
|---|---|---|
| Read | key | Read a specific memory entry |
| List | (none) | List all entries with first-line summaries |
| Search | query | Case-insensitive keyword search across all entries |
.aura/memory/
├── architecture.md
├── conventions.md
└── debug-notes.md
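The key-to-file mapping implied above can be sketched in Go. This is a minimal illustration, not the real implementation; the memoryPath helper and its signature are hypothetical, and only the directory layout comes from the docs:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// memoryPath maps a memory key and scope to its backing markdown file.
// Hypothetical helper: names assumed, layout taken from the docs above.
func memoryPath(key, scope, home, project string) string {
	root := filepath.Join(project, ".aura", "memory") // local (default)
	if scope == "global" {
		root = filepath.Join(home, ".aura", "memory")
	}
	return filepath.Join(root, key+".md")
}

func main() {
	fmt.Println(memoryPath("architecture", "local", "/home/me", "/repo"))
	// → /repo/.aura/memory/architecture.md
	fmt.Println(memoryPath("user-preferences", "global", "/home/me", "/repo"))
	// → /home/me/.aura/memory/user-preferences.md
}
```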
Patch Format
The Patch tool uses a structured patch format wrapped in *** Begin Patch / *** End Patch markers. Three operations are supported:
Add a file (lines prefixed with +):
*** Begin Patch
*** Add File: path/to/new/file.go
+package main
+
+func main() {}
*** End Patch
Update a file (context marker @@, - for removals, + for additions, space for context):
*** Begin Patch
*** Update File: path/to/file.go
@@ func main
-old line
+new line
*** End Patch
Delete a file:
*** Begin Patch
*** Delete File: path/to/file.go
*** End Patch
Multiple operations can be combined in a single patch block:
*** Begin Patch
*** Add File: new.go
+package new
*** Update File: existing.go
@@ func foo
-old
+new
*** Delete File: obsolete.go
*** End Patch
Note: Patch consolidates what used to be 4 separate tools (write, edit, insert, delete). The Update operation uses fuzzy context matching — it finds the closest match for the surrounding context lines. After a successful patch, the modified files are automatically marked as “read” in the filetime tracker, so subsequent patches to the same files do not require a re-read.
Global Tool Filters
Set persistent tool filters in features/tools.yaml:
tools:
enabled: [] # glob patterns for tools to include (empty = all)
disabled: ["mcp__*"] # glob patterns for tools to exclude (empty = none)
CLI flags --include-tools and --exclude-tools override the config when present:
# Only allow read-only tools
aura --include-tools "Read,Glob,Rg,Ls"
# Disable shell and file modification
aura --exclude-tools "Bash,Patch,Mkdir"
Patterns support wildcards (*, Todo*, mcp__*), same syntax as agent/mode tool filters. Environment variables AURA_INCLUDE_TOOLS and AURA_EXCLUDE_TOOLS also work.
Precedence chain: ToolDefs → global filter (config or CLI) → agent → mode → task → opt-in exclusion.
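The enabled/disabled glob semantics described above (empty enabled = all, empty disabled = none) can be sketched with the standard library's path.Match; the allowed helper is hypothetical and ignores layering:

```go
package main

import (
	"fmt"
	"path"
)

// allowed applies one layer of an enabled/disabled glob filter to a tool
// name: an empty enabled list admits everything, an empty disabled list
// excludes nothing, and disabled wins. Sketch only, not Aura's code.
func allowed(tool string, enabled, disabled []string) bool {
	matches := func(pats []string) bool {
		for _, p := range pats {
			if ok, _ := path.Match(p, tool); ok {
				return true
			}
		}
		return false
	}
	if len(enabled) > 0 && !matches(enabled) {
		return false
	}
	return !matches(disabled)
}

func main() {
	fmt.Println(allowed("mcp__github__search", nil, []string{"mcp__*"})) // false
	fmt.Println(allowed("Read", []string{"Read", "Glob"}, nil))          // true
}
```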
Tool Filtering Pipeline
Every tool call is resolved against a 7-stage filtering pipeline. Understanding it end-to-end explains why a tool is — or isn’t — available in a given context.
Runtime.AllTools (startup: built-in + plugins + MCPs + skills)
│
├─ Global filter ─────────────── applied ONCE at session startup (pre-processing)
│ features/tools.yaml enabled/disabled, or --include-tools/--exclude-tools (from Runtime)
│ Modifies Runtime.AllTools in-place — all subsequent stages see the filtered set.
│
│ ── per-request pipeline (Config.Tools(rt)) ─────────────────────────────
│
├─ Agent filter ──────────────── agent frontmatter tools.enabled / tools.disabled
│
├─ Mode filter ───────────────── mode frontmatter tools.enabled / tools.disabled
│
├─ Task / Extra filter ───────── task definition tools.enabled / tools.disabled
│
├─ Opt-in exclusion ──────────── remove opt_in tools unless explicitly named
│
└─ Deferred split ────────────── eager tools → request.Tools
deferred tools → prompt index (<available-deferred-tools>)
Global filter — features/tools.yaml enabled/disabled fields. CLI flags --include-tools/--exclude-tools (stored in Runtime) override when present. Applied once at session startup before agent/mode resolution. Modifies Runtime.AllTools in-place — all subsequent calls to Config.Tools(rt) already have global filtering baked in. Code: assemble.Tools() in internal/tools/assemble builds the base set; global filtering runs at each call site afterward.
Agent filter — tools.enabled/tools.disabled in agent frontmatter. Scopes the toolset to what this agent should use. A coding agent might enable Bash,Patch,Read,Rg; an ask-only agent enables nothing.
Mode filter — Same fields in mode frontmatter. Further restricts within the agent’s toolset. A review mode might disable Patch and Write while keeping Read and Rg.
Task filter — Same fields in task definitions. Narrowest scope — a task that only needs web access might enable WebFetch,WebSearch and nothing else.
Opt-in exclusion — Tools in the opt_in list are dropped unless explicitly named (not by "*" wildcard) at any layer’s enabled list. Uses Config.CollectEnabled() which aggregates enabled patterns from ALL prior layers (global, agent, mode, task). Code: CollectEnabled() in internal/config/config.go.
Deferred split — Tools matching deferred patterns go to the prompt index instead of request.Tools. Already-loaded tools stay eager. MCP server-level deferred: true flag also applies. The LoadTools meta-tool loads deferred tools on demand. Default is eager. Code: deferred split logic in Config.Tools() in internal/config/config.go.
opt_in vs disabled
- disabled: Tool is removed from the set. No layer can re-enable it.
- opt_in: Tool is registered but hidden by default. Any layer can surface it by naming it explicitly in an enabled list.
Why Is My Tool Not Available?
Run /tools debug in a session to see all tools with their include/exclude status and the reason for each:
/tools debug
This shows which filtering stage excluded each tool (agent, mode, task, condition, or opt-in).
Manual checklist:
- Is it disabled at the global level? → Check the features/tools.yaml disabled list
- Is it not in the agent’s enabled list? → Check agent frontmatter
- Is it filtered by the current mode? → Check mode frontmatter
- Is it opt_in but never explicitly enabled? → Add it to an enabled list by name (not "*")
- Is it deferred and not yet loaded? → Call LoadTools or check deferred patterns
- Is it a plugin tool with disabled: true? → Check plugin.yaml
- Is it from an MCP server that’s excluded? → Check --exclude-mcps and MCP config
Task
The Task tool delegates work to a subagent running in an isolated context. It is opt-in and only registered when at least one agent has subagent: true in its frontmatter.
| Param | Required | Description |
|---|---|---|
| description | yes | Short summary of the task (3–5 words) |
| prompt | yes | Full task description passed to the subagent |
| agent | no | Subagent type to use; defaults to default_agent, then the parent’s agent name |
The tool’s description is generated dynamically and lists the available subagent types so the model knows what specializations exist. Multiple Task calls in a single response execute in parallel.
Feature resolution: All subagents resolve their own features independently (global → child agent → child mode). Feature-derived runner fields (max_steps, result guard token limits) come from the child’s resolved features, not the parent’s. Every subagent gets a fresh, isolated instance — parallel Task calls never share provider or tool state. Subagent token usage and tool call counts are propagated back to the parent’s session stats.
Partial result recovery: When a subagent exhausts its step budget (max_steps), the last assistant response is returned as a successful result with a [budget exhausted after N steps] note appended. When a provider error occurs mid-run, any prior assistant content is recovered and returned with an [interrupted: <error>] note. If no prior content exists, the error propagates normally.
Batch
The Batch tool executes multiple independent tool calls concurrently. It is always registered (not opt-in) and available to any agent with tools enabled.
| Param | Required | Description |
|---|---|---|
| calls | yes | Array of 1–25 sub-calls, each with name (tool name) and arguments (map) |
Sub-calls use the same two-pass dispatch as the main pipeline: parallel-safe tools run concurrently, then non-parallel tools run sequentially. The parallel config override, the tool’s Parallel() interface, and the global features.tool_execution.parallel toggle are all respected. Partial failures do not stop other calls — each sub-call reports its own result or error. Results are aggregated into a single markdown response.
Disallowed sub-tools: Batch (no recursion), Ask (blocks for input), Done (signals loop exit), Task (subagent with own loop), LoadTools (triggers state rebuild).
Security: Tool policy (deny/confirm), guardrails, plugin hooks (BeforeToolExecution), sandbox path checks, and user pre/post hooks all run per sub-call. Confirm-policy tools cannot be batched in non-auto mode — they return an error asking the model to call them directly.
Deliberately skipped per sub-call: Conversation tracking, LSP diagnostics, stats, streaming, result guard, AfterToolExecution injectors. These run once for the Batch call itself.
Opt-In Tools
Tools listed in opt_in are registered but hidden unless explicitly enabled by name or narrow glob at any layer (CLI, features, agent, mode, or task). The bare "*" wildcard does not satisfy opt-in.
# features/tools.yaml
tools:
opt_in:
- Ask
- Done
- Gotify
- Diagnostics
- LspRestart
- Speak
- Task
- Transcribe
- WebFetch
- WebSearch
- Write
An opt-in tool becomes available when any enabled list mentions it:
# agents/my-agent.md — Gotify NOT available (enabled: ["*"] doesn't count)
---
tools:
enabled: ["*"]
---
# tasks/notify.yaml — Gotify IS available
notify:
tools:
enabled:
- Gotify
Plugin tools can also be marked opt-in via opt_in: true in plugin.yaml.
Deferred Tools
Tools matching deferred glob patterns in features/tools.yaml, or coming from an MCP server configured with deferred: true, are excluded from request.Tools and therefore not visible to the model by default.
Instead, their names are listed in a lightweight <available-deferred-tools> block injected into the system prompt. This keeps the active tool list small while still advertising what is available.
The LoadTools meta-tool is automatically added to the session when any deferred tools exist. The model calls it to pull specific schemas on demand:
| Param | Required | Description |
|---|---|---|
| tools | no | Tool names or glob patterns to load (e.g. mcp__context7__*) |
| server | no | MCP server name — loads all deferred tools from that server |
Once loaded, tool schemas persist in session metadata and remain available for the rest of the session.
Filtering order: opt-in filtering runs before the deferred split. A deferred tool that does not pass opt-in is dropped entirely — it will not appear in <available-deferred-tools> and cannot be loaded.
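The eager/deferred split can be sketched as a partition over glob patterns, with already-loaded names staying eager. The splitDeferred helper below is hypothetical, a simplified stand-in for the logic in Config.Tools():

```go
package main

import (
	"fmt"
	"path"
)

// splitDeferred partitions tool names into eager (sent in request.Tools)
// and deferred (listed in <available-deferred-tools>). Tools already
// loaded via LoadTools stay eager. Sketch only; names assumed.
func splitDeferred(tools, patterns []string, loaded map[string]bool) (eager, deferred []string) {
	for _, t := range tools {
		isDeferred := false
		for _, p := range patterns {
			if ok, _ := path.Match(p, t); ok && !loaded[t] {
				isDeferred = true
				break
			}
		}
		if isDeferred {
			deferred = append(deferred, t)
		} else {
			eager = append(eager, t)
		}
	}
	return eager, deferred
}

func main() {
	eager, deferred := splitDeferred(
		[]string{"Read", "mcp__context7__docs", "mcp__context7__search"},
		[]string{"mcp__context7__*"},
		map[string]bool{"mcp__context7__docs": true}, // already loaded
	)
	fmt.Println(eager)    // [Read mcp__context7__docs]
	fmt.Println(deferred) // [mcp__context7__search]
}
```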
Tool Guards
Tool result size is limited to prevent context overflow:
- Percentage mode (default): Rejects results if projected context usage exceeds result.max_percentage (default: 95%)
- Token mode: Rejects results exceeding result.max_tokens (default: 20000)
User input messages are also guarded. Messages (typed, @Bash, @File) that would push context above user_input_max_percentage (default: 80%) are rejected before entering the conversation. This prevents unrecoverable context exhaustion where compaction cannot help.
Additional per-tool guards:
- read_small_file_tokens (default: 2000) — the Read tool ignores line range parameters and returns the full file when the estimated token count is below this threshold.
- webfetch_max_body_size (default: 5 MiB) — maximum response body size in bytes for the WebFetch tool. Responses larger than this are truncated.
Configure in .aura/config/features/tools.yaml. See Features Config.
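The percentage-mode check amounts to simple projection arithmetic. A sketch under assumed names (exceedsGuard is not the real API):

```go
package main

import "fmt"

// exceedsGuard reports whether adding resultTokens would push context
// usage above maxPct of the model's window. Illustrative only: the
// function name and signature are assumptions, the math is the point.
func exceedsGuard(usedTokens, resultTokens, contextWindow int, maxPct float64) bool {
	projected := float64(usedTokens+resultTokens) / float64(contextWindow) * 100
	return projected > maxPct
}

func main() {
	// 100k used + 30k result in a 128k window projects to ~101.6%, so a
	// 95% guard rejects it; a 10k result projects to ~85.9% and passes.
	fmt.Println(exceedsGuard(100_000, 30_000, 128_000, 95)) // true
	fmt.Println(exceedsGuard(100_000, 10_000, 128_000, 95)) // false
}
```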
Read-Before Policy
The Read tool tracks which files have been read. Write and Patch tools enforce that existing files must be read before overwriting. This prevents blind overwrites.
Configure in .aura/config/features/tools.yaml:
tools:
read_before:
write: true # require read before overwriting (default: true)
delete: false # require read before deleting (default: false)
Toggle at runtime with /readbefore (alias /rb):
/readbefore → show current state
/readbefore write off → disable write enforcement
/readbefore delete on → enable delete enforcement
/readbefore all off → disable both
Or via /set:
/set readbefore.write=false
/set readbefore.delete=true
Tool Definitions
Tool descriptions, usage instructions, and examples are compiled into the Go source. Optional YAML overrides in .aura/config/tools/**/*.yaml let you tune LLM prompts without recompiling. Set disabled: true to remove tools entirely. Overrides work for all tool types — built-in, plugin, Task, Batch, Ask, and Done.
Custom Tools
Define custom tools as Go plugins — see Plugins.
Tool Base
New tools embed tool.Base, which provides defaults for Description(), Usage(), Examples(), and Available() (true). Optional behaviors are expressed as opt-in interfaces checked via type assertion — implement only the ones your tool needs:
- tool.PreHook — validation before execution (e.g. read-before-write)
- tool.PostHook — state updates after execution (e.g. filetime recording)
- tool.PathDeclarer — declares filesystem paths for sandbox pre-filtering
- tool.SandboxOverride — overrides the default sandboxable=true
- tool.ParallelOverride — overrides the default parallel=true
- tool.LSPAware — tool output benefits from LSP diagnostics
- tool.Overrider — plugin tools that replace built-in tools with the same name
- tool.Closer — cleanup on session end
- tool.Previewer — generates a diff preview for confirmation dialogs (used by Patch and Write)
Registration Tiers
Tools are registered at three points in the startup and runtime lifecycle:
| Tier | When | Tools |
|---|---|---|
| All | Startup — tools.All() | All built-in tools + plugin tools |
| rebuildState | On agent/mode/model switch | Done, Ask, LoadTools — injected based on runtime state |
| Dynamic | Conditional | Skill — only when skill files exist; Task — only when subagents are configured; Batch — always registered |
Done and Ask are removed and re-added on every rebuildState call so they respect the current agent/mode filter. LoadTools is injected when deferred tools exist. Batch is always registered at startup (not gated by agent config like Task). These all flow through the same Config.Tools() filtering pipeline as built-in tools.
Execution Pipeline
Tool execution uses a three-phase pipeline. All tool calls from a single LLM response are first registered upfront as Pending; the three phases then handle pre-flight, execution, and post-processing.
Phase A — Sequential pre-flight. Each tool call runs through all gates in order. Denied tools are committed immediately; surviving tools collect into a prepared batch.
| Step | Description |
|---|---|
| 0b | BeforeToolExecution plugin hooks — modify args or block execution |
| 0c | Arg re-validation — if plugin modified args, re-validate against tool JSON schema |
| 0d | Message injection — plugin messages injected only after block + validation pass |
| 1 | Pre() — tool’s own pre-execution hook (e.g. read-before-write guard) |
| 2 | Tool policy — auto / confirm / deny check |
| 2b | Guardrail — secondary LLM validation (if configured) |
| 3 | Path pre-filter — fast-fail if declared paths fall outside sandbox bounds |
| 3b | User pre-hooks — shell hooks from hooks/*.yaml (can block) |
Phase B — Parallel execution. Tools that declare Parallel() == true (the default) are dispatched concurrently via errgroup. Non-parallel tools (Ask, Done, TodoCreate, TodoProgress, LoadTools, LspRestart) run sequentially after all parallel tools complete. Disable parallel execution globally or per-agent with features.tools.parallel: false. Per-tool override via parallel: in Tool Definitions — config wins over code-level Parallel().
| Step | Description |
|---|---|
| Execute | Execute() — sandboxed re-exec or direct call. Sandboxed path pipes sdk.Context as JSON via stdin; child reads it and injects into Go context before execution |
Phase C — Sequential post-processing. All results are processed in original order. Thread-unsafe operations (builder, stats, injectors) are confined here.
| Step | Description |
|---|---|
| 4 | Post() — tool’s own post-execution hook |
| 4b | User post-hooks — shell hooks appending feedback to output |
| 4d | LSP diagnostics — compiler diagnostics appended to output (after hooks) |
| 5 | Result guard — reject oversized results before they enter history |
| 6 | AfterToolExecution plugin hooks — can modify output or inject messages |
aura tools calls Execute() directly and does not run steps 0b, 3b, 4b, or 6 (those require the conversation pipeline).
Streaming Output
Bash output is streamed incrementally to the UI as the command runs. Each complete line from stdout or stderr is emitted as a ToolOutputDelta event, throttled at 200ms to prevent flooding. This works in both execution paths:
- Direct — a StreamingWriter wraps the LimitedBuffer, splitting on newlines and invoking the stream callback.
- Sandboxed — the child process writes \x00STREAM: prefixed lines to stderr; the parent goroutine reads them via StderrPipe and dispatches ToolOutputDelta events.
All four UI backends handle the event: TUI shows the latest line below the spinner, Simple updates the spinner suffix, Headless prints to stderr, and Web broadcasts an SSE tool.output event.
The final tool result (returned to the LLM) is still the complete buffered output, unchanged. Streaming is purely a UI concern.
Output Truncation
Bash output goes through two truncation stages:
- Byte cap — stdout and stderr are each capped at max_output_bytes (default: 1MB) during capture. This prevents OOM from unbounded output (binary dumps, base64, minified JSON). When the cap is hit, excess bytes are discarded and a marker is appended. The full output is not available (it was never captured).
- Line truncation — output exceeding max_lines (default: 200) is middle-truncated: the first head_lines and last tail_lines are kept, with a separator showing how many lines were omitted and a path to the full output file. The LLM can use Read or Rg on the saved file to access specific sections.
Configure in features/tools.yaml:
bash:
truncation:
max_output_bytes: 1048576 # 1MB per stream; 0 = disabled
max_lines: 200
head_lines: 100
tail_lines: 80
Command Rewrite
The bash.rewrite template rewrites every Bash tool command before execution. The template receives {{ .Command }} (the original command) and sprig functions. Empty = no rewrite.
Configure in features/tools.yaml:
bash:
rewrite: "rtk {{ .Command }}"
Use cases:
- Tool wrapping: rtk {{ .Command }}
- Environment setup: source .venv/bin/activate && {{ .Command }}
- Containerized execution: docker exec -i mycontainer sh -c '{{ .Command }}'
- Logging: {{ .Command }} | tee /tmp/aura-bash.log
The rewrite applies inside Execute(), so it works in both aura run (conversation) and aura tools (direct execution) paths. This is a key difference from plugin BeforeToolExecution hooks, which only fire in the conversation pipeline (aura run) — aura tools calls Execute() directly and does not run injector hooks. If you need Bash command transformation that works everywhere, use bash.rewrite. If you only need it during conversations, a plugin hook is equivalent.
When both are active, they compose: the plugin BeforeToolExecution hook modifies args["command"] first, then bash.rewrite transforms the already-modified command inside Execute().
Pre-hooks see the original command; post-hooks see the rewritten result. The @Bash[...] directive creates a fresh Bash tool with the same rewrite template, so rewrites DO apply to directives.
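The rewrite step can be sketched with the standard library's text/template alone (the real feature also exposes sprig functions). The applyRewrite helper is an assumption, not Aura's API:

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// applyRewrite runs a bash.rewrite-style template against a command.
// An empty template means no rewrite, matching the docs above.
func applyRewrite(tmpl, command string) (string, error) {
	if tmpl == "" {
		return command, nil
	}
	t, err := template.New("rewrite").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var out strings.Builder
	err = t.Execute(&out, struct{ Command string }{command})
	return out.String(), err
}

func main() {
	cmd, _ := applyRewrite("rtk {{ .Command }}", "go test ./...")
	fmt.Println(cmd) // rtk go test ./...
}
```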
Failed Tool Call Pruning
When a tool call fails (tool not found, policy block, parse error, etc.), the error message is injected into the conversation as a tool result. These ephemeral errors are automatically pruned from history after one turn — preventing stale error messages from permanently consuming context tokens. The pruning removes both the failed tool call and its error result as a pair, maintaining provider API safety.
Persistent Approval Rules
When a tool call requires confirmation (via confirm policy) and the user approves it, the approval can be saved as a persistent rule or a session-scoped rule:
- Session: The approval is stored in-memory and auto-approves matching patterns for the duration of the process. Cleared on exit.
- Project: Stored in .aura/config/rules/approvals.yaml. Persists across sessions. File writes are protected by advisory file locking (flock) for concurrent-instance safety.
- Global: Stored in ~/.aura/config/rules/approvals.yaml. Persists across sessions and projects.
Persistent approvals (project and global) are merged into the auto tier of the effective tool policy — approved patterns auto-execute on subsequent runs without re-prompting.
Approval Pattern Derivation
When a user approves a tool call, the derived pattern is scoped to the tool’s primary argument:
- Bash: "Bash:git commit*" — command prefix (multi-word commands like git commit keep the subcommand)
- File tools (Read, Write, Glob, Rg, Ls, Vision, Transcribe): "Write:/tmp/*" — directory of the path argument
- Other tools (Patch, Mkdir, MCP/plugin tools): "Patch" — bare tool name (no argument extraction)
Session approvals use exact map key lookup (same-directory only). Persisted approvals use wildcard matching (crosses subdirectories). A bare-name pattern like "Write" in approvals.yaml still matches all Write calls regardless of path.
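The derivation rules above can be sketched as a switch over the tool name. This is an illustrative reconstruction, not the actual code; the argument keys ("command", "path") are assumptions:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// derivePattern builds an approval pattern from a tool call: Bash keeps
// a two-word command prefix, file tools use the path's directory, and
// everything else falls back to the bare tool name. Sketch only.
func derivePattern(tool string, args map[string]string) string {
	switch tool {
	case "Bash":
		words := strings.Fields(args["command"])
		if len(words) >= 2 {
			return fmt.Sprintf("%s:%s %s*", tool, words[0], words[1])
		}
		if len(words) == 1 {
			return fmt.Sprintf("%s:%s*", tool, words[0])
		}
	case "Read", "Write", "Glob", "Rg", "Ls", "Vision", "Transcribe":
		if p := args["path"]; p != "" {
			return fmt.Sprintf("%s:%s/*", tool, filepath.Dir(p))
		}
	}
	return tool // bare name: Patch, Mkdir, MCP/plugin tools
}

func main() {
	fmt.Println(derivePattern("Bash", map[string]string{"command": "git commit -m msg"}))
	// → Bash:git commit*
	fmt.Println(derivePattern("Write", map[string]string{"path": "/tmp/out.txt"}))
	// → Write:/tmp/*
	fmt.Println(derivePattern("Patch", nil)) // → Patch
}
```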
Confirmation Dialog
Confirmation dialogs display the tool name, a one-line description of what the tool does, and the call detail (file path, command, etc.). For file-modifying tools (Patch, Write), a unified diff preview shows exactly what will change.
- TUI: A scrollable pager with syntax-highlighted diff. Keybindings: a Allow, s Session, p Project, g Global, d Deny. Arrow keys and pgup/pgdn scroll.
- Simple: Highlighted diff printed above the numbered confirm menu.
- Web: Diff rendered in the confirmation dialog with syntax highlighting.
- Headless: No change — auto-approves immediately.
Preview generation is best-effort. If the preview fails (e.g. file not found), the confirmation falls back to the detail-only display (file path).
Parallel Execution
When the LLM emits multiple tool calls in a single response, independent tools run concurrently. Tools opt into sequential execution by implementing the tool.ParallelOverride interface and returning false from Parallel(). The default is parallel (tools that don’t implement the interface run concurrently). Non-parallel tools (Ask, Done, TodoCreate, TodoProgress, LoadTools, LspRestart) run sequentially after all parallel tools complete. The Batch tool uses the same two-pass dispatch for its sub-calls — all three levels (global toggle, config override, code interface) are respected.
Disable globally or per-agent in features/tools.yaml:
parallel: false # sequential execution (default: true)
Override per-tool in Tool Definitions — config wins over code-level Parallel():
# .aura/config/tools/bash.yaml
bash:
parallel: false # force sequential despite code declaring parallel-safe
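The two-pass dispatch can be sketched as: run all parallel-safe calls concurrently, wait, then run the sequential ones in order. The real code uses errgroup; this dependency-free sketch uses sync.WaitGroup, and the call type is invented for illustration:

```go
package main

import (
	"fmt"
	"sync"
)

type call struct {
	name     string
	parallel bool
	run      func() string
}

// dispatch runs parallel-safe calls concurrently (pass 1), then the
// non-parallel calls sequentially in original order (pass 2).
func dispatch(calls []call) map[string]string {
	results := make(map[string]string, len(calls))
	var mu sync.Mutex
	var wg sync.WaitGroup
	for _, c := range calls {
		if !c.parallel {
			continue
		}
		wg.Add(1)
		go func(c call) {
			defer wg.Done()
			out := c.run()
			mu.Lock()
			results[c.name] = out
			mu.Unlock()
		}(c)
	}
	wg.Wait() // pass 1 completes before any sequential tool runs
	for _, c := range calls {
		if !c.parallel {
			results[c.name] = c.run()
		}
	}
	return results
}

func main() {
	r := dispatch([]call{
		{"Read", true, func() string { return "file contents" }},
		{"Rg", true, func() string { return "3 matches" }},
		{"TodoProgress", false, func() string { return "updated" }},
	})
	fmt.Println(r["Read"], r["TodoProgress"])
}
```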
Max Steps
After max_steps iterations (default: 50), tools are disabled and the LLM must respond with text only. Configure in features/tools.yaml. Override per-run with --max-steps or per-task with max_steps: in task definitions.
Token Budget
token_budget sets a cumulative token limit (input + output) for the session. Once reached, the assistant stops immediately. Default: 0 (disabled). Configure in features/tools.yaml, override per-run with --token-budget (env: AURA_TOKEN_BUDGET), or per-task in task definitions.
Unlike max_steps (which allows one final text-only response), the token budget is a hard stop with no grace period — cumulative spend doesn’t benefit from a final response.