# Build Agents
Spawnbase gives you two ways to build agents: describe what you want in the copilot chat, or drag-and-drop nodes on the visual canvas. Most people start with the copilot and refine on the canvas.
## The Copilot

Open the copilot panel (press `A` or click the chat icon in the footer) and describe what you want to automate:
> Every morning at 9am, pull unresolved Sentry issues from the last 24 hours, group them by root cause, rank by severity, and post a summary to #engineering in Slack

The copilot understands your intent, plans the workflow structure, and builds the nodes on the canvas. You review the plan, approve it, and the agent is ready to configure and deploy.
The copilot can also modify existing agents — ask it to add a step, change a schedule, or swap an integration.
## The Canvas

The canvas is a visual workspace where your agent’s nodes and connections live. Each node is a step in the workflow, and edges show how data flows between them.
Working with the canvas:
- Add nodes — press `N` to open the node picker, or right-click the canvas
- Connect nodes — drag from a node’s output handle (right) to another node’s input handle (left)
- Configure nodes — click any node to open its configuration panel on the left
- Pan and zoom — scroll to zoom, drag the canvas to pan (or switch to select mode with `V`)
- Annotate — add sticky notes to document your workflow logic
## Node Types

Every agent is built from four types of nodes:
### Triggers — When the agent runs

Every agent starts with a trigger. It determines when and how the agent executes.
| Trigger | What it does |
|---|---|
| Schedule | Runs on a cron schedule — hourly, daily, weekly, or custom |
| Manual | Runs on demand via button click or API call |
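A custom schedule would presumably be expressed in standard 5-field cron syntax (minute, hour, day, month, weekday); the docs don't show the exact format Spawnbase accepts, so treat this as an illustration of cron semantics rather than its scheduling engine. The toy matcher below checks a datetime against an expression using `*` or exact numbers only:

```python
from datetime import datetime

def cron_matches(expr: str, dt: datetime) -> bool:
    """Simplified 5-field cron check: each field is '*' or an exact number.

    Real cron also supports ranges, lists, and steps (e.g. '*/15'), and
    numbers weekdays 0=Sunday; this sketch ignores those details.
    """
    fields = expr.split()  # minute hour day month weekday
    values = [dt.minute, dt.hour, dt.day, dt.month, dt.weekday()]
    return all(f == "*" or int(f) == v for f, v in zip(fields, values))

# "Every morning at 9am" from the copilot example:
print(cron_matches("0 9 * * *", datetime(2024, 6, 3, 9, 0)))   # True
print(cron_matches("0 9 * * *", datetime(2024, 6, 3, 10, 0)))  # False
```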
### AI Agent — Reasoning and decisions

The AI Agent node is what makes Spawnbase agents intelligent. It calls an LLM (Claude, GPT-4o, Gemini, Grok, and others) to reason about data, make decisions, and generate output.
Without tools, it makes a single LLM call — good for summarizing, classifying, or generating text. With tools enabled (web search, code sandbox, MCP servers), it becomes an autonomous agent that reasons, acts, and iterates until the task is done.
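The "reason, act, iterate" behavior can be pictured as a loop: the model either answers directly or requests a tool call, whose result is fed back into the conversation. This is a conceptual sketch only; the AI Agent node's real implementation and tool interface are internal to Spawnbase, and the names here (`call_llm`, `TOOLS`) are illustrative stand-ins:

```python
def call_llm(messages):
    """Stand-in for the model call (Claude, GPT-4o, ...).

    A real implementation would hit the provider's API and return either a
    final answer or a tool request; this stub answers immediately.
    """
    return {"tool": None, "text": "summary of grouped issues"}

# Illustrative tool registry (web search, code sandbox, MCP servers, ...).
TOOLS = {"web_search": lambda query: f"results for {query!r}"}

def run_agent(task: str, max_steps: int = 5) -> str:
    """Loop until the model stops requesting tools or the step budget runs out."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        if reply["tool"] is None:  # no tool requested: the task is done
            return reply["text"]
        result = TOOLS[reply["tool"]](reply.get("args", ""))
        messages.append({"role": "tool", "content": result})
    return "step limit reached"

print(run_agent("group Sentry issues by root cause"))
```

With no tools enabled, the loop degenerates to the single LLM call described above.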
### App Actions — Connecting to your tools

App action nodes call external services — send a Slack message, create a Linear issue, query PostHog analytics, or any of 25+ supported apps.
You pick an app, connect your account (OAuth or API key), choose a tool, and map inputs from upstream nodes.
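Conceptually, a configured app action boils down to an app, a tool, and a set of inputs whose values can reference upstream outputs. The field names below are hypothetical, not Spawnbase's actual node schema; they only illustrate the app/tool/input-mapping shape described above:

```json
{
  "app": "slack",
  "tool": "send_message",
  "inputs": {
    "channel": "#engineering",
    "text": "{{ai_summary.text}}"
  }
}
```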
### Annotations

Sticky notes for documenting your workflow. Non-executable — they’re just for you and your team.
## Data Flow

Data passes between nodes through template variables. Each node’s output is available to downstream nodes:
- `{{schedule_trigger.output}}`
- `{{sentry_issues.result}}`
- `{{ai_summary.text}}`

Click any node to see its output schema and available fields. The configuration panel shows which variables you can reference.
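The substitution itself works like any template engine: each `{{node_id.field}}` placeholder is replaced with the matching field of that node's output. The resolver below is an illustrative sketch of the idea, not Spawnbase's actual engine:

```python
import re

def resolve(template: str, outputs: dict) -> str:
    """Replace {{node_id.field}} placeholders with values from upstream nodes."""
    def substitute(match):
        node_id, field = match.group(1).split(".", 1)
        return str(outputs[node_id][field])
    return re.sub(r"\{\{\s*([\w.]+)\s*\}\}", substitute, template)

# Upstream node outputs keyed by node id:
outputs = {"ai_summary": {"text": "3 issues, 1 critical"}}
print(resolve("Daily report: {{ai_summary.text}}", outputs))
# Daily report: 3 issues, 1 critical
```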
## Testing and Execution

Before deploying, test your agent to make sure it works:
- Click Run in the header
- Watch execution in real-time in the bottom panel — each node lights up as it runs
- Click any node during or after a run to inspect its input/output
- Fix issues and re-run
The execution panel shows timing, status, and full I/O for every step. If a node fails, you see the exact error and can fix it without guessing.
## Keyboard Shortcuts

| Shortcut | Action |
|---|---|
| `A` | Open copilot chat |
| `N` | Open node picker |
| `V` | Switch to select mode |
| `H` | Switch to pan mode |
| `Backspace` | Delete selected node |
| `Cmd + Z` | Undo |
| `?` | Show all shortcuts |