AI Agent
The AI Agent node executes LLM calls with optional tool use. Without tools, it makes a single LLM call. With tools enabled, it becomes an autonomous agent that can reason, use tools, and iterate.
Configuration
Model Selection

Select a provider and model:
| Provider | Popular Models |
|---|---|
| Anthropic | claude-sonnet-4-5, claude-opus-4-5, claude-haiku-4-5 |
| OpenAI | gpt-4o, gpt-4-turbo, o1, o3 |
| xAI | grok-3, grok-4 |
| Google AI Studio | gemini-2.5-pro, gemini-2.5-flash |
| Vertex AI | gemini-2.5-pro, gemini-2.5-flash |
| Groq | llama-3.3-70b-versatile, mixtral-8x7b |
| Workers AI | @cf/meta/llama-3.2-3b-instruct |
Default: Anthropic / claude-sonnet-4-5
Prompts
System Prompt (optional)
Sets the AI’s role and behavior. Example:

```
You are a data analyst. Be concise and factual.
```

User Prompt (required)
The task for the AI. Use template variables to reference data from upstream nodes:

```
Analyze these errors and prioritize by severity:
{{sentry.issues}}
```

Assistant Messages (optional)
Add example responses to guide output format. Useful for few-shot prompting.
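For instance, a few-shot pair can show the model the exact response shape you want. The chat-message layout below is a generic illustration, not the node's own storage format:

```json
[
  { "role": "user", "content": "Summarize: The deploy failed twice, then succeeded on the third attempt." },
  { "role": "assistant", "content": "- Deploy failed twice\n- Third attempt succeeded" }
]
```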
Output Format
Controls how the AI structures its response:
| Format | Use Case | Configuration |
|---|---|---|
| Text | Summaries, explanations, chat | Default, no config needed |
| One result | Structured data extraction | Define JSON schema |
| Many results | Lists, batch processing | Define element schema |
| Classify | Categorization, routing | Define options list |
Text (Default)
Free-form text response. Best for summaries, explanations, and conversational output.
One Result (Object)
Returns a single structured object. Define the schema:

```json
{
  "type": "object",
  "properties": {
    "summary": { "type": "string" },
    "severity": { "type": "string", "enum": ["critical", "high", "medium", "low"] },
    "affectedUsers": { "type": "number" }
  },
  "required": ["summary", "severity"]
}
```
Many Results (Array)

Returns a list of structured objects. Define what each element looks like:

```json
{
  "type": "object",
  "properties": {
    "title": { "type": "string" },
    "priority": { "type": "number" }
  }
}
```
Classify (Choice)

Model selects from predefined options. Good for routing and categorization:

- critical
- high
- medium
- low

Tools

Adding tools transforms a single LLM call into an autonomous agent that can reason and take actions.
System Tools
| Tool | Description |
|---|---|
| Web Fetch | Fetch a URL and read its content as text |
| Web Search | Search the web and get result links |
| Sandbox | Execute bash commands, read/write files |
MCP Servers
Connect Model Context Protocol servers to give the AI access to external tools:
- Sentry (list issues, search events)
- GitHub (create issues, read files)
- Slack (send messages)
- Custom MCP servers from your workspace
When tools are enabled, the agent will:
- Analyze the task
- Decide which tools to use
- Call tools and observe results
- Iterate until the task is complete
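For a concrete sense of the loop, here is a purely illustrative trace of a pricing-research run. The step structure and field names are hypothetical, not the node's actual execution log format:

```json
[
  { "step": 1, "action": "tool_call", "tool": "Web Search", "input": "Acme Corp pricing 2025" },
  { "step": 2, "action": "tool_result", "tool": "Web Search", "output": "3 result links" },
  { "step": 3, "action": "tool_call", "tool": "Web Fetch", "input": "https://example.com/pricing" },
  { "step": 4, "action": "tool_result", "tool": "Web Fetch", "output": "Pricing page text" },
  { "step": 5, "action": "final_answer", "output": "Pro plan is $20 per user per month, billed annually." }
]
```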
Advanced Settings
Temperature (0-2)
Controls randomness. Lower = more deterministic, higher = more creative.
- 0: Deterministic (same input = same output)
- 1: Balanced (default)
- 2: Maximum creativity
Max Steps (1-50)
Maximum iterations when tools are enabled. The agent stops when:
- Task is complete
- Max steps reached
- Error occurs
Default: 20 steps
Behavior
Without Tools

Single LLM call:

```
Input → Prompt → Model → Output
```

With Tools

Autonomous agent loop:

```
Input → Prompt → Model → [Tool Call → Result]* → Output
```

The agent continues calling tools until it determines the task is complete or hits max steps.
Examples
Simple: Summarize Text

- Model: claude-sonnet-4-5
- Prompt: "Summarize this article in 3 bullet points: {{http.response}}"
- Output: Text
- Tools: None
Structured: Extract Data

- Model: claude-sonnet-4-5
- Prompt: "Extract contact info from: {{webhook.body}}"
- Output: One result
- Schema:
  - name: string
  - email: string
  - phone: string (optional)
- Tools: None
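Written in the same JSON Schema form as the One Result example above (an assumption about how this field list maps), the schema might look like this sketch, with phone left out of required to mark it optional:

```json
{
  "type": "object",
  "properties": {
    "name": { "type": "string" },
    "email": { "type": "string" },
    "phone": { "type": "string" }
  },
  "required": ["name", "email"]
}
```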
Section titled “Agent: Research Task”Model: claude-opus-4-5System: "You are a research assistant."Prompt: "Find the latest pricing for {{input.competitor}} and summarize."Output: One resultTools: Web Search, Web FetchMax Steps: 10Classification: Route Tickets
Classification: Route Tickets

- Model: claude-haiku-4-5
- Prompt: "Classify this support ticket: {{zendesk.ticket}}"
- Output: Classify
- Options:
  - billing
  - technical
  - feature-request
  - other
Best Practices

- Use text output for simple tasks - Don’t over-structure when plain text works
- Use structured output for downstream processing - When other nodes need to parse the result
- Enable sandbox only when needed - It grants filesystem access
- Set reasonable max steps - 5-15 for most tasks, up to 50 for complex research
- Use system prompts for consistent behavior - Define role and constraints upfront
See Also
- Nodes Overview - All node types
- Schedule Trigger - Run agents on a schedule
- Build Agents - Building agents with the canvas