# Agents
Agents in almyty are composable AI pipelines that orchestrate LLM calls, tool executions, conditional logic, and data transformations into repeatable workflows. Each agent is defined as a directed acyclic graph (DAG) of nodes connected by edges.
## Key Concepts

### Pipelines
A pipeline is the core structure of an agent. It consists of:
- Nodes — individual processing steps (LLM calls, tool invocations, transforms, etc.)
- Edges — connections that define data flow between nodes
Every pipeline starts with an Input node and ends with an Output node. Between them, you wire up any combination of the 9 available node types.
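The node/edge structure above can be sketched as plain data types. This is an illustrative Python model only — the field names mirror the API examples in this page, but the real almyty payloads may carry additional fields:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    id: str
    type: str                                  # e.g. "input", "llm_call", "output"
    data: dict = field(default_factory=dict)   # node-specific configuration

@dataclass
class Edge:
    source: str                                # upstream node id
    target: str                                # downstream node id

# A minimal Input -> LLM Call -> Output pipeline:
nodes = [Node("input_1", "input"), Node("llm_1", "llm_call"), Node("output_1", "output")]
edges = [Edge("input_1", "llm_1"), Edge("llm_1", "output_1")]
```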
### Execution Model
When an agent is invoked, almyty:
- Validates the input against the Input node's schema
- Topologically sorts the pipeline nodes
- Executes each node in order, passing data along edges
- Resolves template expressions (e.g. `{{nodes.llm_1.output}}`) at runtime
- Returns the Output node's result
Execution is synchronous by default. Each node runs only after all its upstream dependencies have completed.
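The steps above can be sketched with Python's standard `graphlib`. Everything here is a toy stand-in for illustration — `run_pipeline`, the `template` field, and the "LLM" that simply echoes its resolved prompt are inventions, not the real almyty runtime:

```python
import re
from graphlib import TopologicalSorter

def run_pipeline(nodes, edges, input_data):
    # Build a predecessor map, then topologically sort the DAG so each
    # node runs only after its upstream dependencies have completed.
    deps = {n["id"]: set() for n in nodes}
    for e in edges:
        deps[e["target"]].add(e["source"])
    order = list(TopologicalSorter(deps).static_order())

    results = {}
    for node_id in order:
        node = next(n for n in nodes if n["id"] == node_id)
        if node["type"] == "input":
            results[node_id] = input_data
            continue

        # Resolve {{...}} template expressions against prior results.
        def resolve(match):
            path = match.group(1).strip()
            if path.startswith("input."):
                return str(input_data[path.split(".", 1)[1]])
            if path.startswith("nodes."):
                _, nid, key = path.split(".")
                return str(results[nid][key])
            return match.group(0)

        rendered = re.sub(r"\{\{(.*?)\}\}", resolve, node["data"].get("template", ""))
        # Stand-in for an LLM call or transform: just emit the resolved text.
        results[node_id] = {"output": rendered}
    return results[order[-1]]

nodes = [
    {"id": "input_1", "type": "input", "data": {}},
    {"id": "llm_1", "type": "llm_call", "data": {"template": "Research: {{input.query}}"}},
    {"id": "output_1", "type": "output", "data": {"template": "{{nodes.llm_1.output}}"}},
]
edges = [{"source": "input_1", "target": "llm_1"}, {"source": "llm_1", "target": "output_1"}]
print(run_pipeline(nodes, edges, {"query": "owls"}))  # {'output': 'Research: owls'}
```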
### Agent Lifecycle
| Status | Description |
|---|---|
| `draft` | Agent is being built; not yet invokable |
| `active` | Agent is live and accepting invocations |
| `inactive` | Agent is paused; invocations will be rejected |
| `error` | Agent has a configuration issue |
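The lifecycle implies a simple invocation gate — only `active` agents accept requests. A minimal sketch of that check (the enum and helper are hypothetical, not part of any almyty SDK):

```python
from enum import Enum

class AgentStatus(Enum):
    DRAFT = "draft"
    ACTIVE = "active"
    INACTIVE = "inactive"
    ERROR = "error"

def can_invoke(status: AgentStatus) -> bool:
    # Only active agents accept invocations; every other state rejects.
    return status is AgentStatus.ACTIVE

assert can_invoke(AgentStatus.ACTIVE)
assert not can_invoke(AgentStatus.INACTIVE)
```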
## Creating an Agent

### Via the UI
- Navigate to Agents and click Create Agent
- Give it a name and optional description
- The default pipeline has Input -> LLM Call -> Output nodes
- Use the visual editor to add, remove, and connect nodes
- Configure each node by clicking on it in the canvas
### Via the API

```bash
curl -X POST https://api.almyty.com/agents \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Research Agent",
    "description": "Searches and summarizes information",
    "pipeline": {
      "nodes": [
        { "id": "input_1", "type": "input", "data": {
          "schema": { "type": "object", "properties": { "query": { "type": "string" } }, "required": ["query"] }
        }},
        { "id": "llm_1", "type": "llm_call", "data": {
          "providerId": "provider-uuid",
          "systemPrompt": "You are a research assistant.",
          "userPromptTemplate": "Research: {{input.query}}"
        }},
        { "id": "output_1", "type": "output", "data": {
          "mapping": "{{nodes.llm_1.output}}"
        }}
      ],
      "edges": [
        { "id": "e1", "source": "input_1", "target": "llm_1" },
        { "id": "e2", "source": "llm_1", "target": "output_1" }
      ]
    }
  }'
```

### From a Template
almyty ships with several built-in templates:
| Template | Description |
|---|---|
| Simple Chat | Input -> LLM -> Output. The simplest possible agent. |
| Research Agent | Chains multiple LLM calls with web search tools. |
| Tool-Augmented | LLM with tool calling for dynamic API interaction. |
| Multi-Step Pipeline | Conditional branching with data transformations. |
Select a template when creating an agent to get a pre-wired pipeline.
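However a pipeline is built — in the editor, via the API, or from a template — its edges must reference existing node ids and it needs Input and Output endpoints. A rough client-side validation sketch (the `validate_pipeline` helper is hypothetical; the server performs its own validation):

```python
def validate_pipeline(pipeline: dict) -> list[str]:
    """Return a list of problems; an empty list means the wiring looks sound."""
    node_ids = {n["id"] for n in pipeline["nodes"]}
    problems = []
    # Every edge endpoint must name a node that actually exists.
    for edge in pipeline["edges"]:
        for end in ("source", "target"):
            if edge[end] not in node_ids:
                problems.append(f"edge {edge['id']}: unknown {end} {edge[end]!r}")
    # Every pipeline starts with an Input node and ends with an Output node.
    types = [n["type"] for n in pipeline["nodes"]]
    if "input" not in types:
        problems.append("pipeline has no input node")
    if "output" not in types:
        problems.append("pipeline has no output node")
    return problems

pipeline = {
    "nodes": [
        {"id": "input_1", "type": "input"},
        {"id": "llm_1", "type": "llm_call"},
        {"id": "output_1", "type": "output"},
    ],
    "edges": [
        {"id": "e1", "source": "input_1", "target": "llm_1"},
        {"id": "e2", "source": "llm_1", "target": "output_1"},
    ],
}
assert validate_pipeline(pipeline) == []
```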
## Invoking an Agent
```bash
curl -X POST https://api.almyty.com/agents/{id}/invoke \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "input": { "query": "What is the weather in Berlin?" }
  }'
```

Response:

```json
{
  "success": true,
  "data": {
    "output": "The current weather in Berlin is 12°C with partly cloudy skies.",
    "executionId": "exec-uuid",
    "duration": 2340,
    "tokensUsed": 156
  }
}
```

## Import & Export
Export an agent as JSON for version control or sharing:
```bash
# Export
curl https://api.almyty.com/agents/{id}/export \
  -H "Authorization: Bearer $TOKEN" > agent.json

# Import
curl -X POST https://api.almyty.com/agents/import \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d @agent.json
```
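Because an exported agent is plain JSON, the round trip is lossless and diffs cleanly under version control. A local sketch of that round trip (the payload here is a trimmed, hypothetical agent definition, not the full export format):

```python
import json
import os
import tempfile

agent = {
    "name": "Research Agent",
    "pipeline": {
        "nodes": [{"id": "input_1", "type": "input", "data": {}}],
        "edges": [],
    },
}

# "Export": write the agent definition to agent.json.
path = os.path.join(tempfile.mkdtemp(), "agent.json")
with open(path, "w") as f:
    json.dump(agent, f, indent=2)

# "Import": read it back and confirm a lossless round trip.
with open(path) as f:
    assert json.load(f) == agent
```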