LLM Bindings
Dagger can be used as a runtime and programming environment for AI agents. Dagger provides an LLM core type that enables native integration of Large Language Models (LLMs) in your workflows.
A key feature of Dagger's LLM integration is out-of-the-box support for tool calling using Dagger Functions: an LLM can automatically discover and use any and all available Dagger Functions in your workflow. Other benefits include reproducible execution, end-to-end observability, multi-model support, rapid iteration, and easy integration.
Here's an example of Dagger's LLM bindings in action:
- System shell

dagger <<EOF
llm |
  with-container \$(container | from alpine) |
  with-prompt "You have an alpine container. Install tools to develop with Python." |
  container |
  terminal
EOF

- Dagger Shell

llm | with-container $(container | from alpine) | with-prompt "You have an alpine container. Install tools to develop with Python." | container | terminal
Prompt mode
Dagger Shell also lets you interact with the attached LLM using natural language commands. Each input builds upon previous interactions, creating a prompt chain that lets you execute complex workflows without needing to know the exact syntax of the underlying Dagger API.
"Prompt mode" can be accessed at any time in the Dagger Shell by typing >
. Here's an example:
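The session below is illustrative rather than verbatim output: each > line is a natural-language instruction, and each instruction builds on the state produced by the one before it.

> build an alpine container
> install git and curl in it
> open a terminal in the container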
Agent loop
Consider the following Dagger Function:
- Go
package main

import (
	"dagger/coding-agent/internal/dagger"
)

type CodingAgent struct{}

// Write a Go program
func (m *CodingAgent) GoProgram(
	// The programming assignment, e.g. "write me a curl clone"
	assignment string,
) *dagger.Container {
	result := dag.LLM().
		WithToyWorkspace(dag.ToyWorkspace()).
		WithPromptVar("assignment", assignment).
		WithPrompt(`
		You are an expert go programmer. You have access to a workspace.
		Use the default directory in the workspace.
		Do not stop until the code builds.
		Do not use the container.
		Complete the assignment: $assignment
		`).
		ToyWorkspace().
		Container()
	return result
}
- Python

import dagger
from dagger import dag, function, object_type


@object_type
class CodingAgent:
    @function
    def go_program(self, assignment: str) -> dagger.Container:
        """Write a Go program"""
        result = (
            dag.llm()
            .with_toy_workspace(dag.toy_workspace())
            .with_prompt_var("assignment", assignment)
            .with_prompt(
                """
                You are an expert go programmer. You have access to a workspace.
                Use the default directory in the workspace.
                Do not stop until the code builds.
                Do not use the container.
                Complete the assignment: $assignment
                """
            )
            .toy_workspace()
            .container()
        )
        return result
- TypeScript

import { dag, Container, object, func } from "@dagger.io/dagger"

@object()
export class CodingAgent {
  /**
   * Write a Go program
   */
  @func()
  goProgram(
    /**
     * The programming assignment, e.g. "write me a curl clone"
     */
    assignment: string,
  ): Container {
    const result = dag
      .llm()
      .withToyWorkspace(dag.toyWorkspace())
      .withPromptVar("assignment", assignment)
      .withPrompt(
        `
        You are an expert go programmer. You have access to a workspace.
        Use the default directory in the workspace.
        Do not stop until the code builds.
        Do not use the container.
        Complete the assignment: $assignment
        `,
      )
      .toyWorkspace()
      .container()
    return result
  }
}
This Dagger Function creates a new LLM, gives it a workspace container with an assignment, and prompts it to complete the assignment. The LLM then runs in a loop, calling tools and iterating on its work until it completes the assignment. This loop happens entirely inside the LLM object, so the value of result is the workspace container with the completed assignment.
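You can then invoke this Dagger Function from the CLI. As a sketch (assuming the module is named coding-agent, per the Go import path above): Dagger exposes the function in kebab-case, turns its parameters into flags, and chaining terminal at the end opens an interactive session in the returned container:

dagger call go-program --assignment "write me a curl clone" terminal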
Supported models
Dagger supports a wide range of popular language models, including those from OpenAI, Anthropic, and Google. Dagger can access these models either through their respective cloud APIs or through local providers like Ollama. Dagger uses your system's standard environment variables to route LLM requests.
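For example, a minimal provider setup is just an exported key. The variable names below are the providers' conventional ones; treat the Ollama routing as an assumption and consult Dagger's LLM configuration documentation for the authoritative list:

# OpenAI
export OPENAI_API_KEY=<your-key>

# Anthropic
export ANTHROPIC_API_KEY=<your-key>

# Local models via Ollama's OpenAI-compatible endpoint (assumption:
# Dagger honors OPENAI_BASE_URL the same way OpenAI SDKs do)
export OPENAI_BASE_URL=http://localhost:11434/v1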
Observability
Dagger provides end-to-end tracing of prompts, tool calls, and even low-level system operations. All agent state changes are observable in real time.
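For instance, connecting the CLI to Dagger Cloud surfaces these traces in a browser. A minimal sketch, assuming dagger login as the authentication step and reusing the coding-agent module from above:

# one-time authentication with Dagger Cloud
dagger login
# subsequent runs emit a link to their full trace
dagger call go-program --assignment "write me a curl clone"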