# LLM Integration
Dagger's `LLM` core type includes API methods to attach objects to a Large Language Model (LLM), send prompts, and receive responses.
## Prompts
Use the `LLM.withPrompt()` API method to append prompts to the LLM context:
- System shell

```shell
dagger <<EOF
llm |
with-directory https://github.com/dagger/dagger#main:/docs |
with-prompt "You have a directory." |
with-prompt "Use the tools in the directory to count the number of Markdown files."
EOF
```

- Dagger Shell

```shell
llm |
with-directory https://github.com/dagger/dagger#main:/docs |
with-prompt "You have a directory." |
with-prompt "Use the tools in the directory to count the number of Markdown files."
```
For longer or more complex prompts, use the `LLM.withPromptFile()` API method to read the prompt from a text file:
- System shell

```shell
dagger <<EOF
llm |
with-directory https://github.com/dagger/dagger#main:/docs |
with-prompt-file prompt.txt
EOF
```

- Dagger Shell

```shell
llm |
with-directory https://github.com/dagger/dagger#main:/docs |
with-prompt-file $(host | file ./prompt.txt)
```
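Inside a Dagger module, the prompt file can instead be passed in as a `File` argument. A sketch continuing the hypothetical `Example` module above, assuming `WithPromptFile` accepts a `*dagger.File` just as `with-prompt-file` does in the shell:

```go
// RunPromptFile reads the prompt from a file supplied by the caller,
// e.g. `dagger call run-prompt-file --prompt=./prompt.txt`
// (hypothetical function name).
func (m *Example) RunPromptFile(ctx context.Context, prompt *dagger.File) (string, error) {
	docs := dag.Git("https://github.com/dagger/dagger").
		Branch("main").
		Tree().
		Directory("/docs")
	return dag.LLM().
		WithDirectory(docs).
		WithPromptFile(prompt).
		LastReply(ctx)
}
```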
Dagger supports variables in prompts. This lets you interpolate the results of other operations into an LLM prompt:
- System shell

```shell
dagger <<EOF
source=\$(container |
from alpine |
with-directory /src https://github.com/dagger/dagger |
directory /src)
contents=\$(llm |
with-directory \$source |
with-prompt "You have a directory with source code." |
with-prompt "The directory also has some tools available." |
with-prompt "Use the tools in the directory to read the first paragraph of the README.md file in the directory." |
with-prompt "Reply with only the selected text." |
last-reply)
llm |
with-prompt "Here is some text: \$contents. Translate it to French." |
last-reply
EOF
```

- Dagger Shell

```shell
source=$(container |
from alpine |
with-directory /src https://github.com/dagger/dagger |
directory /src)
contents=$(llm |
with-directory $source |
with-prompt "You have a directory with source code." |
with-prompt "The directory also has some tools available." |
with-prompt "Use the tools in the directory to read the first paragraph of the README.md file in the directory." |
with-prompt "Reply with only the selected text." |
last-reply)
llm |
with-prompt "Here is some text: $contents. Translate it to French." |
last-reply
```
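The same interpolation pattern is available from the SDKs through `WithPromptVar`, which the coding-agent example later on this page also uses. A Go sketch, continuing the hypothetical `Example` module and assumptions from the earlier sketches:

```go
// TranslateReadme extracts the first paragraph of the README with one LLM,
// then feeds that reply into a second LLM via a prompt variable.
func (m *Example) TranslateReadme(ctx context.Context) (string, error) {
	source := dag.Container().
		From("alpine").
		WithDirectory("/src", dag.Git("https://github.com/dagger/dagger").Head().Tree()).
		Directory("/src")
	contents, err := dag.LLM().
		WithDirectory(source).
		WithPrompt("You have a directory with source code.").
		WithPrompt("Use the tools in the directory to read the first paragraph of the README.md file in the directory.").
		WithPrompt("Reply with only the selected text.").
		LastReply(ctx)
	if err != nil {
		return "", err
	}
	// $contents is substituted from the variable registered below.
	return dag.LLM().
		WithPromptVar("contents", contents).
		WithPrompt("Here is some text: $contents. Translate it to French.").
		LastReply(ctx)
}
```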
## Responses
Use the `LLM.lastReply()` API method to obtain the last reply from the LLM:
- System shell

```shell
dagger <<EOF
llm |
with-container \$(container | from alpine | with-exec apk add curl) |
with-prompt "You have a container with curl installed." |
with-prompt "Use curl to browse docs.dagger.io and summarize in one paragraph the types of documentation available" |
last-reply
EOF
```

- Dagger Shell

```shell
llm |
with-container $(container | from alpine | with-exec apk add curl) |
with-prompt "You have a container with curl installed." |
with-prompt "Use curl to browse docs.dagger.io and summarize in one paragraph the types of documentation available" |
last-reply
```
To get the complete message history, use the `LLM.history()` API method.
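For instance, a Go sketch of the curl example above, under the same assumptions as the earlier sketches (`History` is assumed here to return the conversation as a list of strings):

```go
// SummarizeDocs runs the curl task and returns only the model's final reply.
func (m *Example) SummarizeDocs(ctx context.Context) (string, error) {
	env := dag.LLM().
		WithContainer(dag.Container().From("alpine").WithExec([]string{"apk", "add", "curl"})).
		WithPrompt("You have a container with curl installed.").
		WithPrompt("Use curl to browse docs.dagger.io and summarize in one paragraph the types of documentation available")

	// lastReply returns only the final message; to inspect the entire
	// exchange, including intermediate tool calls, use History instead:
	//
	//     history, err := env.History(ctx)
	return env.LastReply(ctx)
}
```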
## Environments and tools
Dagger modules are collections of Dagger Functions. When you give a Dagger module to the `LLM` core type, every Dagger Function in the module is turned into a tool that the LLM can call.

An "environment" is a common design pattern that many agents implement. It is a Dagger module that provides one or more Dagger Functions for tasks such as reading and writing files, running tests, and generally interacting with a `Directory`, `Container`, or other core type. Typically, the environment module is passed to the LLM, and the LLM uses the Dagger Functions available in it to complete the assigned tasks.
Consider the following Dagger Function:
- Go

```go
package main

import (
	"dagger/coding-agent/internal/dagger"
)

type CodingAgent struct{}

// Write a Go program
func (m *CodingAgent) GoProgram(
	// The programming assignment, e.g. "write me a curl clone"
	assignment string,
) *dagger.Container {
	result := dag.LLM().
		WithToyWorkspace(dag.ToyWorkspace()).
		WithPromptVar("assignment", assignment).
		WithPrompt(`
			You are an expert go programmer. You have access to a workspace.
			Use the default directory in the workspace.
			Do not stop until the code builds.
			Do not use the container.
			Complete the assignment: $assignment
			`).
		ToyWorkspace().
		Container()
	return result
}
```

- Python

```python
import dagger
from dagger import dag, function, object_type


@object_type
class CodingAgent:
    @function
    def go_program(self, assignment: str) -> dagger.Container:
        """Write a Go program"""
        result = (
            dag.llm()
            .with_toy_workspace(dag.toy_workspace())
            .with_prompt_var("assignment", assignment)
            .with_prompt(
                """
                You are an expert go programmer. You have access to a workspace.
                Use the default directory in the workspace.
                Do not stop until the code builds.
                Do not use the container.
                Complete the assignment: $assignment
                """
            )
            .toy_workspace()
            .container()
        )
        return result
```

- TypeScript

```typescript
import { dag, Container, object, func } from "@dagger.io/dagger"

@object()
export class CodingAgent {
  /**
   * Write a Go program
   */
  @func()
  goProgram(
    /**
     * The programming assignment, e.g. "write me a curl clone"
     */
    assignment: string,
  ): Container {
    const result = dag
      .llm()
      .withToyWorkspace(dag.toyWorkspace())
      .withPromptVar("assignment", assignment)
      .withPrompt(
        `
        You are an expert go programmer. You have access to a workspace.
        Use the default directory in the workspace.
        Do not stop until the code builds.
        Do not use the container.
        Complete the assignment: $assignment
        `,
      )
      .toyWorkspace()
      .container()
    return result
  }
}
```
Here, `ToyWorkspace` is the environment module. It contains a number of Dagger Functions: `Read()`, `Write()`, and `Build()`. When an instance of this module is attached to the `LLM` core type, the LLM can call any of these Dagger Functions to change the state of the environment and complete the assigned task.
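For illustration, here is a minimal Go sketch of what a workspace module in this style might look like; the actual `ToyWorkspace` implementation may differ:

```go
package main

import (
	"context"

	"dagger/toy-workspace/internal/dagger"
)

// ToyWorkspace is a tiny environment module: each method below becomes a
// tool the LLM can call (a hypothetical sketch, not the real module).
type ToyWorkspace struct {
	// The workspace's container state
	Container *dagger.Container
}

func New() ToyWorkspace {
	return ToyWorkspace{
		Container: dag.Container().From("golang").WithWorkdir("/app"),
	}
}

// Read a file from the workspace
func (w ToyWorkspace) Read(ctx context.Context, path string) (string, error) {
	return w.Container.File(path).Contents(ctx)
}

// Write a file to the workspace
func (w ToyWorkspace) Write(path, content string) ToyWorkspace {
	w.Container = w.Container.WithNewFile(path, content)
	return w
}

// Build the code in the workspace
func (w ToyWorkspace) Build(ctx context.Context) error {
	_, err := w.Container.WithExec([]string{"go", "build", "./..."}).Sync(ctx)
	return err
}
```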
Dagger's core types, like `Directory` and `Container`, have extensive APIs, and an LLM can easily get lost or produce inconsistent results when working with these objects directly. To avoid this, we recommend using a Dagger module with limited functionality as the environment and confining the LLM to that module. The LLM then has fewer degrees of freedom when interacting with its environment, which in turn helps produce more consistent results.