LLM Integration

Dagger's LLM core type includes API methods to attach objects to a Large Language Model (LLM), send prompts, and receive responses.

Prompts

Use the LLM.withPrompt() API method to append prompts to the LLM context:

dagger <<EOF
llm |
with-prompt "What tools do you have available?"
EOF

For longer or more complex prompts, use the LLM.withPromptFile() API method to read the prompt from a text file:

dagger <<EOF
llm |
with-prompt-file ./prompt.txt
EOF

Responses and Variables

Use the LLM.lastReply() API method to obtain the last reply from the LLM.
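For example, the following Dagger Shell pipeline sends a prompt and returns only the model's final reply (the exact reply text will vary by model, so no specific output is shown):

```shell
dagger <<EOF
llm |
with-prompt "Reply with the single word: pong" |
last-reply
EOF
```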

Dagger supports the use of variables in prompts. This allows you to interpolate results of other operations into an LLM prompt:

dagger <<EOF
source=\$(container |
from alpine |
with-directory /src https://github.com/dagger/dagger |
directory /src)
environment=\$(env |
with-directory-input 'source' \$source 'a directory with source code')
llm |
with-env \$environment |
with-prompt "The directory also has some tools available." |
with-prompt "Use the tools in the directory to read the first paragraph of the README.md file in the directory." |
with-prompt "Reply with only the selected text." |
last-reply
EOF
Tip: To get the complete message history, use the LLM.History() API method.
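As a sketch, the full history can be retrieved in Dagger Shell by ending the pipeline with history instead of last-reply:

```shell
dagger <<EOF
llm |
with-prompt "What tools do you have available?" |
history
EOF
```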

Environments

Dagger modules are collections of Dagger Functions. When you give a Dagger module to the LLM core type, every Dagger Function is turned into a tool that the LLM can call.

Environments configure any number of inputs and outputs for the LLM. For example, an environment might provide a Directory, a Container, a custom module, and a string variable. The LLM can use the scalars and the functions of these objects to complete the assigned task.

The documentation for the modules is provided to the LLM, so make sure to write helpful documentation in your Dagger Functions. The LLM should be able to figure out how to use the tools on its own. Avoid describing the objects at length in your prompts, since that would be redundant with this automatic documentation.

Consider the following Dagger Function:

package main

import (
	"dagger/coding-agent/internal/dagger"
)

type CodingAgent struct{}

// Write a Go program
func (m *CodingAgent) GoProgram(
	// The programming assignment, e.g. "write me a curl clone"
	assignment string,
) *dagger.Container {
	environment := dag.Env().
		WithStringInput("assignment", assignment, "the assignment to complete").
		WithContainerInput("builder",
			dag.Container().From("golang").WithWorkdir("/app"),
			"a container to use for building Go code").
		WithContainerOutput("completed", "the completed assignment in the Golang container")

	work := dag.LLM().
		WithEnv(environment).
		WithPrompt(`
You are an expert Go programmer with an assignment to create a Go program
Create files in the default directory in $builder
Always build the code to make sure it is valid
Do not stop until your assignment is completed and the code builds
Your assignment is: $assignment
`)

	return work.
		Env().
		Output("completed").
		AsContainer()
}

Here, a Container instance is attached as an input to the Env environment. The Container is a core type with a number of functions useful for a coding environment, such as WithNewFile(), File().Contents(), and WithExec(). When this environment is attached to an LLM, the LLM can call any of these Dagger Functions to change the state of the Container and complete its assigned task.

In the Env, a Container instance called completed is specified as a desired output of the LLM. This means that the LLM should return the Container instance as a result of completing its task. The resulting Container object is then available for further processing or for use in other Dagger Functions.
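As an illustrative sketch (assuming the module above is installed under the name coding-agent, so that GoProgram is exposed as go-program), the returned Container can be used directly from Dagger Shell, for example to open an interactive terminal in it:

```shell
dagger <<EOF
coding-agent |
go-program "write a Go program that prints the current time" |
terminal
EOF
```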