
aigne run


The aigne run command is your primary tool for executing an AIGNE Agent. It lets you run an Agent from your local filesystem or a remote URL, and it can also start an interactive chat session. The command provides extensive options for customizing execution, from selecting AI models and their parameters to handling input and output.

Before diving in, make sure you have created a project. If not, you can start with the aigne create command.
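
For example, a minimal first run might look like the sketch below. It assumes aigne create scaffolds a project into a directory you name (here my-agent, a placeholder); adjust the name and prompt to your own project.

# Scaffold a new project (the directory name is just an example)
aigne create my-agent

# Run the first Agent found in that project
aigne run --path ./my-agent --input "Hello"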

How It Works

When you execute aigne run, the CLI performs a series of steps to set up and execute your Agent. The process is designed to be flexible and transparent.

1. You invoke the CLI, for example aigne run --path ./my-agent --input "Hello".
2. The CLI parses the command-line options. If the path is a URL, it downloads and caches the project first.
3. The CLI initializes AIGNE Core with the specified model and waits until the AIGNE instance is ready.
4. AIGNE Core finds the entry Agent and returns the Agent object to the CLI.
5. The CLI parses the input ("Hello") and invokes the Agent with it.
6. AIGNE Core sends the request to the AI model and receives its response.
7. The final result is returned to the CLI, which displays the formatted output in your terminal.
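
If you want to watch these steps in your own terminal, the logging options described below can help. A minimal sketch, assuming a local project at ./my-agent (the path is only an example):

# Increase log detail to observe option parsing, model initialization, and Agent invocation
aigne run --path ./my-agent --verbose --input "Hello"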

Usage

aigne run [path_or_url] [options]

Options

The run command offers a wide range of options to control the execution environment, model behavior, and data flow.

General Options

| Option | Description |
| --- | --- |
| --path, --url <path_or_url> | Specifies the path to the local agent directory or a URL to a remote AIGNE project. Defaults to the current directory (.). |
| --entry-agent <name> | The name of the agent to run. If not specified, the CLI runs the first agent it finds in the project. |
| --chat | Starts an interactive chat loop in the terminal instead of a single execution. |
| --cache-dir <dir> | Specifies a directory to download and cache remote projects. Defaults to ~/.aigne/. |
| --verbose | Enables verbose logging for more detailed output. |
| --log-level <level> | Sets the logging level. Available levels are: DEBUG, INFO, WARN, ERROR. Defaults to INFO. |
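
These options can be combined in a single invocation. As a sketch, the command below runs a remote project, caches it in a custom directory, and raises the log level (the URL and cache directory are placeholders):

# Cache the remote project locally and log at DEBUG level for troubleshooting
aigne run --url https://example.com/aigne-project.tgz --cache-dir ./.aigne-cache --log-level DEBUG --input "Hello"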

Model Configuration

These options allow you to fine-tune the behavior of the underlying AI model.

| Option | Description |
| --- | --- |
| --model <provider[:model]> | Selects the AI model to use. Format is 'provider[:model]', for example 'openai' or 'openai:gpt-4o-mini'. Available providers include OpenAI, Anthropic, Gemini, and more. |
| --temperature <0.0-2.0> | Controls the randomness of the output. Higher values (e.g., 0.8) make the output more random, while lower values (e.g., 0.2) make it more deterministic. |
| --top-p <0.0-1.0> | Nucleus sampling. The model considers only the tokens with the highest probability mass that add up to this value. |
| --presence-penalty <-2.0-2.0> | Penalizes new tokens based on whether they have appeared in the text so far, encouraging the model to talk about new topics. |
| --frequency-penalty <-2.0-2.0> | Penalizes new tokens based on their existing frequency in the text, decreasing the model's likelihood to repeat the same line verbatim. |
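
As an illustration of how these flags compose, the command below pins a specific model and tunes sampling toward deterministic, less repetitive output; the model name is only an example, so substitute any provider and model available to you:

# Low temperature for deterministic output, with a mild frequency penalty to reduce repetition
aigne run --model openai:gpt-4o-mini --temperature 0.2 --top-p 0.9 --frequency-penalty 0.5 --input "List three facts about the AIGNE Framework."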

Input and Output

| Option | Description |
| --- | --- |
| -i, --input <input...> | Provides input to the agent. You can specify multiple inputs. To read input from a file, use the format @<file_path>. |
| --format <format> | Specifies the format of the input. Can be text, json, or yaml. Defaults to text. |
| -o, --output <file> | Writes the agent's output to the specified file instead of printing to the console. |
| --output-key <key> | Specifies which key from the result object to save to the output file. Defaults to message. |
| --force | If an output file is specified and it already exists, this option will overwrite it. It also creates the directory path if it doesn't exist. |
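
Put together, these options support a file-to-file workflow. The sketch below reads JSON input from a file and writes the result into a directory that may not exist yet (the file names and output path are hypothetical):

# Read structured input from data.json and write the result to output/summary.txt,
# creating the directory and overwriting any existing file
aigne run --format json --input @data.json -o output/summary.txt --force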

Examples

Run an Agent in the Current Directory

This is the simplest use case. If your terminal's current working directory is an AIGNE project, you can just run:

# The CLI will find and run the first available Agent
aigne run --input "Write a short poem about AI."

Run a Specific Agent from a Path

If you have multiple agents, you can specify which one to run using the --entry-agent option.

# Run the 'summarizer' agent located in the 'my-agents' directory
aigne run --path ./my-agents --entry-agent summarizer --input "Summarize the provided text..."

Run an Agent from a Remote URL

AIGNE CLI can directly execute projects hosted in a Git repository or available as a downloadable tarball.

# Run an agent from a GitHub repository
aigne run https://github.com/AIGNE-io/aigne-framework/tree/main/examples/project

Start an Interactive Chat Session

For conversational agents, the --chat mode provides an interactive loop where you can have a back-and-forth conversation.

# Start a chat session with the 'chat' agent
aigne run --entry-agent chat --chat

The CLI will then present a prompt for you to start the conversation:

> 💬

Use a Different AI Model

You can easily switch the AI model provider or a specific model version.

# Run the agent using Anthropic's Claude 3 Sonnet model
aigne run --model anthropic:claude-3-sonnet-20240229 --input "What is AIGNE Framework?"

Provide Complex Input from a File

When your agent expects structured data (JSON or YAML), providing it via a file is often easier.

data.json

{
  "article_text": "AIGNE is a framework for building AI agents...",
  "summary_length": "one paragraph"
}

Command

# Pass the content of data.json as input
aigne run --entry-agent summarizer --format json --input @data.json
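
The same pattern works for YAML, since --format also accepts yaml. The data.yaml below is a hypothetical equivalent of the JSON example above:

data.yaml

article_text: AIGNE is a framework for building AI agents...
summary_length: one paragraph

Command

# Pass the content of data.yaml as input
aigne run --entry-agent summarizer --format yaml --input @data.yaml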

Save the Output to a File

To save the result for later use, direct it to a file with the --output flag.

# Run the agent and save its output to 'summary.txt'
aigne run --entry-agent summarizer --input "..." -o summary.txt
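
If your agent returns a structured result, you can also choose which key is written and allow overwriting. The key name summary below is hypothetical and depends on your agent's output schema:

# Save only the 'summary' key of the result, overwriting summary.txt if it already exists
aigne run --entry-agent summarizer --input "..." -o summary.txt --output-key summary --force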


With the aigne run command, you have a flexible way to test and interact with your agents. Once you've verified your Agent's behavior, you can proceed to write automated checks. Learn how in the aigne test section.