Tools

A tool is an abstraction around a function that makes it easy for a language model to interact with it. Specifically, a tool's interface has a single text input and a single text output. It includes a name and a description that communicate to the language model what the tool does and when to use it.

interface Tool {
  call(arg: string): Promise<string>;

  name: string;

  description: string;
}
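
For example, calling a tool looks like the following. This is a minimal sketch that assumes the built-in Calculator tool (importable from "langchain/tools/calculator" in this version of the library); any tool with the interface above is called the same way.

import { Calculator } from "langchain/tools/calculator";

export const run = async () => {
  const calculator = new Calculator();

  // A tool takes a single text input and returns a single text output.
  const result = await calculator.call("2 + 2");

  console.log(result); // e.g. "4"
};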

Advanced

To implement a custom tool you can subclass the Tool class and implement the _call method. The _call method is called with the input text and should return the output text. The Tool superclass implements the call method, which takes care of calling the right CallbackManager methods before and after calling your _call method. When an error occurs, the _call method should, when possible, return a string representing the error rather than throwing it. This allows the error to be passed to the LLM, which can then decide how to handle it. If an error is thrown, execution of the agent will stop.

abstract class Tool {
  abstract _call(arg: string): Promise<string>;

  abstract name: string;

  abstract description: string;
}
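
For example, a custom tool might look like the following. This is only a minimal sketch: the tool's name, description, and behavior are made up for illustration. Note how _call returns an error string rather than throwing when the input is invalid.

import { Tool } from "langchain/tools";

class RandomNumberTool extends Tool {
  name = "random-number";

  description =
    "call this to get a random integer between 0 and the input. Input should be a positive integer.";

  async _call(arg: string): Promise<string> {
    const max = parseInt(arg, 10);
    if (Number.isNaN(max) || max <= 0) {
      // Return an error string instead of throwing, so the LLM can decide how to recover.
      return "Error: input must be a positive integer.";
    }
    return Math.floor(Math.random() * max).toString();
  }
}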

Another option is to create a tool on the fly using a DynamicTool. This is useful if you don't need the overhead of subclassing Tool. The DynamicTool class takes as input a name, a description, and a function. Importantly, the name and the description will be used by the language model to determine when to call this function and with what input, so make sure to set these to values the language model can reason about. The function provided is what will actually be called. When an error occurs, the function should, when possible, return a string representing the error rather than throwing it. This allows the error to be passed to the LLM, which can then decide how to handle it. If an error is thrown, execution of the agent will stop.

See below for an example of defining and using a DynamicTool.

import { OpenAI } from "langchain/llms/openai";
import { initializeAgentExecutor } from "langchain/agents";
import { DynamicTool } from "langchain/tools";

export const run = async () => {
  const model = new OpenAI({ temperature: 0 });

  // The name and description are what the language model uses to decide
  // when to call each tool.
  const tools = [
    new DynamicTool({
      name: "FOO",
      description:
        "call this to get the value of foo. input should be an empty string.",
      func: async () => "baz",
    }),
    new DynamicTool({
      name: "BAR",
      description:
        "call this to get the value of bar. input should be an empty string.",
      func: async () => "baz1",
    }),
  ];

  const executor = await initializeAgentExecutor(
    tools,
    model,
    "zero-shot-react-description"
  );

  console.log("Loaded agent.");

  const input = `What is the value of foo?`;

  console.log(`Executing with input "${input}"...`);

  const result = await executor.call({ input });

  console.log(`Got output ${result.output}`);
};
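
If the function wrapped by a DynamicTool can fail, the same error-handling advice applies: catch the error and return a string describing it. Below is a minimal sketch; the tool name and behavior are illustrative.

import { DynamicTool } from "langchain/tools";

const greetingTool = new DynamicTool({
  name: "GREETING",
  description:
    "call this to get a greeting for a name. Input should be a name.",
  func: async (input: string) => {
    try {
      if (!input) {
        throw new Error("no name provided");
      }
      return `Hello, ${input}!`;
    } catch (e) {
      // Returning the error as a string lets the LLM decide how to handle it,
      // instead of stopping the agent.
      return `Error: ${(e as Error).message}`;
    }
  },
});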