Getting Started: Chat Models

LangChain provides a standard interface for using chat models. Chat models are a variation on language models. While chat models use language models under the hood, the interface they expose is a bit different. Rather than exposing a "text in, text out" API, they expose an interface where "chat messages" are the inputs and outputs.

Chat Messages

A ChatMessage is the modular unit of information for a chat model. At the moment, it consists of a "text" field, which holds the content of the chat message.

There are currently four different classes of ChatMessage supported by LangChain:

  • HumanChatMessage: A chat message sent as if from the human's point of view.
  • AIChatMessage: A chat message sent from the point of view of the AI system with which the human is conversing.
  • SystemChatMessage: A chat message that gives the AI system some information about the conversation. This is usually sent at the beginning of a conversation.
  • ChatMessage: A generic chat message, with not only a "text" field but also an arbitrary "role" field.
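
To make these message shapes concrete, here is a minimal sketch in plain TypeScript of how the four classes relate conceptually. These are hypothetical definitions for illustration only, not LangChain's actual implementation:

```typescript
// Hypothetical sketch of the chat-message shapes, for illustration only —
// not LangChain's actual implementation.
abstract class BaseChatMessage {
  constructor(public text: string) {}
}

// Sent as if from the human's point of view.
class HumanChatMessage extends BaseChatMessage {}

// Sent from the AI system's point of view.
class AIChatMessage extends BaseChatMessage {}

// Gives the AI information about the conversation; usually sent first.
class SystemChatMessage extends BaseChatMessage {}

// Generic message with an arbitrary "role" field in addition to "text".
class ChatMessage extends BaseChatMessage {
  constructor(text: string, public role: string) {
    super(text);
  }
}

// A conversation is simply an ordered list of messages.
const conversation: BaseChatMessage[] = [
  new SystemChatMessage("You are a helpful naming assistant."),
  new HumanChatMessage("Name a company that makes colorful socks."),
  new AIChatMessage("Rainbow Sox Co."),
];
console.log(conversation.map((m) => m.text));
```

The key point is that every message carries its content in "text", while the class (or the "role" field on the generic ChatMessage) identifies who the message is from.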

Note: Currently, the only chat-based model we support is ChatOpenAI (with gpt-4 and gpt-3.5-turbo), but we anticipate adding more in the future.

To get started, simply use the call method of a chat model implementation, passing in a list of messages. In this example, we are using the ChatOpenAI implementation:

import { ChatOpenAI } from "langchain/chat_models/openai";
import { HumanChatMessage } from "langchain/schema";

export const run = async () => {
  const chat = new ChatOpenAI();
  // Pass in a list of messages to `call` to start a conversation.
  // In this simple example, we only pass in one message.
  const response = await chat.call([
    new HumanChatMessage(
      "What is a good name for a company that makes colorful socks?"
    ),
  ]);
  console.log(response);
  // AIChatMessage { text: '\n\nRainbow Sox Co.' }
};

Dig deeper