Class: OpenAI
llms/openai.OpenAI
Wrapper around OpenAI large language models.
To use, you should have the openai package installed and the OPENAI_API_KEY environment variable set.
Remarks
Any parameters that are valid to be passed to openai.createCompletion can be passed through modelKwargs, even if not explicitly available on this class.
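For example, a completion parameter that is not exposed as a field on this class can be forwarded through modelKwargs. A minimal sketch, assuming the OPENAI_API_KEY environment variable is set (the user parameter is a standard OpenAI completion option used here purely as an illustration):

```typescript
import { OpenAI } from "langchain/llms/openai";

// Extra createCompletion parameters are forwarded via modelKwargs.
// "user" is not surfaced directly on this class; it is shown only as an
// example of such a pass-through option.
const model = new OpenAI({
  temperature: 0.2,
  modelKwargs: { user: "example-user-id" },
});

const joke = await model.call("Tell me a joke.");
```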
Hierarchy
BaseLLM
↳ OpenAI
Implements
OpenAIInput
Constructors
constructor
• new OpenAI(fields?, configuration?)
Parameters
Name | Type |
---|---|
fields? | Partial<OpenAIInput> & BaseLLMParams & { openAIApiKey?: string } |
configuration? | ConfigurationParameters |
Overrides
Defined in
langchain/src/llms/openai.ts:133
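A usage sketch: the first argument sets the model fields (openAIApiKey overrides the OPENAI_API_KEY environment variable), and the second argument is forwarded to the underlying OpenAI SDK configuration; the basePath value below is purely illustrative.

```typescript
import { OpenAI } from "langchain/llms/openai";

// Model fields in the first argument, OpenAI SDK ConfigurationParameters
// (for example basePath, e.g. when routing through a proxy) in the second.
const model = new OpenAI(
  { modelName: "text-davinci-003", temperature: 0, maxTokens: 256 },
  { basePath: "https://api.openai.com/v1" }
);
```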
Properties
batchSize
• batchSize: number = 20
Implementation of
OpenAIInput.batchSize
Defined in
langchain/src/llms/openai.ts:121
bestOf
• bestOf: number = 1
Implementation of
OpenAIInput.bestOf
Defined in
langchain/src/llms/openai.ts:113
cache
• Optional cache: BaseCache<Generation[]>
Inherited from
Defined in
langchain/src/llms/base.ts:31
callbackManager
• callbackManager: CallbackManager
Inherited from
Defined in
langchain/src/base_language/index.ts:34
caller
• Protected caller: AsyncCaller
The async caller should be used by subclasses to make any async calls, which will thus benefit from the concurrency and retry logic.
Inherited from
Defined in
langchain/src/base_language/index.ts:40
frequencyPenalty
• frequencyPenalty: number = 0
Implementation of
OpenAIInput.frequencyPenalty
Defined in
langchain/src/llms/openai.ts:107
logitBias
• Optional logitBias: Record<string, number>
Implementation of
OpenAIInput.logitBias
Defined in
langchain/src/llms/openai.ts:115
maxTokens
• maxTokens: number = 256
Implementation of
OpenAIInput.maxTokens
Defined in
langchain/src/llms/openai.ts:103
modelKwargs
• Optional modelKwargs: Kwargs
Implementation of
OpenAIInput.modelKwargs
Defined in
langchain/src/llms/openai.ts:119
modelName
• modelName: string = "text-davinci-003"
Implementation of
OpenAIInput.modelName
Defined in
langchain/src/llms/openai.ts:117
n
• n: number = 1
Implementation of
OpenAIInput.n
Defined in
langchain/src/llms/openai.ts:111
name
• name: string
The name of the LLM class
Inherited from
Defined in
langchain/src/llms/base.ts:29
presencePenalty
• presencePenalty: number = 0
Implementation of
OpenAIInput.presencePenalty
Defined in
langchain/src/llms/openai.ts:109
stop
• Optional stop: string[]
Implementation of
OpenAIInput.stop
Defined in
langchain/src/llms/openai.ts:125
streaming
• streaming: boolean = false
Implementation of
OpenAIInput.streaming
Defined in
langchain/src/llms/openai.ts:127
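When streaming is true, tokens are delivered incrementally through the callback manager rather than only in the final result. A minimal sketch, assuming CallbackManager.fromHandlers and the handleLLMNewToken handler are available from langchain/callbacks in this version:

```typescript
import { OpenAI } from "langchain/llms/openai";
import { CallbackManager } from "langchain/callbacks";

// Stream tokens to stdout as they arrive; handleLLMNewToken is assumed to be
// called once per generated token when streaming is enabled.
const model = new OpenAI({
  streaming: true,
  callbackManager: CallbackManager.fromHandlers({
    async handleLLMNewToken(token: string) {
      process.stdout.write(token);
    },
  }),
});

const response = await model.call("Write a haiku about rivers.");
```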
temperature
• temperature: number = 0.7
Implementation of
OpenAIInput.temperature
Defined in
langchain/src/llms/openai.ts:101
timeout
• Optional timeout: number
Implementation of
OpenAIInput.timeout
Defined in
langchain/src/llms/openai.ts:123
topP
• topP: number = 1
Implementation of
OpenAIInput.topP
Defined in
langchain/src/llms/openai.ts:105
verbose
• verbose: boolean
Whether to print out response text.
Inherited from
Defined in
langchain/src/base_language/index.ts:32
Methods
_generate
▸ _generate(prompts, stop?): Promise<LLMResult>
Call out to OpenAI's endpoint with k unique prompts.
Example
import { OpenAI } from "langchain/llms/openai";
const openai = new OpenAI();
const response = await openai.generate(["Tell me a joke."]);
Parameters
Name | Type | Description |
---|---|---|
prompts | string[] | The prompts to pass into the model. |
stop? | string[] | Optional list of stop words to use when generating. |
Returns
Promise<LLMResult>
The full LLM output.
Overrides
Defined in
langchain/src/llms/openai.ts:238
_identifyingParams
▸ _identifyingParams(): Object
Get the identifying parameters of the LLM.
Returns
Object
Name | Type |
---|---|
model_name | string |
Overrides
Defined in
langchain/src/llms/openai.ts:208
_llmType
▸ _llmType(): string
Return the string type key uniquely identifying this class of LLM.
Returns
string
Overrides
Defined in
langchain/src/llms/openai.ts:383
_modelType
▸ _modelType(): string
Returns
string
Inherited from
Defined in
langchain/src/llms/base.ts:160
call
▸ call(prompt, stop?): Promise<string>
Convenience wrapper for generate that takes in a single string prompt and returns a single string output.
Parameters
Name | Type |
---|---|
prompt | string |
stop? | string[] |
Returns
Promise<string>
Inherited from
Defined in
langchain/src/llms/base.ts:131
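A usage sketch, passing an optional stop list (the stop word shown is illustrative):

```typescript
import { OpenAI } from "langchain/llms/openai";

const model = new OpenAI({ temperature: 0 });

// Single prompt in, single completion string out.
const text = await model.call("Q: What is the capital of France?\nA:", ["\n"]);
console.log(text);
```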
generate
▸ generate(prompts, stop?): Promise<LLMResult>
Run the LLM on the given prompts, handling caching.
Parameters
Name | Type |
---|---|
prompts | string[] |
stop? | string[] |
Returns
Promise<LLMResult>
Inherited from
Defined in
langchain/src/llms/base.ts:84
generatePrompt
▸ generatePrompt(promptValues, stop?): Promise<LLMResult>
Parameters
Name | Type |
---|---|
promptValues | BasePromptValue[] |
stop? | string[] |
Returns
Promise<LLMResult>
Inherited from
Defined in
langchain/src/llms/base.ts:44
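A sketch of passing a formatted prompt value instead of a raw string; PromptTemplate and its formatPromptValue method are assumed to be available from langchain/prompts in this version:

```typescript
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";

const model = new OpenAI();
const prompt = PromptTemplate.fromTemplate("Translate to French: {text}");

// generatePrompt takes BasePromptValue[] rather than raw strings.
const promptValue = await prompt.formatPromptValue({ text: "Good morning" });
const result = await model.generatePrompt([promptValue]);
console.log(result.generations[0][0].text);
```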
getNumTokens
▸ getNumTokens(text): Promise<number>
Parameters
Name | Type |
---|---|
text | string |
Returns
Promise<number>
Inherited from
Defined in
langchain/src/base_language/index.ts:62
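A quick sketch of counting tokens before a call, for example to stay within maxTokens:

```typescript
import { OpenAI } from "langchain/llms/openai";

const model = new OpenAI();

// Counts tokens in the text for the configured model's encoding.
const numTokens = await model.getNumTokens("How many tokens am I?");
console.log(numTokens);
```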
identifyingParams
▸ identifyingParams(): Object
Get the identifying parameters for the model
Returns
Object
Name | Type |
---|---|
model_name | string |
Defined in
langchain/src/llms/openai.ts:219
invocationParams
▸ invocationParams(): CreateCompletionRequest & Kwargs
Get the parameters used to invoke the model
Returns
CreateCompletionRequest & Kwargs
Defined in
langchain/src/llms/openai.ts:191
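A sketch of inspecting the request parameters an instance would send; the exact keys shown in the comment are illustrative:

```typescript
import { OpenAI } from "langchain/llms/openai";

const model = new OpenAI({ temperature: 0, maxTokens: 64 });

// The createCompletion request body this instance would send, merged with
// any modelKwargs.
console.log(model.invocationParams());
// e.g. { model: "text-davinci-003", temperature: 0, max_tokens: 64, ... }
```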
serialize
▸ serialize(): SerializedLLM
Return a JSON-like object representing this LLM.
Returns
SerializedLLM
Inherited from
Defined in
langchain/src/llms/base.ts:152
deserialize
▸ Static deserialize(data): Promise<BaseLLM>
Load an LLM from a JSON-like object describing it.
Parameters
Name | Type |
---|---|
data | SerializedLLM |
Returns
Promise<BaseLLM>
Inherited from
Defined in
langchain/src/llms/base.ts:167
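A sketch of round-tripping a model's configuration through serialize and deserialize; the loading behavior is assumed to resolve the class from the serialized _type:

```typescript
import { OpenAI } from "langchain/llms/openai";

const model = new OpenAI({ temperature: 0.5 });

// serialize() captures the configuration as a JSON-like object;
// deserialize() reconstructs an LLM (typed as BaseLLM) from it.
const serialized = model.serialize();
const restored = await OpenAI.deserialize(serialized);
```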