Class: PromptLayerOpenAI
llms/openai.PromptLayerOpenAI
PromptLayer wrapper for OpenAI
Hierarchy
OpenAI
↳ PromptLayerOpenAI
Constructors
constructor
• new PromptLayerOpenAI(fields?)
Parameters
| Name | Type |
|---|---|
| fields? | Partial<OpenAIInput> & BaseLLMParams & { openAIApiKey?: string } & { plTags?: string[]; promptLayerApiKey?: string } |
Overrides
OpenAI.constructor
Defined in
langchain/src/llms/openai.ts:397
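A minimal construction sketch. The field names match the constructor signature above; the key values are placeholders, and in this version of LangChain both keys can typically also be supplied via the OPENAI_API_KEY and PROMPTLAYER_API_KEY environment variables:

```typescript
import { PromptLayerOpenAI } from "langchain/llms/openai";

// Placeholder keys; omit them to fall back to the
// OPENAI_API_KEY / PROMPTLAYER_API_KEY environment variables.
const llm = new PromptLayerOpenAI({
  openAIApiKey: "sk-...",
  promptLayerApiKey: "pl-...",
  plTags: ["docs-example"],
  temperature: 0.7,
  modelName: "text-davinci-003",
});
```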
Properties
batchSize
• batchSize: number = 20
Inherited from
Defined in
langchain/src/llms/openai.ts:121
bestOf
• bestOf: number = 1
Inherited from
Defined in
langchain/src/llms/openai.ts:113
cache
• Optional cache: BaseCache<Generation[]>
Inherited from
Defined in
langchain/src/llms/base.ts:31
callbackManager
• callbackManager: CallbackManager
Inherited from
Defined in
langchain/src/base_language/index.ts:34
caller
• Protected caller: AsyncCaller
The async caller should be used by subclasses to make any async calls, so that those calls benefit from the concurrency and retry logic.
Inherited from
Defined in
langchain/src/base_language/index.ts:40
frequencyPenalty
• frequencyPenalty: number = 0
Inherited from
Defined in
langchain/src/llms/openai.ts:107
logitBias
• Optional logitBias: Record<string, number>
Inherited from
Defined in
langchain/src/llms/openai.ts:115
maxTokens
• maxTokens: number = 256
Inherited from
Defined in
langchain/src/llms/openai.ts:103
modelKwargs
• Optional modelKwargs: Kwargs
Inherited from
Defined in
langchain/src/llms/openai.ts:119
modelName
• modelName: string = "text-davinci-003"
Inherited from
Defined in
langchain/src/llms/openai.ts:117
n
• n: number = 1
Inherited from
Defined in
langchain/src/llms/openai.ts:111
name
• name: string
The name of the LLM class.
Inherited from
Defined in
langchain/src/llms/base.ts:29
plTags
• Optional plTags: string[]
Defined in
langchain/src/llms/openai.ts:395
presencePenalty
• presencePenalty: number = 0
Inherited from
Defined in
langchain/src/llms/openai.ts:109
promptLayerApiKey
• Optional promptLayerApiKey: string
Defined in
langchain/src/llms/openai.ts:393
stop
• Optional stop: string[]
Inherited from
Defined in
langchain/src/llms/openai.ts:125
streaming
• streaming: boolean = false
Inherited from
Defined in
langchain/src/llms/openai.ts:127
temperature
• temperature: number = 0.7
Inherited from
Defined in
langchain/src/llms/openai.ts:101
timeout
• Optional timeout: number
Inherited from
Defined in
langchain/src/llms/openai.ts:123
topP
• topP: number = 1
Inherited from
Defined in
langchain/src/llms/openai.ts:105
verbose
• verbose: boolean
Whether to print out response text.
Inherited from
Defined in
langchain/src/base_language/index.ts:32
Methods
_generate
▸ _generate(prompts, stop?): Promise<LLMResult>
Call out to OpenAI's endpoint with k unique prompts.
Example
```typescript
import { OpenAI } from "langchain/llms/openai";

const openai = new OpenAI();
const response = await openai.generate(["Tell me a joke."]);
```
Parameters
| Name | Type | Description |
|---|---|---|
| prompts | string[] | The prompts to pass into the model. |
| stop? | string[] | Optional list of stop words to use when generating. |
Returns
Promise<LLMResult>
The full LLM output.
Inherited from
Defined in
langchain/src/llms/openai.ts:238
_identifyingParams
▸ _identifyingParams(): Object
Get the identifying parameters of the LLM.
Returns
Object
| Name | Type |
|---|---|
| model_name | string |
Inherited from
Defined in
langchain/src/llms/openai.ts:208
_llmType
▸ _llmType(): string
Return the string type key uniquely identifying this class of LLM.
Returns
string
Inherited from
Defined in
langchain/src/llms/openai.ts:383
_modelType
▸ _modelType(): string
Returns
string
Inherited from
Defined in
langchain/src/llms/base.ts:160
call
▸ call(prompt, stop?): Promise<string>
Convenience wrapper for generate that takes in a single string prompt and returns a single string output.
Parameters
| Name | Type |
|---|---|
| prompt | string |
| stop? | string[] |
Returns
Promise<string>
Inherited from
Defined in
langchain/src/llms/base.ts:131
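A short sketch of the single-prompt path, assuming the `llm` instance from the constructor example above:

```typescript
// call() wraps generate() for a single string prompt and
// returns just the completion text.
const answer = await llm.call("Tell me a joke.", ["\n\n"]);
console.log(answer);
```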
completionWithRetry
▸ completionWithRetry(request, options?): Promise<CreateCompletionResponse>
Parameters
| Name | Type |
|---|---|
| request | CreateCompletionRequest |
| options? | StreamingAxiosConfiguration |
Returns
Promise<CreateCompletionResponse>
Overrides
OpenAI.completionWithRetry
Defined in
langchain/src/llms/openai.ts:418
generate
▸ generate(prompts, stop?): Promise<LLMResult>
Run the LLM on the given prompts, handling caching.
Parameters
| Name | Type |
|---|---|
| prompts | string[] |
| stop? | string[] |
Returns
Promise<LLMResult>
Inherited from
Defined in
langchain/src/llms/base.ts:84
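A sketch of batch generation, again assuming the `llm` instance from above. `LLMResult.generations` holds one array of generations per input prompt:

```typescript
const result = await llm.generate(
  ["Tell me a joke.", "Tell me a limerick."],
  ["\n\n"] // optional stop sequences
);

// One Generation[] per input prompt, in order.
for (const generations of result.generations) {
  console.log(generations[0].text);
}
```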
generatePrompt
▸ generatePrompt(promptValues, stop?): Promise<LLMResult>
Parameters
| Name | Type |
|---|---|
| promptValues | BasePromptValue[] |
| stop? | string[] |
Returns
Promise<LLMResult>
Inherited from
Defined in
langchain/src/llms/base.ts:44
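generatePrompt accepts formatted prompt values rather than raw strings. One way to produce a BasePromptValue is with PromptTemplate; the pairing below is illustrative, assuming the `llm` instance from above:

```typescript
import { PromptTemplate } from "langchain/prompts";

// Build a prompt value from a template, then pass it through.
const template = PromptTemplate.fromTemplate("Tell me a {adjective} joke.");
const promptValue = await template.formatPromptValue({ adjective: "short" });

const res = await llm.generatePrompt([promptValue]);
console.log(res.generations[0][0].text);
```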
getNumTokens
▸ getNumTokens(text): Promise<number>
Parameters
| Name | Type |
|---|---|
| text | string |
Returns
Promise<number>
Inherited from
Defined in
langchain/src/base_language/index.ts:62
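A quick sketch of token counting, e.g. to check a prompt against maxTokens before sending it (reusing the `llm` instance from above):

```typescript
const tokenCount = await llm.getNumTokens("How many tokens is this sentence?");
console.log(tokenCount);
```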
identifyingParams
▸ identifyingParams(): Object
Get the identifying parameters for the model.
Returns
Object
| Name | Type |
|---|---|
| model_name | string |
Inherited from
Defined in
langchain/src/llms/openai.ts:219
invocationParams
▸ invocationParams(): CreateCompletionRequest & Kwargs
Get the parameters used to invoke the model.
Returns
CreateCompletionRequest & Kwargs
Inherited from
Defined in
langchain/src/llms/openai.ts:191
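A sketch of inspecting the request parameters before a call, assuming the `llm` instance from above; the field names follow the OpenAI CreateCompletionRequest type:

```typescript
const params = llm.invocationParams();
// e.g. "text-davinci-003", 0.7, 256
console.log(params.model, params.temperature, params.max_tokens);
```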
serialize
▸ serialize(): SerializedLLM
Return a json-like object representing this LLM.
Returns
SerializedLLM
Inherited from
Defined in
langchain/src/llms/base.ts:152
deserialize
▸ Static deserialize(data): Promise<BaseLLM>
Load an LLM from a json-like object describing it.
Parameters
| Name | Type |
|---|---|
| data | SerializedLLM |
Returns
Promise<BaseLLM>
Inherited from
Defined in
langchain/src/llms/base.ts:167
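A sketch of a serialize/deserialize round trip, assuming the `llm` instance from above; deserialize is a static method inherited from the base LLM class:

```typescript
const serialized = llm.serialize();

// Reconstruct an equivalent LLM from the JSON-like description.
const restored = await PromptLayerOpenAI.deserialize(serialized);
```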