Class: BaseLLM
llms/base.BaseLLM
LLM Wrapper. Provides a call (and generate) function that takes in a prompt (or prompts) and returns a string.
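For orientation, a minimal sketch of both entry points, assuming the OpenAI subclass and an import path of langchain/llms/openai (entrypoint paths vary between langchain versions):

```typescript
import { OpenAI } from "langchain/llms/openai";

const llm = new OpenAI({ temperature: 0 });

// Single prompt in, single string out.
const text = await llm.call("Hello!");

// Multiple prompts in, one LLMResult out.
const result = await llm.generate(["Hello!", "Goodbye!"]);
```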
Hierarchy
BaseLanguageModel
↳ BaseLLM
Constructors
constructor
• new BaseLLM(«destructured»)
Parameters
| Name | Type |
|---|---|
| «destructured» | BaseLLMParams |
Overrides
BaseLanguageModel.constructor
Defined in
langchain/src/llms/base.ts:33
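BaseLLM is abstract, so the constructor runs via a subclass. A hedged sketch of common BaseLLMParams fields; the exact shape (especially cache, maxConcurrency, and maxRetries) is an assumption and may differ between versions:

```typescript
import { OpenAI } from "langchain/llms/openai";

const llm = new OpenAI({
  verbose: true,     // print out response text
  cache: true,       // enable response caching (see `cache` below)
  maxConcurrency: 2, // AsyncCaller: cap on parallel requests (assumed name)
  maxRetries: 3,     // AsyncCaller: retries with backoff (assumed name)
});
```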
Properties
cache
• Optional cache: BaseCache<Generation[]>
Defined in
langchain/src/llms/base.ts:31
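A sketch of how the cache interacts with generate, assuming cache: true selects a default in-memory BaseCache (an assumption; your version may also accept a custom BaseCache instance):

```typescript
import { OpenAI } from "langchain/llms/openai";

const cachedLlm = new OpenAI({ temperature: 0, cache: true });

// First call hits the model; the Generation[] result is stored in the cache.
await cachedLlm.call("What is 2 + 2?");

// Same prompt (and stop sequences) again: served from the cache instead.
await cachedLlm.call("What is 2 + 2?");
```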
callbackManager
• callbackManager: CallbackManager
Inherited from
BaseLanguageModel.callbackManager
Defined in
langchain/src/base_language/index.ts:34
caller
• Protected caller: AsyncCaller
Subclasses should use this async caller to make any async calls, so that those calls benefit from the configured concurrency and retry logic.
Inherited from
BaseLanguageModel.caller
Defined in
langchain/src/base_language/index.ts:40
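For illustration, a fragment of a hypothetical subclass's _generate that routes its HTTP request through this.caller; the endpoint and response shape are invented for the sketch:

```typescript
// Inside a hypothetical BaseLLM subclass (fragment, not a real provider):
async _generate(prompts: string[], stop?: string[]): Promise<LLMResult> {
  const generations = await Promise.all(
    prompts.map(async (prompt) => {
      // this.caller.call(fn) runs fn under the configured concurrency
      // limit and retries it with backoff on transient failures.
      const res = await this.caller.call(() =>
        fetch("https://example.com/v1/complete", {
          method: "POST",
          body: JSON.stringify({ prompt, stop }),
        })
      );
      const { completion } = await res.json();
      return [{ text: completion }];
    })
  );
  return { generations };
}
```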
name
• name: string
The name of the LLM class.
Defined in
langchain/src/llms/base.ts:29
verbose
• verbose: boolean
Whether to print out response text.
Inherited from
BaseLanguageModel.verbose
Defined in
langchain/src/base_language/index.ts:32
Methods
_generate
▸ Abstract _generate(prompts, stop?): Promise<LLMResult>
Run the LLM on the given prompts and input.
Parameters
| Name | Type |
|---|---|
| prompts | string[] |
| stop? | string[] |
Returns
Promise<LLMResult>
Defined in
langchain/src/llms/base.ts:57
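A minimal sketch of a concrete subclass: implementing _generate (plus _llmType) is all that is required, and call, generate, and caching are inherited from BaseLLM. Import paths are assumptions for this version:

```typescript
import { BaseLLM } from "langchain/llms/base";
import { LLMResult } from "langchain/schema";

class EchoLLM extends BaseLLM {
  _llmType(): string {
    return "echo";
  }

  async _generate(prompts: string[], _stop?: string[]): Promise<LLMResult> {
    // One Generation[] per input prompt, in the same order.
    return {
      generations: prompts.map((prompt) => [{ text: `echo: ${prompt}` }]),
    };
  }
}

// const llm = new EchoLLM({});
// await llm.call("hi"); // -> "echo: hi"
```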
_identifyingParams
▸ _identifyingParams(): Record<string, any>
Get the identifying parameters of the LLM.
Returns
Record<string, any>
Overrides
BaseLanguageModel._identifyingParams
Defined in
langchain/src/llms/base.ts:140
_llmType
▸ Abstract _llmType(): string
Return the string type key uniquely identifying this class of LLM.
Returns
string
Overrides
BaseLanguageModel._llmType
Defined in
langchain/src/llms/base.ts:147
_modelType
▸ _modelType(): string
Returns
string
Overrides
BaseLanguageModel._modelType
Defined in
langchain/src/llms/base.ts:160
call
▸ call(prompt, stop?): Promise<string>
Convenience wrapper for generate that takes in a single string prompt and returns a single string output.
Parameters
| Name | Type |
|---|---|
| prompt | string |
| stop? | string[] |
Returns
Promise<string>
Defined in
langchain/src/llms/base.ts:131
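Usage sketch, assuming the OpenAI subclass; the optional stop list truncates the completion at the first matching sequence:

```typescript
import { OpenAI } from "langchain/llms/openai";

const llm = new OpenAI({ temperature: 0 });

// Returns only the first generated line: generation stops at "\n".
const text = await llm.call("List three colors:", ["\n"]);
console.log(text);
```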
generate
▸ generate(prompts, stop?): Promise<LLMResult>
Run the LLM on the given prompts and input, handling caching.
Parameters
| Name | Type |
|---|---|
| prompts | string[] |
| stop? | string[] |
Returns
Promise<LLMResult>
Defined in
langchain/src/llms/base.ts:84
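A sketch of batch usage, continuing the llm instance from the previous sketch; LLMResult.generations is indexed first by prompt, then by completion:

```typescript
const result = await llm.generate(["Tell me a joke.", "Tell me a fact."]);

// generations[i][j] is the j-th completion for the i-th prompt.
result.generations.forEach((generationsForPrompt, i) => {
  console.log(`Prompt ${i}:`, generationsForPrompt[0].text);
});

// Provider-specific metadata (e.g. token usage), when available.
console.log(result.llmOutput);
```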
generatePrompt
▸ generatePrompt(promptValues, stop?): Promise<LLMResult>
Parameters
| Name | Type |
|---|---|
| promptValues | BasePromptValue[] |
| stop? | string[] |
Returns
Promise<LLMResult>
Overrides
BaseLanguageModel.generatePrompt
Defined in
langchain/src/llms/base.ts:44
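A sketch pairing generatePrompt with a PromptTemplate, whose formatPromptValue produces the BasePromptValue this method expects; each prompt value is converted to a plain string before delegating to generate:

```typescript
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";

const llm = new OpenAI({ temperature: 0 });
const template = PromptTemplate.fromTemplate("Translate to French: {text}");

// formatPromptValue yields the BasePromptValue that generatePrompt expects.
const promptValue = await template.formatPromptValue({ text: "Good morning" });
const result = await llm.generatePrompt([promptValue]);
console.log(result.generations[0][0].text);
```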
getNumTokens
▸ getNumTokens(text): Promise<number>
Parameters
| Name | Type |
|---|---|
| text | string |
Returns
Promise<number>
Inherited from
BaseLanguageModel.getNumTokens
Defined in
langchain/src/base_language/index.ts:62
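Usage sketch, assuming llm is any instantiated BaseLLM subclass; useful for budgeting prompts against a model's context window:

```typescript
const tokenCount = await llm.getNumTokens("How many tokens am I?");
console.log(`Prompt uses ~${tokenCount} tokens.`);
```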
serialize
▸ serialize(): SerializedLLM
Return a JSON-like object representing this LLM.
Returns
SerializedLLM
Overrides
BaseLanguageModel.serialize
Defined in
langchain/src/llms/base.ts:152
deserialize
▸ Static deserialize(data): Promise<BaseLLM>
Load an LLM from a JSON-like object describing it.
Parameters
| Name | Type |
|---|---|
| data | SerializedLLM |
Returns
Promise<BaseLLM>
Overrides
BaseLanguageModel.deserialize
Defined in
langchain/src/llms/base.ts:167
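A round-trip sketch tying serialize and deserialize together: serialize captures the identifying parameters along with the _llmType key, and the static deserialize uses that key to reconstruct the matching subclass. Import paths are assumptions for this version:

```typescript
import { BaseLLM } from "langchain/llms/base";
import { OpenAI } from "langchain/llms/openai";

const llm = new OpenAI({ temperature: 0 });

const data = llm.serialize();                     // SerializedLLM, JSON-like
const restored = await BaseLLM.deserialize(data); // rebuilds the subclass

console.log(restored._llmType()); // "openai" (assumed type key)
```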