Class: HuggingFaceInference
llms/hf.HuggingFaceInference
LLM class that provides a simpler interface to subclass than BaseLLM: subclasses need only implement the simpler _call method instead of _generate.
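The relationship between _call and _generate can be illustrated with a simplified sketch. The SimpleLLM and EchoLLM classes below are illustrative stand-ins, not LangChain's actual base class: the point is that _generate is written once in terms of _call, so subclasses only provide _call.

```typescript
// Simplified stand-in for the LLM base class: _generate is implemented
// once in terms of the much simpler _call, so subclasses only write _call.
abstract class SimpleLLM {
  abstract _call(prompt: string, stop?: string[]): Promise<string>;

  // Runs _call for each prompt and wraps the results in a generations array.
  async _generate(prompts: string[], stop?: string[]) {
    const generations: { text: string }[][] = [];
    for (const prompt of prompts) {
      const text = await this._call(prompt, stop);
      generations.push([{ text }]);
    }
    return { generations };
  }
}

// A toy subclass: only _call needs to be provided.
class EchoLLM extends SimpleLLM {
  async _call(prompt: string): Promise<string> {
    return `echo: ${prompt}`;
  }
}
```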
Hierarchy
↳ HuggingFaceInference
Implements
HFInput
Constructors
constructor
• new HuggingFaceInference(fields?)
Parameters
Name | Type |
---|---|
fields? | Partial<HFInput> & BaseLLMParams |
Overrides
Defined in
langchain/src/llms/hf.ts:43
Properties
apiKey
• apiKey: undefined | string = undefined
Implementation of
HFInput.apiKey
Defined in
langchain/src/llms/hf.ts:41
cache
• Optional cache: BaseCache<Generation[]>
Inherited from
Defined in
langchain/src/llms/base.ts:31
callbackManager
• callbackManager: CallbackManager
Inherited from
Defined in
langchain/src/base_language/index.ts:34
caller
• Protected caller: AsyncCaller
The async caller should be used by subclasses to make any async calls, which will thus benefit from the concurrency and retry logic.
Inherited from
Defined in
langchain/src/base_language/index.ts:40
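The idea behind the async caller can be sketched as a small wrapper that retries failed promises. MiniCaller below is an illustrative mock of the pattern, not the actual AsyncCaller implementation (which also handles concurrency limits):

```typescript
// Minimal retry wrapper sketching what an async caller provides:
// every call funneled through it gets retry logic for free.
class MiniCaller {
  constructor(private maxRetries = 3) {}

  async call<T>(fn: () => Promise<T>): Promise<T> {
    let lastError: unknown;
    for (let attempt = 0; attempt <= this.maxRetries; attempt++) {
      try {
        return await fn();
      } catch (err) {
        lastError = err; // remember the failure and retry
      }
    }
    throw lastError; // all attempts failed
  }
}
```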
frequencyPenalty
• frequencyPenalty: undefined | number = undefined
Implementation of
HFInput.frequencyPenalty
Defined in
langchain/src/llms/hf.ts:39
maxTokens
• maxTokens: undefined | number = undefined
Implementation of
HFInput.maxTokens
Defined in
langchain/src/llms/hf.ts:33
model
• model: string = "gpt2"
Implementation of
HFInput.model
Defined in
langchain/src/llms/hf.ts:29
name
• name: string
The name of the LLM class
Inherited from
Defined in
langchain/src/llms/base.ts:29
temperature
• temperature: undefined | number = undefined
Implementation of
HFInput.temperature
Defined in
langchain/src/llms/hf.ts:31
topK
• topK: undefined | number = undefined
Implementation of
HFInput.topK
Defined in
langchain/src/llms/hf.ts:37
topP
• topP: undefined | number = undefined
Implementation of
HFInput.topP
Defined in
langchain/src/llms/hf.ts:35
verbose
• verbose: boolean
Whether to print out response text.
Inherited from
Defined in
langchain/src/base_language/index.ts:32
Methods
_call
▸ _call(prompt, _stop?): Promise<string>
Run the LLM on the given prompt and input.
Parameters
Name | Type |
---|---|
prompt | string |
_stop? | string[] |
Returns
Promise<string>
Overrides
Defined in
langchain/src/llms/hf.ts:69
_generate
▸ _generate(prompts, stop?): Promise<LLMResult>
Run the LLM on the given prompts and input.
Parameters
Name | Type |
---|---|
prompts | string[] |
stop? | string[] |
Returns
Promise<LLMResult>
Inherited from
Defined in
langchain/src/llms/base.ts:197
_identifyingParams
▸ _identifyingParams(): Record<string, any>
Get the identifying parameters of the LLM.
Returns
Record<string, any>
Inherited from
Defined in
langchain/src/llms/base.ts:140
_llmType
▸ _llmType(): string
Return the string type key uniquely identifying this class of LLM.
Returns
string
Overrides
Defined in
langchain/src/llms/hf.ts:65
_modelType
▸ _modelType(): string
Returns
string
Inherited from
Defined in
langchain/src/llms/base.ts:160
call
▸ call(prompt, stop?): Promise<string>
Convenience wrapper for generate that takes in a single string prompt and returns a single string output.
Parameters
Name | Type |
---|---|
prompt | string |
stop? | string[] |
Returns
Promise<string>
Inherited from
Defined in
langchain/src/llms/base.ts:131
generate
▸ generate(prompts, stop?): Promise<LLMResult>
Run the LLM on the given prompts and input, handling caching.
Parameters
Name | Type |
---|---|
prompts | string[] |
stop? | string[] |
Returns
Promise<LLMResult>
Inherited from
Defined in
langchain/src/llms/base.ts:84
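The cache-handling behaviour can be sketched roughly as follows. This is a simplified mock of the pattern, not BaseLLM's actual code: prompts with a cached result are served from the cache, and only the misses go through the underlying generation function.

```typescript
// Simplified sketch of cache-aware generation: look each prompt up in a
// cache first, and only run the underlying model for cache misses.
type Generation = { text: string };

async function generateWithCache(
  prompts: string[],
  cache: Map<string, Generation[]>,
  generateFn: (prompts: string[]) => Promise<Generation[][]>
): Promise<Generation[][]> {
  const results: (Generation[] | undefined)[] = prompts.map((p) => cache.get(p));
  const missing = prompts.filter((_, i) => results[i] === undefined);

  if (missing.length > 0) {
    const fresh = await generateFn(missing);
    let j = 0;
    for (let i = 0; i < prompts.length; i++) {
      if (results[i] === undefined) {
        results[i] = fresh[j++];
        cache.set(prompts[i], results[i]!); // populate the cache for next time
      }
    }
  }
  return results as Generation[][];
}
```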
generatePrompt
▸ generatePrompt(promptValues, stop?): Promise<LLMResult>
Parameters
Name | Type |
---|---|
promptValues | BasePromptValue[] |
stop? | string[] |
Returns
Promise<LLMResult>
Inherited from
Defined in
langchain/src/llms/base.ts:44
getNumTokens
▸ getNumTokens(text): Promise<number>
Parameters
Name | Type |
---|---|
text | string |
Returns
Promise<number>
Inherited from
Defined in
langchain/src/base_language/index.ts:62
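When no exact tokenizer is available, a rough heuristic of about four characters per token is a common fallback. The function below is illustrative only; the actual method may use a real tokenizer when one can be loaded:

```typescript
// Rough fallback heuristic: about one token per four characters.
// Real implementations prefer an exact tokenizer when available.
function approxNumTokens(text: string): number {
  return Math.ceil(text.length / 4);
}
```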
serialize
▸ serialize(): SerializedLLM
Return a json-like object representing this LLM.
Returns
SerializedLLM
Inherited from
Defined in
langchain/src/llms/base.ts:152
deserialize
▸ Static deserialize(data): Promise<BaseLLM>
Load an LLM from a json-like object describing it.
Parameters
Name | Type |
---|---|
data | SerializedLLM |
Returns
Promise<BaseLLM>
Inherited from
Defined in
langchain/src/llms/base.ts:167
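Together, serialize and deserialize support round-tripping an LLM through a JSON-like object. Conceptually, that object records the LLM's type key plus its identifying parameters; the sketch below illustrates the idea and is not the actual SerializedLLM shape:

```typescript
// Simplified sketch of serialize/deserialize round-tripping: the serialized
// object records the LLM type plus its identifying parameters.
interface MiniSerialized {
  _type: string;
  [key: string]: unknown;
}

function serialize(llmType: string, params: Record<string, unknown>): MiniSerialized {
  return { _type: llmType, ...params };
}

function deserialize(data: MiniSerialized): { llmType: string; params: Record<string, unknown> } {
  const { _type, ...params } = data;
  return { llmType: _type, params };
}
```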
imports
▸ Static imports(): Promise<{ HfInference: typeof HfInference }>
Returns
Promise<{ HfInference: typeof HfInference }>
Defined in
langchain/src/llms/hf.ts:88