Criteria evaluation chain that requires references.

Hierarchy

Constructors

Properties

llm: BaseLanguageModelInterface

LLM Wrapper to use

outputKey: string = "text"

Key to use for output. Defaults to "text".

outputParser: BaseLLMOutputParser<EvalOutputType> = ...
prompt: BasePromptTemplate

Prompt object to use

requiresInput: boolean = true
requiresReference: boolean = true
skipReferenceWarning: string = ...
criterionName?: string
evaluationName?: string = ...

The name of the evaluation.

llmKwargs?: any

Kwargs to pass to LLM

memory?: any
skipInputWarning?: string = ...
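
A minimal construction sketch: this assumes the loadEvaluator helper from langchain/evaluation and the ChatOpenAI wrapper from @langchain/openai, and that the "labeled_criteria" evaluator type is the one requiring references; the option names shown are illustrative assumptions, not a definitive API listing.

    import { loadEvaluator } from "langchain/evaluation";
    import { ChatOpenAI } from "@langchain/openai";

    // "labeled_criteria" evaluators require a reference label at evaluation time.
    const evaluator = await loadEvaluator("labeled_criteria", {
      llm: new ChatOpenAI({ temperature: 0 }), // LLM wrapper to use
      criteria: "correctness", // resolved by the criteria-resolution method below
    });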

Accessors

Methods

  • Call the chain on all inputs in the list.

    Deprecated: use .batch() instead. Will be removed in 0.2.0.

    Parameters

    • inputs: ChainValues[]
    • Optional config: any[]

    Returns Promise<ChainValues[]>
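
    A hedged sketch of the recommended replacement: the input keys below are assumptions and must match whatever variables this chain's prompt actually expects.

    const results = await chain.batch([
      { input: "What is 2 + 2?", output: "4", reference: "4" },
      { input: "Capital of France?", output: "Lyon", reference: "Paris" },
    ]);
    // results is an array of ChainValues, one per input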

  • Run the core logic of this chain and add to output if desired.

    Wraps _call and handles memory.

    Parameters

    • values: any
    • Optional config: any

    Returns Promise<ChainValues>

  • Check if the evaluation arguments are valid.

    Parameters

    • Optional reference: string

      The reference label.

    • Optional input: string

      The input string.

    Returns void

    Throws an error if the evaluator requires an input string but none is provided, or if the evaluator requires a reference label but none is provided.
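
    A short failure-mode sketch, assuming the chain is driven through evaluateStrings as in the example further below: because requiresReference is true, omitting the reference label should throw.

    try {
      await evaluator.evaluateStrings({
        input: "What is 2 + 2?",
        prediction: "4",
        // reference intentionally omitted
      });
    } catch (err) {
      console.error(err); // expected: an error stating that a reference label is required
    }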

  • Evaluate Chain or LLM output, based on optional input and label.

    Parameters

    • args: StringEvaluatorArgs
    • Optional config: any

    Returns Promise<ChainValues>

    The evaluation results containing the score or value. It is recommended that the dictionary contain the following keys:

    • score: the score of the evaluation, if applicable.
    • value: the string value of the evaluation, if applicable.
    • reasoning: the reasoning for the evaluation, if applicable.
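
    A hedged end-to-end sketch, assuming an evaluator constructed as in the Properties example above; the actual score and reasoning text depend on the underlying LLM.

    const res = await evaluator.evaluateStrings({
      input: "What is the capital of France?",
      prediction: "Paris is the capital of France.",
      reference: "Paris",
    });
    // e.g. { score: 1, value: "Y", reasoning: "..." }
    console.log(res);
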
  • Invokes the chain with the provided input and returns the output.

    Parameters

    • input: ChainValues

      Input values for the chain run.

    • Optional options: any

    Returns Promise<ChainValues>

    Promise that resolves with the output of the chain run.
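
    A hedged sketch of a direct invocation; the input keys are assumptions that must match the chain's prompt variables, and evaluateStrings is usually the more convenient entry point.

    const result = await chain.invoke({
      input: "What is 2 + 2?",
      output: "4",
      reference: "4",
    });
    console.log(result[chain.outputKey]); // outputKey defaults to "text"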

  • Format prompt with values and pass to LLM

    Parameters

    • values: any

      keys to pass to prompt template

    • Optional callbackManager: any

      CallbackManager to use

    Returns Promise<EvalOutputType>

    Completion from LLM.

    llm.predict({ adjective: "funny" })
    
  • Parameters

    • inputs: Record<string, unknown>
    • outputs: Record<string, unknown>
    • returnOnlyOutputs: boolean = false

    Returns Promise<Record<string, unknown>>

  • Resolve the criteria to evaluate.

    Parameters

    • Optional criteria: CriteriaLike

      The criteria to evaluate the runs against. It can be:

      • a mapping of a criterion name to its description
      • a single criterion name present in one of the default criteria
      • a single ConstitutionalPrinciple instance

    Returns Record<string, string>

    A dictionary mapping criterion names to descriptions.
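
    A sketch of the three accepted CriteriaLike forms described above; the ConstitutionalPrinciple import path and constructor options are assumptions based on the langchain/chains package.

    import { ConstitutionalPrinciple } from "langchain/chains";

    // 1. A single default criterion name.
    const asName = "conciseness";

    // 2. A mapping of criterion name to description.
    const asMapping = {
      helpfulness: "Is the submission helpful, insightful, and appropriate?",
    };

    // 3. A ConstitutionalPrinciple instance.
    const asPrinciple = new ConstitutionalPrinciple({
      name: "harmlessness",
      critiqueRequest: "Identify anything harmful in the submission.",
      revisionRequest: "Rewrite the submission to remove harmful content.",
    });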