A chat model that interacts with the Amazon Bedrock service. It extends the base chat model class and implements the BaseBedrockInput interface. The class is designed to authenticate and interact with Bedrock, which is part of Amazon Web Services (AWS). It uses AWS credentials for authentication and can be configured with various parameters such as the model to use, the AWS region, and the maximum number of tokens to generate.

The BedrockChat class supports both synchronous and asynchronous interactions with the model, allowing responses to be streamed and per-token callbacks to be handled. It can be configured with optional parameters such as temperature, stop sequences, and guardrail settings for finer control over the generated responses.

import { BedrockChat } from "@langchain/community/chat_models/bedrock";
import { HumanMessage } from "@langchain/core/messages";

async function run() {
  // Instantiate the BedrockChat model with the desired configuration
  const model = new BedrockChat({
    model: "anthropic.claude-v2",
    region: "us-east-1",
    credentials: {
      accessKeyId: process.env.BEDROCK_AWS_ACCESS_KEY_ID!,
      secretAccessKey: process.env.BEDROCK_AWS_SECRET_ACCESS_KEY!,
    },
    maxTokens: 150,
    temperature: 0.7,
    stopSequences: ["\n", " Human:", " Assistant:"],
    streaming: false,
    trace: "ENABLED",
    guardrailIdentifier: "your-guardrail-id",
    guardrailVersion: "1.0",
    guardrailConfig: {
      tagSuffix: "example",
      streamProcessingMode: "SYNCHRONOUS",
    },
  });

  // Prepare the message to be sent to the model
  const message = new HumanMessage("Tell me a joke");

  // Invoke the model with the message
  const res = await model.invoke([message]);

  // Output the response from the model
  console.log(res);
}

run().catch(console.error);

For streaming responses, use the following example:

import { BedrockChat } from "@langchain/community/chat_models/bedrock";
import { HumanMessage } from "@langchain/core/messages";

async function runStreaming() {
  // Instantiate the BedrockChat model with the desired configuration
  const model = new BedrockChat({
    model: "anthropic.claude-3-sonnet-20240229-v1:0",
    region: "us-east-1",
    credentials: {
      accessKeyId: process.env.BEDROCK_AWS_ACCESS_KEY_ID!,
      secretAccessKey: process.env.BEDROCK_AWS_SECRET_ACCESS_KEY!,
    },
    maxTokens: 150,
    temperature: 0.7,
    stopSequences: ["\n", " Human:", " Assistant:"],
    streaming: true,
    trace: "ENABLED",
    guardrailIdentifier: "your-guardrail-id",
    guardrailVersion: "1.0",
    guardrailConfig: {
      tagSuffix: "example",
      streamProcessingMode: "SYNCHRONOUS",
    },
  });

  // Prepare the message to be sent to the model
  const message = new HumanMessage("Tell me a joke");

  // Stream the response from the model
  const stream = await model.stream([message]);
  for await (const chunk of stream) {
    // Output each chunk of the response
    console.log(chunk);
  }
}

runStreaming().catch(console.error);
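
Per-token callbacks can also be attached at call time. A minimal sketch, assuming the same credential setup as the examples above (the console output logic is illustrative):

// Attach a per-token callback at call time; configuration is assumed
// to match the streaming example above.
const streamingModel = new BedrockChat({
  model: "anthropic.claude-3-sonnet-20240229-v1:0",
  region: "us-east-1",
  streaming: true,
});

await streamingModel.invoke([new HumanMessage("Tell me a joke")], {
  callbacks: [
    {
      // Called once for each new token generated while streaming.
      handleLLMNewToken(token: string) {
        process.stdout.write(token);
      },
    },
  ],
});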


Implements

  • BaseBedrockInput


Properties

codec: EventStreamCodec = ...
credentials: CredentialType

AWS Credentials. If no credentials are provided, the default credentials from @aws-sdk/credential-provider-node will be used.
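
For example, when credentials are available through the standard AWS environment variables or a shared credentials file, the field can be omitted entirely (a sketch; model and region are illustrative):

// Credentials are resolved by @aws-sdk/credential-provider-node, e.g. from
// AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY or ~/.aws/credentials.
const model = new BedrockChat({
  model: "anthropic.claude-v2",
  region: "us-east-1",
});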

fetchFn: {
    (input: URL | RequestInfo, init?: RequestInit): Promise<Response>;
    (input: RequestInfo, init?: RequestInit): Promise<Response>;
}

A custom fetch function for low-level access to the AWS API. Defaults to fetch().
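
As a sketch, a thin wrapper around the global fetch can be supplied, for example to log each outgoing Bedrock request (the logging is illustrative):

const model = new BedrockChat({
  model: "anthropic.claude-v2",
  region: "us-east-1",
  // Illustrative wrapper: log the request target, then delegate to fetch().
  fetchFn: async (input, init) => {
    console.log("Bedrock request:", input.toString());
    return fetch(input, init);
  },
});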


guardrailIdentifier: string = ""

Identifier for the guardrail configuration.

guardrailVersion: string = ""

Version for the guardrail configuration.

model: string = "amazon.titan-tg1-large"

Model to use. For example, "amazon.titan-tg1-large". This is equivalent to the modelId property in the list-foundation-models API.

region: string

The AWS region, e.g. us-west-2. Falls back to the AWS_DEFAULT_REGION environment variable or the region specified in ~/.aws/config if not provided here.

streaming: boolean = false

Whether or not to stream responses.

usesMessagesApi: boolean = false
endpointHost?: string

Override the default endpoint hostname.

guardrailConfig?: {
    streamProcessingMode: "SYNCHRONOUS" | "ASYNCHRONOUS";
    tagSuffix: string;
}

Required when Guardrail is in use.
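
Taken together with guardrailIdentifier, guardrailVersion, and trace above, a guardrail-enabled configuration might look as follows (the identifier and version are placeholders):

const model = new BedrockChat({
  model: "anthropic.claude-v2",
  region: "us-east-1",
  guardrailIdentifier: "your-guardrail-id", // placeholder
  guardrailVersion: "1.0", // placeholder
  trace: "ENABLED",
  guardrailConfig: {
    tagSuffix: "example",
    streamProcessingMode: "SYNCHRONOUS",
  },
});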

maxTokens?: number = undefined

Max tokens.

modelKwargs?: Record<string, unknown>

Additional kwargs to pass to the model.
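
For example, provider-specific parameters that are not first-class constructor fields can be passed through here (top_p is an Anthropic sampling parameter, used for illustration):

const model = new BedrockChat({
  model: "anthropic.claude-v2",
  region: "us-east-1",
  // Forwarded in the request body alongside the standard parameters.
  modelKwargs: { top_p: 0.9 },
});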

stopSequences?: string[]

Deprecated: pass stop sequences as a call option using .bind() instead.
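
A minimal sketch of the recommended pattern, assuming a configured model instance as in the examples above:

// Bind stop sequences at call time instead of in the constructor.
const modelWithStop = model.bind({ stop: [" Human:", " Assistant:"] });
const res = await modelWithStop.invoke([new HumanMessage("Tell me a joke")]);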

temperature?: number = undefined

Temperature.

trace?: "ENABLED" | "DISABLED"

Trace settings for the Bedrock Guardrails.

Methods

bindTools

  • Parameters

    • tools: any[]
    • Optional _kwargs: Partial<unknown>

    Returns Runnable<BaseLanguageModelInput, BaseMessageChunk, this["ParsedCallOptions"]>

invocationParams

  • Parameters

    • Optional options: unknown

    Returns {
        guardrailConfig: undefined | {
            streamProcessingMode: "SYNCHRONOUS" | "ASYNCHRONOUS";
            tagSuffix: string;
        };
        max_tokens: undefined | number;
        modelKwargs: undefined | Record<string, unknown>;
        stop: any;
        temperature: undefined | number;
        tools: AnthropicTool[];
    }
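
As a sketch of how the bindTools method above might be used with an Anthropic model on Bedrock, assuming the model instance and imports from the examples above; the tool definition follows Anthropic's input_schema format and is purely illustrative:

// Hypothetical tool definition in Anthropic's tool format.
const weatherTool = {
  name: "get_weather",
  description: "Get the current weather for a given city.",
  input_schema: {
    type: "object",
    properties: { city: { type: "string" } },
    required: ["city"],
  },
};

// Bind the tool and invoke; the model may respond with a tool call.
const modelWithTools = model.bindTools([weatherTool]);
const res = await modelWithTools.invoke([
  new HumanMessage("What's the weather in Paris?"),
]);
console.log(res);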