
ChatVertexAI

Google Vertex AI is a service that exposes all foundation models available in Google Cloud, such as gemini-1.5-pro and gemini-1.5-flash.

This page will help you get started with ChatVertexAI chat models. For detailed documentation of all ChatVertexAI features and configurations, head to the API reference.

Overview

Integration details

| Class | Package | Local | Serializable | PY support |
| --- | --- | --- | --- | --- |
| ChatVertexAI | @langchain/google-vertexai | ❌ | ✅ | ✅ |

Model features

See the links in the table headers below for guides on how to use specific features.

| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Token usage | Logprobs |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ |

Setup

LangChain.js supports two different authentication methods based on whether you're running in a Node.js environment or a web environment.

To access ChatVertexAI models you'll need to set up Google Vertex AI in your Google Cloud Platform (GCP) account, save the credentials file, and install the @langchain/google-vertexai integration package.

Credentials

Head to your GCP account and generate a credentials file. Once you've done this, set the GOOGLE_APPLICATION_CREDENTIALS environment variable:

export GOOGLE_APPLICATION_CREDENTIALS="path/to/your/credentials.json"
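
Alternatively, if you have the gcloud CLI installed, you can use Application Default Credentials without an explicit key file (an alternative to the key-file approach above, not part of the original setup):

gcloud auth application-default login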

If running in a web environment, you should set the GOOGLE_VERTEX_AI_WEB_CREDENTIALS environment variable to the stringified JSON contents of your credentials, and install the @langchain/google-vertexai-web package:

GOOGLE_VERTEX_AI_WEB_CREDENTIALS={"type":"service_account","project_id":"YOUR_PROJECT-12345",...}
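
One way to produce that value is to stringify a downloaded key file with Node (a minimal sketch; the credentials.json path is an assumption):

node -e 'console.log(JSON.stringify(require("./credentials.json")))'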

If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:

# export LANGCHAIN_TRACING_V2="true"
# export LANGCHAIN_API_KEY="your-api-key"

Installation

The LangChain ChatVertexAI integration lives in the @langchain/google-vertexai package:

yarn add @langchain/google-vertexai

Or, if you're using it in a web environment such as a Vercel Edge Function:

yarn add @langchain/google-vertexai-web
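
If you use npm instead of yarn, the equivalent commands are:

npm install @langchain/google-vertexai

npm install @langchain/google-vertexai-web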

Instantiation

Now we can instantiate our model object and generate chat completions:

import { ChatVertexAI } from "@langchain/google-vertexai";
// Or, if running in a web environment, import from the web package instead:
// import { ChatVertexAI } from "@langchain/google-vertexai-web";

const llm = new ChatVertexAI({
  model: "gemini-1.5-pro",
  temperature: 0,
  maxRetries: 2,
  authOptions: {
    // ... auth options
  },
  // other params...
});

Invocation

const aiMsg = await llm.invoke([
  [
    "system",
    "You are a helpful assistant that translates English to French. Translate the user sentence.",
  ],
  ["human", "I love programming."],
]);
aiMsg;
AIMessageChunk {
  "content": "J'adore programmer. \n",
  "additional_kwargs": {},
  "response_metadata": {},
  "tool_calls": [],
  "tool_call_chunks": [],
  "invalid_tool_calls": [],
  "usage_metadata": {
    "input_tokens": 20,
    "output_tokens": 7,
    "total_tokens": 27
  }
}
console.log(aiMsg.content);
J'adore programmer.
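
ChatVertexAI also supports token-level streaming (see the features table above). A minimal sketch using the standard .stream() method; the prompt text here is just an example:

const stream = await llm.stream("Translate 'I love programming.' into French.");
for await (const chunk of stream) {
  // Each chunk is a partial AIMessageChunk; log its content as it arrives.
  console.log(chunk.content);
}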

Chaining

We can chain our model with a prompt template like so:

import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant that translates {input_language} to {output_language}.",
  ],
  ["human", "{input}"],
]);

const chain = prompt.pipe(llm);
await chain.invoke({
  input_language: "English",
  output_language: "German",
  input: "I love programming.",
});
AIMessageChunk {
  "content": "Ich liebe das Programmieren. \n",
  "additional_kwargs": {},
  "response_metadata": {},
  "tool_calls": [],
  "tool_call_chunks": [],
  "invalid_tool_calls": [],
  "usage_metadata": {
    "input_tokens": 15,
    "output_tokens": 9,
    "total_tokens": 24
  }
}
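
The features table above also lists structured output support. A minimal sketch using the standard withStructuredOutput() method with a Zod schema; the schema and prompt are hypothetical, for illustration only:

import { z } from "zod";

// Hypothetical schema describing the shape we want back.
const translationSchema = z.object({
  translation: z.string().describe("The translated sentence"),
  language: z.string().describe("The target language"),
});

const structuredLlm = llm.withStructuredOutput(translationSchema);
// Returns a plain object matching the schema rather than an AIMessage.
await structuredLlm.invoke("Translate 'I love programming.' into German.");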

API reference

For detailed documentation of all ChatVertexAI features and configurations, head to the API reference: https://api.js.langchain.com/classes/langchain_google_vertexai.ChatVertexAI.html

