Available in Classic and VPC
You can use LangChain to easily integrate and use CLOVA Studio's HyperCLOVA X model and embedding tools.
LangChain is an open source framework that supports language model-based applications. You can link various tools, including language models like HyperCLOVA X, vector databases, and search engines, in a chain structure, simplifying the process of connecting features and integrating the development process. LangGraph, an AI agent development framework, can also be utilized through LangChain integrations.
This LangChain integrations guide describes how to install LangChain and configure settings for integrating CLOVA Studio. Additionally, it provides example codes for using CLOVA Studio's HyperCLOVA X model and embedding tools through LangChain, as practical references for real-world development.
- LangChain is implemented in Python, JavaScript, and TypeScript. CLOVA Studio supports the Python-based version of LangChain, and this guide is also written with a focus on Python.
- LangChain is a trademark of LangChain Inc. All trademark rights are held by LangChain Inc. NAVER Cloud is used in this guide for reference purposes only. This does not imply any sponsorship, endorsement, or partnership between LangChain Inc. and NAVER Cloud.
- LangChain is open source software, and NAVER Cloud does not guarantee or take responsibility for the quality or performance of LangChain. For more information about LangChain, see the LangChain official documentation.
Install and verify LangChain
To use LangChain integrated with CLOVA Studio, Python version 3.9 or higher is required. After you install Python, use the following commands to install LangChain, then install the langchain-naver package necessary for integrations.
pip install -qU langchain # install LangChain
pip install -qU langchain-naver # install LangChain-NAVER integration package
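If you want to confirm the installation from code, a small sketch like the following can check the installed version against a minimum. The meets_minimum helper is a hypothetical illustration, not part of LangChain, and it assumes plain dotted numeric version strings:

```python
from importlib import metadata

def meets_minimum(installed: str, required: str) -> bool:
    # Compare dotted numeric version strings, e.g. "0.3.14" >= "0.3.4".
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(installed) >= as_tuple(required)

# Report the installed langchain version, if any.
try:
    version = metadata.version("langchain")
    print("langchain", version)
except metadata.PackageNotFoundError:
    print("langchain is not installed")
```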
- To properly use all features provided by LangChain, keep the package up to date. Check your installed version regularly and perform updates.
- You may continue to use the langchain-community (v0.3.4 or later) package used for existing LangChain integrations, but as of April 17, 2025, technical support is discontinued, and integrations are unavailable except for the following models and APIs. Models and APIs supported by langchain-community: HCX-003 and HCX-DASH-001 (including tuned models), and embedding (including v2).
Verify scope of integrations
CLOVA Studio offers these key features you can use through LangChain:
- Chat mode model in CLOVA Studio's Playground
- Model: Basic model (Example: HCX-005) and tuned models of the basic model
- Associated APIs: Chat completions, chat completions v3, and OpenAI compatibility
- Integration examples: Use HyperCLOVA X model
- Model: Basic model (Example: HCX-005) and tuned models of the basic model
- Embedding (or embedding v2) tool in CLOVA Studio's Explorer
- Model: clir-emb-dolphin, clir-sts-dolphin, and bge-m3
- Associated APIs: Embedding, embedding v2, and OpenAI compatibility
- Integration example: Use embedding tools
- Model: clir-emb-dolphin, clir-sts-dolphin, and bge-m3
Integration settings
To securely use CLOVA Studio features through LangChain, register the API key as an environment variable, which acts as the authentication information required for API calls. You can issue and check your API key from the [API key] menu in CLOVA Studio. Create the service app when you apply it to an actual service.
When you use a service app, you must register the service API key as an environment variable.
To register the authentication information as environment variables:
- Registration through the terminal
export CLOVASTUDIO_API_KEY="<CLOVA-STUDIO-API-KEY>"
- Registration through Python
import getpass
import os

os.environ["CLOVASTUDIO_API_KEY"] = getpass.getpass("Enter CLOVA Studio API key: ")
- When you use langchain-community (v0.3.14 or earlier), two pieces of authentication information are required: the API key and the API Gateway key. When you use the embedding (including v2) tool, you must additionally register the app ID of the service app as an environment variable.
- To view the relevant information, navigate to [Management] > [Apply for service app] on the left of the CLOVA Studio interface, select the service app from the list, click [View code], and check the [Current] tab.
- Registration through the terminal
export NCP_CLOVASTUDIO_API_KEY="<NCP-CLOVASTUDIO-API-KEY>"
export NCP_APIGW_API_KEY="<NCP-APIGW-API-KEY>"
export NCP_CLOVASTUDIO_APP_ID="<embedding service app ID>"
- Registration through Python
import getpass
import os

os.environ["NCP_CLOVASTUDIO_API_KEY"] = getpass.getpass("Enter NCP CLOVA STUDIO API key: ")
os.environ["NCP_APIGW_API_KEY"] = getpass.getpass("Enter NCP API Gateway API key: ")
os.environ["NCP_CLOVASTUDIO_APP_ID"] = input("Enter embedding service ID: ")
Integration examples
This section introduces example codes for integrating LangChain with CLOVA Studio.
- Example using CLOVA Studio's HyperCLOVA X model through LangChain
- Example using CLOVA Studio's embedding tools through LangChain
Use HyperCLOVA X model
The following is an example code for using CLOVA Studio's HyperCLOVA X model through LangChain:
from langchain_naver import ChatClovaX
chat = ChatClovaX(
model="HCX-005" # Enter model name (default value: HCX-005)
)
You can use the following parameters with your request when defining the ChatClovaX class instance:
| Field | Type | Required | Description |
|---|---|---|---|
| model | String | Optional | Model name. |
| temperature | Float | Optional | Degree of diversity in created tokens. |
| max_tokens | Integer | Optional | Maximum number of tokens created. |
| max_completion_tokens | Integer | Optional | Maximum number of tokens created. |
| thinking | Dict | Optional | Inference model settings information. |
| repeat_penalty | Float | Optional | Penalty degree for creating the same token. |
| repetition_penalty | Float | Optional | Penalty degree for creating the same token. |
| stop | List[String] | Optional | Stop character to terminate token creation. |
| seed | Integer | Optional | Adjust the consistency level of results when running the model repeatedly. |
| top_k | Integer | Optional | Sampling from the designated K tokens with the highest probabilities in created token candidates. |
| top_p | Float | Optional | Sampling based on cumulative probability in the created token candidates. |
| timeout | Integer | Optional | Timeout (seconds). |
| max_retries | Integer | Optional | Retry count. |
| api_key | String | Optional | API key. |
| base_url | String | Optional | CLOVA Studio common request API URL. |
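Several of these parameters can be combined in a single instantiation. The following is a minimal sketch; the parameter values here are illustrative placeholders, not recommended settings:

```python
# Illustrative keyword arguments for ChatClovaX, mirroring the table above.
# The values are placeholders, not recommendations.
params = {
    "model": "HCX-005",
    "temperature": 0.5,
    "max_tokens": 512,
    "top_p": 0.8,
    "repetition_penalty": 1.1,
    "max_retries": 2,
}

# chat = ChatClovaX(**params)  # requires langchain-naver and a registered API key
print(sorted(params))
```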
You can run chat models using invoke or stream, LangChain's Chat model class methods. The following are example codes:
invoke
messages = [
    (
        "system",
        "CLOVA Studio is a development tool that allows you to easily create AI services using the HyperCLOVA X model.",
    ),
    ("human", "What is CLOVA Studio?"),
]
ai_msg = chat.invoke(messages)
ai_msg
stream
messages = [
    (
        "system",
        "CLOVA Studio is a development tool that allows you to easily create AI services using the HyperCLOVA X model.",
    ),
    ("human", "What is CLOVA Studio?"),
]
for chunk in chat.stream(messages):
    print(chunk.content, end="", flush=True)
Additionally, you can use the bind_tools method to implement LangChain's tool calling feature based on CLOVA Studio's function calling feature. The following is an example code:
from langchain_naver import ChatClovaX
from pydantic import BaseModel, Field
chat = ChatClovaX(
    model="HCX-005",  # Enter the model name (default value: HCX-005)
    max_tokens=1024  # Set the value to 1024 or more when using function calling
)
class GetWeather(BaseModel):
'''Check the current weather for the given location.'''
location: str = Field(
..., description="The name of the city where you want to check the weather. Example: Gyeonggi-do, Seongnam-si, Bundang-gu"
)
chat_with_tools = chat.bind_tools(
[GetWeather]
)
ai_msg = chat_with_tools.invoke(
"Is it hotter in Bundang or Pangyo?"
)
ai_msg.tool_calls
If you use a thinking model, you can configure related features through the thinking parameter when you define the ChatClovaX class instance. You can also check the reasoning process separately, as in the following example.
from langchain_naver import ChatClovaX
chat = ChatClovaX(
model="HCX-007",
thinking={
"effort": "low" # 'none' (do not use inference), 'low' (default value), 'medium', or 'high'
},
)
ai_msg = chat.invoke("What is 3 cubed?")
print(ai_msg.content) # Final response
print(ai_msg.additional_kwargs["thinking_content"]) # Reasoning process
For more information about how to use CLOVA Studio's chat models through LangChain, see the official documentation.
Use embedding tools
The following is an example code for using CLOVA Studio's embedding (embedding v2) tool through LangChain:
from langchain_naver import ClovaXEmbeddings
embeddings = ClovaXEmbeddings(
model="clir-emb-dolphin", # Model name (default value: clir-emb-dolphin)
)
You can use the following parameters with your request when defining the ClovaXEmbeddings class instance:
| Field | Type | Required | Description |
|---|---|---|---|
| model | String | Optional | Model name. |
| api_key | String | Optional | API key. |
| timeout | Integer | Optional | Timeout (seconds). |
| base_url | String | Optional | CLOVA Studio common request API URL. |
You can run embedding models using embed_query or embed_documents, LangChain's embedding model class methods. The following are example codes:
embed_query
query = "CLOVA Studio is a development tool that allows you to easily create AI services using the HyperCLOVA X model."
single_vector = embeddings.embed_query(query)
embed_documents
text1 = "CLOVA Studio is a development tool that allows you to easily create AI services using the HyperCLOVA X model."
text2 = "LangChain is an open source framework designed to support the development of language model-based applications."
document = [text1, text2]
multiple_vector = embeddings.embed_documents(document)
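A common follow-up with the returned vectors is similarity comparison, for example for semantic search. The following is a minimal sketch in plain Python; the short vectors here are placeholders standing in for real embed_documents output:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Placeholder vectors standing in for embeddings.embed_documents(...) output.
vec1 = [0.1, 0.3, 0.5]
vec2 = [0.2, 0.1, 0.4]
print(round(cosine_similarity(vec1, vec2), 4))
```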
For more information about how to use CLOVA Studio's embedding (embedding v2) tool through LangChain, see the official documentation.