ClovaXEmbeddings
This notebook covers how to get started with embedding models provided by CLOVA Studio. For detailed documentation on ClovaXEmbeddings
features and configuration options, please refer to the API reference.
Overview
Integration details
| Provider | Package |
|---|---|
| Naver | langchain-community |
Setup
Before using embedding models provided by CLOVA Studio, you must go through the four steps below.
- Create a NAVER Cloud Platform account
- Apply to use CLOVA Studio
- Create a CLOVA Studio Test App or Service App for the model you want to use (see here.)
- Issue a Test or Service API key (see here.)
Credentials
Set the NCP_CLOVASTUDIO_API_KEY environment variable with your API key.
- Note that if you are using a legacy API key (one that doesn't start with the nv-* prefix), you might need to set two additional environment variables (NCP_APIGW_API_KEY and NCP_CLOVASTUDIO_APP_ID). These can be found by clicking App Request Status > Service App, Test App List > Details for each app in CLOVA Studio.
import getpass
import os
if not os.getenv("NCP_CLOVASTUDIO_API_KEY"):
    os.environ["NCP_CLOVASTUDIO_API_KEY"] = getpass.getpass(
        "Enter NCP CLOVA Studio API Key: "
    )
Uncomment below to use a legacy API key:
# if not os.getenv("NCP_APIGW_API_KEY"):
# os.environ["NCP_APIGW_API_KEY"] = getpass.getpass("Enter NCP API Gateway API Key: ")
# os.environ["NCP_CLOVASTUDIO_APP_ID"] = input("Enter NCP CLOVA Studio App ID: ")
Installation
The ClovaXEmbeddings integration lives in the langchain_community package:
# install package
!pip install -U langchain-community
Instantiation
Now we can instantiate our embeddings object and embed queries or documents:
- There are several embedding models available in CLOVA Studio. Please refer here for further details.
- Note that you might need to normalize the embeddings depending on your specific use case.
from langchain_community.embeddings import ClovaXEmbeddings
embeddings = ClovaXEmbeddings(
    model="clir-emb-dolphin"  # set with the model name of the corresponding app id; default is `clir-emb-dolphin`
)
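As noted above, some use cases call for normalized embedding vectors. The embeddings returned by the model are not guaranteed to be unit-length, so if your downstream similarity metric assumes it, you can normalize them yourself. Below is a minimal L2-normalization sketch in plain Python; the `l2_normalize` helper is illustrative and not part of the langchain_community API:

```python
import math


def l2_normalize(vec):
    """Scale a vector to unit length (L2 norm of 1)."""
    norm = math.sqrt(sum(x * x for x in vec))
    # Leave an all-zero vector unchanged to avoid division by zero
    return [x / norm for x in vec] if norm else vec


unit = l2_normalize([3.0, 4.0])
print(unit)  # [0.6, 0.8]
```

You would apply the same helper to each vector returned by embed_query or embed_documents before indexing, if normalization is required.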
Indexing and Retrieval
Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our RAG tutorials.
Below, see how to index and retrieve data using the embeddings object we initialized above. In this example, we will index and retrieve a sample document in the InMemoryVectorStore.
# Create a vector store with a sample text
from langchain_core.vectorstores import InMemoryVectorStore
text = "CLOVA Studio is an AI development tool that allows you to customize your own HyperCLOVA X models."
vectorstore = InMemoryVectorStore.from_texts(
    [text],
    embedding=embeddings,
)
# Use the vectorstore as a retriever
retriever = vectorstore.as_retriever()
# Retrieve the most similar text
retrieved_documents = retriever.invoke("What is CLOVA Studio?")
# show the retrieved document's content
retrieved_documents[0].page_content
'CLOVA Studio is an AI development tool that allows you to customize your own HyperCLOVA X models.'
Direct Usage
Under the hood, the vectorstore and retriever implementations are calling embeddings.embed_documents(...) and embeddings.embed_query(...) to create embeddings for the text(s) used in from_texts and retrieval invoke operations, respectively.
You can directly call these methods to get embeddings for your own use cases.
Embed single texts
You can embed single texts or documents with embed_query:
single_vector = embeddings.embed_query(text)
print(str(single_vector)[:100]) # Show the first 100 characters of the vector
[-0.094717406, -0.4077411, -0.5513184, 1.6024436, -1.3235079, -1.0720996, -0.44471845, 1.3665184, 0.
Embed multiple texts
You can embed multiple texts with embed_documents:
text2 = "LangChain is the framework for building context-aware reasoning applications"
two_vectors = embeddings.embed_documents([text, text2])
for vector in two_vectors:
print(str(vector)[:100]) # Show the first 100 characters of the vector
[-0.094717406, -0.4077411, -0.5513184, 1.6024436, -1.3235079, -1.0720996, -0.44471845, 1.3665184, 0.
[-0.25525448, -0.84877056, -0.6928286, 1.5867524, -1.2930486, -0.8166254, -0.17934391, 1.4236152, 0.
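Once you have raw vectors from embed_query or embed_documents, you can compare them directly with cosine similarity. The helper below is a plain-Python sketch for illustration, not part of the integration; in practice you might use numpy or let a vector store handle scoring:

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors (range [-1, 1])."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Example with toy vectors; with real usage you would pass
# embeddings.embed_query(...) / embeddings.embed_documents(...) outputs.
print(cosine_similarity([1.0, 0.0], [1.0, 1.0]))  # 0.7071067811865475
```

A higher score means the two texts are closer in the embedding space, which is the same signal the retriever above uses to rank documents.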
Additional functionalities
Service App
When going live with a production-level application using CLOVA Studio, you should apply for and use a Service App (see here). A Service App requires its own Service API key and can only be called with that key.
# Update environment variables
os.environ["NCP_CLOVASTUDIO_API_KEY"] = getpass.getpass(
    "Enter NCP CLOVA Studio API Key for Service App: "
)

# Uncomment below to use a legacy API key:
# os.environ["NCP_CLOVASTUDIO_APP_ID"] = input("Enter NCP CLOVA Studio Service App ID: ")
embeddings = ClovaXEmbeddings(
    service_app=True,
    model="clir-emb-dolphin",  # set with the model name of the corresponding app id of your Service App
)
API Reference
For detailed documentation on ClovaXEmbeddings
features and configuration options, please refer to the API reference.
Related
- Embedding model conceptual guide
- Embedding model how-to guides