Chroma
This notebook covers how to get started with the Chroma vector store.
Chroma is an AI-native, open-source vector database focused on developer productivity and happiness. Chroma is licensed under Apache 2.0. View the full docs of Chroma at this page, and find the API reference for the LangChain integration at this page.
Setup
To access Chroma vector stores you'll need to install the langchain-chroma integration package.
pip install -qU "langchain-chroma>=0.1.2"
Note: you may need to restart the kernel to use updated packages.
Credentials
You can use the Chroma vector store without any credentials; simply installing the package above is enough!
If you want to get automated tracing of your model calls you can also set your LangSmith API key by uncommenting below:
# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
# os.environ["LANGSMITH_TRACING"] = "true"
Instantiation
To instantiate a vector store from Chroma you must provide an embedding function. In this example we will use OllamaEmbeddings, since it is open source and does not require payment. You can read about OllamaEmbeddings at this page.
Basic Instantiation
Below is a basic instantiation, including the use of a directory to save the data locally.
from langchain_chroma import Chroma
from langchain_ollama import OllamaEmbeddings
embedding_function = OllamaEmbeddings(model="llama3")
vector_store = Chroma(
    collection_name="example_collection",
    embedding_function=embedding_function,
    persist_directory="./chroma_db",  # Where to save data locally, remove if not necessary
)
Instantiation from client
You can also instantiate from a Chroma client, which is particularly useful if you want easier access to the underlying database.
import chromadb
persistent_client = chromadb.PersistentClient()
collection = persistent_client.get_or_create_collection("collection_name")
collection.add(ids=["1", "2", "3"], documents=["a", "b", "c"])
vector_store_from_client = Chroma(
    client=persistent_client,
    collection_name="collection_name",
    embedding_function=embedding_function,
)
Add of existing embedding ID: 1
Add of existing embedding ID: 2
Add of existing embedding ID: 3
Insert of existing embedding ID: 1
Insert of existing embedding ID: 2
Insert of existing embedding ID: 3
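Because the wrapper and the client share the same persistent database, you can also read the collection back directly through the chromadb client. Below is a minimal sketch using the standard chromadb collection methods count and get, reusing the collection object created above.
# Inspect the underlying chromadb collection directly
print(collection.count())
print(collection.get(ids=["1", "2", "3"]))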
Manage vector store
Add items to vector store
We can add documents to our vector store by creating new Document objects and passing them in a list, along with a corresponding list of ids.
from langchain_core.documents import Document
document_1 = Document(
    page_content="foo",
    metadata={"source": "https://example.com"},
)
document_2 = Document(
    page_content="bar",
    metadata={"source": "https://example.com"},
)
document_3 = Document(
    page_content="baz",
    metadata={"source": "https://example.com"},
)

documents = [document_1, document_2, document_3]

vector_store.add_documents(documents=documents, ids=["1", "2", "3"])
['1', '2', '3']
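The ids argument is optional; if it is omitted, ids are generated for you. The call below is a small sketch of that variant (note that it would insert the same documents again under fresh ids).
# Without explicit ids, new ids are generated and returned
generated_ids = vector_store.add_documents(documents=documents)
print(generated_ids)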
Update items in vector store
Now that we have added documents to our vector store, we can update existing documents using the update_document and update_documents functions.
updated_document = Document(
    page_content="qux",
    metadata={"source": "https://another-example.com"},
)

vector_store.update_document(document_id="1", document=updated_document)

# You can also update multiple documents at once
vector_store.update_documents(ids=["1", "2"], documents=[updated_document, updated_document])
Delete items from vector store
We can also delete items from our vector store as follows:
vector_store.delete(ids=["3"])
Query vector store
Once your vector store has been created and the relevant documents have been added you will most likely wish to query it during the running of your chain or agent.
Query directly
Performing a simple similarity search can be done as follows:
results = vector_store.similarity_search(query="thud", k=1, filter={"source": "https://another-example.com"})
for doc in results:
    print(f"* {doc.page_content} [{doc.metadata}]")
* qux [{'source': 'https://another-example.com'}]
If you want to execute a similarity search and receive the corresponding scores you can run:
results = vector_store.similarity_search_with_score(query="thud", k=1, filter={"source": "https://another-example.com"})
for doc, score in results:
    print(f"* [SIM={score:3f}] {doc.page_content} [{doc.metadata}]")
* [SIM=11593.251037] qux [{'source': 'https://another-example.com'}]
You can also search by vector:
results = vector_store.similarity_search_by_vector(
    embedding=embedding_function.embed_query("thud"), k=1, filter={"source": "https://another-example.com"}
)
for doc in results:
    print(f"* {doc.page_content} [{doc.metadata}]")
* qux [{'source': 'https://another-example.com'}]
There are even more search functions, such as searching by image or using maximum marginal relevance search. For a full list of the search functions provided by the Chroma vector store, please visit the API reference.
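As one example, here is a minimal sketch of calling maximum marginal relevance search directly on the vector store; fetch_k is the number of candidates fetched before re-ranking for diversity.
# MMR fetches fetch_k candidates, then re-ranks them to balance relevance and diversity
results = vector_store.max_marginal_relevance_search(query="thud", k=1, fetch_k=5)
for doc in results:
    print(f"* {doc.page_content} [{doc.metadata}]")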
Query by turning into retriever
You can also transform the vector store into a retriever for easier usage in your chains. For more information on the different search types and kwargs you can pass, please visit the API reference here.
retriever = vector_store.as_retriever(
    search_type="mmr",
    search_kwargs={"k": 1},
)
retriever.invoke("thud")
Number of requested results 20 is greater than number of elements in index 2, updating n_results = 2
[Document(metadata={'source': 'https://another-example.com'}, page_content='qux')]
Using retriever in a simple RAG chain:
from langchain_openai import ChatOpenAI
from langchain import hub
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
llm = ChatOpenAI(model="gpt-3.5-turbo-0125")
prompt = hub.pull("rlm/rag-prompt")
def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)
rag_chain.invoke("thud")
Number of requested results 20 is greater than number of elements in index 2, updating n_results = 2
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
"I'm sorry, I don't have enough information to answer that question."
API reference
For detailed documentation of all Chroma vector store features and configurations head to the API reference: https://api.python.langchain.com/en/latest/vectorstores/langchain_chroma.vectorstores.Chroma.html