Milvus
Milvus is a database that stores, indexes, and manages massive embedding vectors generated by deep neural networks and other machine learning (ML) models.
This notebook shows how to use functionality related to the Milvus vector database.
Setup
To use this integration you'll need to install langchain-milvus with pip install -qU langchain-milvus. We will also install the langchain-ollama package to use for embeddings.
%pip install -qU langchain_milvus langchain-ollama
The latest version of pymilvus ships with Milvus Lite, a local vector database that is good for prototyping. If you have a large amount of data, such as more than a million documents, we recommend setting up a more performant Milvus server on Docker or Kubernetes.
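To make the two deployment modes concrete, here is a sketch of the two `connection_args` you would pass below; only the `uri` differs. The server address is an assumption (19530 is the default Milvus gRPC port) and should be adjusted for your deployment.

```python
# Milvus Lite: everything is stored in a single local file (good for prototyping).
lite_args = {"uri": "./milvus_demo.db"}

# Standalone Milvus server (Docker/Kubernetes): point at its endpoint.
# "http://localhost:19530" assumes the default port on a local server.
server_args = {"uri": "http://localhost:19530"}
```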
Credentials
No credentials are needed to use the Milvus vector store.
Instantiation
We will use langchain_ollama embeddings for this notebook since they are free to use.
from langchain_milvus import Milvus
from langchain_ollama import OllamaEmbeddings
embedding_function = OllamaEmbeddings(model="llama3")
# The easiest way is to use Milvus Lite where everything is stored in a local file.
# If you have a Milvus server you can use the server URI such as "http://localhost:19530".
URI = "./milvus_demo.db"
vector_store = Milvus(
embedding_function=embedding_function,
connection_args={"uri": URI},
)
Compartmentalize the data with Milvus Collections
You can store different, unrelated documents in different collections within the same Milvus instance to keep their contexts separate.
Here's how you can create a new collection:
from langchain_core.documents import Document
vector_store_saved = Milvus.from_documents(
[Document(page_content="foo!")],
embedding_function,
collection_name="langchain_example",
connection_args={"uri": URI},
)
And here is how you retrieve that stored collection:
vector_store_loaded = Milvus(
embedding_function,
connection_args={"uri": URI},
collection_name="langchain_example",
)
Manage vector store
Add items to vector store
document_1 = Document(
page_content="foo",
metadata={"source": "https://example.com"}
)
document_2 = Document(
page_content="bar",
metadata={"source": "https://example.com"}
)
document_3 = Document(
page_content="baz",
metadata={"source": "https://example.com"}
)
documents = [document_1, document_2, document_3]
vector_store.add_documents(documents=documents, ids=["1", "2", "3"])
['1', '2', '3']
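If you prefer deterministic IDs over hand-written strings, one common approach (a sketch, not part of the Milvus API — the `stable_id` helper is purely illustrative) is to derive the ID from the document content, so the same document always maps to the same ID:

```python
import hashlib

def stable_id(text: str) -> str:
    # Derive a reproducible ID from the document text itself.
    return hashlib.sha256(text.encode("utf-8")).hexdigest()[:16]

ids = [stable_id(t) for t in ("foo", "bar", "baz")]
# stable_id("foo") is the same on every run, so you can recompute
# a document's ID later without storing a separate mapping.
```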
Delete items from vector store
vector_store.delete(ids=["3"])
(insert count: 0, delete count: 1, upsert count: 0, timestamp: 0, success count: 0, err count: 0, cost: 0)
Query vector store
Once your vector store has been created and the relevant documents have been added you will most likely wish to query it during the running of your chain or agent.
Query directly
results = vector_store.similarity_search(query="thud", k=1, filter={"source": "https://example.com"})
for doc in results:
print(f"* {doc.page_content} [{doc.metadata}]")
* bar [{'pk': '2', 'source': 'https://example.com'}]
If you want to execute a similarity search and receive the corresponding scores you can run:
results = vector_store.similarity_search_with_score(query="thud", k=1, filter={"source": "https://example.com"})
for doc, score in results:
print(f"* [SIM={score:3f}] {doc.page_content} [{doc.metadata}]")
* [SIM=21192.628906] bar [{'pk': '2', 'source': 'https://example.com'}]
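Note that the score returned here is a raw distance under the index's metric (L2 by default in Milvus), so smaller values mean more similar vectors. A minimal sketch of how L2 distance behaves on toy vectors, to help interpret the number:

```python
import math

def l2_distance(a, b):
    # Euclidean (L2) distance: Milvus's default similarity metric.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

close = l2_distance([1.0, 0.0], [1.0, 0.1])   # 0.1
far = l2_distance([1.0, 0.0], [-1.0, 0.0])    # 2.0
# Smaller distance = more similar, unlike cosine similarity
# where a larger score is better.
```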
For a full list of all the search options available when using the Milvus vector store, you can visit the API reference.
Query by turning into retriever
You can also transform the vector store into a retriever for easier usage in your chains.
retriever = vector_store.as_retriever(
search_type="mmr",
search_kwargs={"k": 1}
)
retriever.invoke("thud")
[Document(metadata={'pk': '2', 'source': 'https://example.com'}, page_content='bar')]
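`search_type="mmr"` selects results by maximal marginal relevance, which trades off similarity to the query against diversity among the results already chosen. A pure-Python sketch of the idea (using cosine similarity and the usual `lambda_mult` relevance/diversity weighting; this is an illustration of the algorithm, not the library's implementation):

```python
def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def mmr(query, candidates, k=2, lambda_mult=0.5):
    # Greedily pick vectors that are similar to the query but
    # dissimilar to what has already been selected.
    selected = []
    remaining = list(range(len(candidates)))
    while remaining and len(selected) < k:
        best_i, best_score = None, float("-inf")
        for i in remaining:
            relevance = cosine(query, candidates[i])
            redundancy = max(
                (cosine(candidates[i], candidates[j]) for j in selected),
                default=0.0,
            )
            score = lambda_mult * relevance - (1 - lambda_mult) * redundancy
            if score > best_score:
                best_i, best_score = i, score
        selected.append(best_i)
        remaining.remove(best_i)
    return selected

vecs = [[1.0, 0.0], [0.95, 0.05], [0.5, 0.8]]
# A low lambda_mult favors diversity: after picking the on-query vector,
# MMR skips the near-duplicate and takes the diverse one.
mmr([1.0, 0.0], vecs, k=2, lambda_mult=0.3)  # → [0, 2]
# lambda_mult=1.0 is pure relevance ranking.
mmr([1.0, 0.0], vecs, k=2, lambda_mult=1.0)  # → [0, 1]
```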
Using retriever in a simple RAG chain:
WARNING
To run this cell, you need to be able to use the ChatOpenAI chat model. Learn how to set it up by visiting this page.
from langchain_openai import ChatOpenAI
from langchain import hub
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
llm = ChatOpenAI(model="gpt-3.5-turbo-0125")
prompt = hub.pull("rlm/rag-prompt")
def format_docs(docs):
return "\n\n".join(doc.page_content for doc in docs)
rag_chain = (
{"context": retriever | format_docs, "question": RunnablePassthrough()}
| prompt
| llm
| StrOutputParser()
)
rag_chain.invoke("thud")
'I\'m sorry, I don\'t have enough context to answer the question about "thud" in relation to a bar.'
Per-User Retrieval
When building a retrieval app, you often have to build it with multiple users in mind. This means that you may be storing data not just for one user but for many different users, and they should not be able to see each other's data.
Milvus recommends using partition_key to implement multi-tenancy; here is an example.
The partition key feature is not currently available in Milvus Lite; to use it, you need to start a Milvus server on Docker or Kubernetes.
from langchain_core.documents import Document
docs = [
Document(page_content="i worked at kensho", metadata={"namespace": "harrison"}),
Document(page_content="i worked at facebook", metadata={"namespace": "ankush"}),
]
vectorstore = Milvus.from_documents(
docs,
embedding_function,
connection_args={"uri": URI},
drop_old=True,
partition_key_field="namespace", # Use the "namespace" field as the partition key
)
To conduct a search using the partition key, you should include either of the following in the boolean expression of the search request:
search_kwargs={"expr": '<partition_key> == "xxxx"'}
search_kwargs={"expr": '<partition_key> in ["xxx", "xxx"]'}
Do replace <partition_key> with the name of the field that is designated as the partition key.
Milvus routes each search to the partition determined by the specified partition key, filters entities by that key, and searches among the filtered entities.
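A tiny helper for building these boolean expressions without getting the quoting wrong. It is purely illustrative (not part of langchain-milvus), and assumes tenant values contain no quote characters:

```python
def partition_expr(field: str, values) -> str:
    # Build a Milvus boolean expression for one tenant or a list of tenants.
    # Assumes values contain no embedded double quotes.
    if isinstance(values, str):
        return f'{field} == "{values}"'
    quoted = ", ".join(f'"{v}"' for v in values)
    return f"{field} in [{quoted}]"

partition_expr("namespace", "ankush")                # 'namespace == "ankush"'
partition_expr("namespace", ["ankush", "harrison"])  # 'namespace in ["ankush", "harrison"]'
```

The result can be passed directly as `search_kwargs={"expr": ...}` when creating the retriever.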
# This will only get documents for Ankush
vectorstore.as_retriever(search_kwargs={"expr": 'namespace == "ankush"'}).invoke(
"where did i work?"
)
[Document(page_content='i worked at facebook', metadata={'namespace': 'ankush'})]
# This will only get documents for Harrison
vectorstore.as_retriever(search_kwargs={"expr": 'namespace == "harrison"'}).invoke(
"where did i work?"
)
[Document(page_content='i worked at kensho', metadata={'namespace': 'harrison'})]
API reference
For detailed documentation of all Milvus vector store features and configurations head to the API reference: https://api.python.langchain.com/en/latest/vectorstores/langchain_milvus.vectorstores.milvus.Milvus.html