
Elasticsearch

Elasticsearch is a distributed, RESTful search and analytics engine, capable of performing both vector and lexical search. It is built on top of the Apache Lucene library.

This notebook shows how to use functionality related to the Elasticsearch vector store.

Setup

To use the Elasticsearch vector search, you must first install the langchain-elasticsearch package.

%pip install -qU langchain-elasticsearch

Note: you may need to restart the kernel to use updated packages.

Credentials

There are two main ways to set up an Elasticsearch instance for use with LangChain:

  1. Elastic Cloud: Elastic Cloud is a managed Elasticsearch service. Sign up for a free trial.

  2. Local install: Get started with Elasticsearch by running it locally. The easiest way is to use the official Elasticsearch Docker image. See the Elasticsearch Docker documentation for more information.

To connect to an Elasticsearch instance that does not require login credentials (a Docker instance started with security disabled), pass the Elasticsearch URL and index name along with the embedding object to the constructor.

Running Elasticsearch via Docker

Example: Run a single-node Elasticsearch instance with security disabled. This is not recommended for production use.

docker run -p 9200:9200 -e "discovery.type=single-node" -e "xpack.security.enabled=false" -e "xpack.security.http.ssl.enabled=false" docker.elastic.co/elasticsearch/elasticsearch:8.12.1

Running with Authentication

For production, we recommend you run with security enabled. To connect with login credentials, you can use the parameters es_api_key or es_user and es_password.

from langchain_elasticsearch import ElasticsearchStore
from langchain_ollama import OllamaEmbeddings
embedding_function = OllamaEmbeddings(model="llama3")

elastic_vector_search = ElasticsearchStore(
    es_url="http://localhost:9200",
    index_name="test_index",
    embedding=embedding_function,
    es_user="elastic",
    es_password="changeme",
)

How to obtain a password for the default "elastic" user?

To obtain your Elastic Cloud password for the default "elastic" user:

  1. Log in to the Elastic Cloud console at https://cloud.elastic.co
  2. Go to "Security" > "Users"
  3. Locate the "elastic" user and click "Edit"
  4. Click "Reset password"
  5. Follow the prompts to reset the password

How to obtain an API key?

To obtain an API key:

  1. Log in to the Elastic Cloud console at https://cloud.elastic.co
  2. Open Kibana and go to Stack Management > API Keys
  3. Click "Create API key"
  4. Enter a name for the API key and click "Create"
  5. Copy the API key and paste it into the api_key parameter
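Since the constructor accepts either an API key or a username/password pair, it can be convenient to centralize credential handling. The sketch below is a hypothetical helper (not part of langchain-elasticsearch) that assembles ElasticsearchStore keyword arguments from a settings dict, preferring an API key over basic auth; the parameter names es_url, es_api_key, es_user, and es_password are the real constructor parameters shown in this guide, while the helper and its ES_* setting names are illustrative.

```python
def es_connection_kwargs(env):
    """Assemble ElasticsearchStore connection kwargs from a settings dict.

    Hypothetical helper: prefers an API key over username/password when
    both are present. Keys mirror the ElasticsearchStore constructor
    parameters (es_url, es_api_key, es_user, es_password).
    """
    kwargs = {"es_url": env.get("ES_URL", "http://localhost:9200")}
    if env.get("ES_API_KEY"):
        kwargs["es_api_key"] = env["ES_API_KEY"]
    else:
        kwargs["es_user"] = env.get("ES_USER", "elastic")
        kwargs["es_password"] = env.get("ES_PASSWORD", "changeme")
    return kwargs

# Usage sketch:
# ElasticsearchStore(index_name="test_index", embedding=embedding_function,
#                    **es_connection_kwargs(os.environ))
```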

Elastic Cloud

To connect to an Elasticsearch instance on Elastic Cloud, you can use either the es_cloud_id parameter or es_url.

elastic_vector_search = ElasticsearchStore(
    es_cloud_id="<cloud_id>",
    index_name="test_index",
    embedding=embedding_function,
    es_user="elastic",
    es_password="changeme",
)

Instantiation

In this example, Elasticsearch is running locally on localhost:9200 via Docker. For details on connecting to Elasticsearch from Elastic Cloud, see the section on connecting with authentication above.

from langchain_elasticsearch import ElasticsearchStore
from langchain_ollama import OllamaEmbeddings
embedding_function = OllamaEmbeddings(model="llama3")

vector_store = ElasticsearchStore(
    index_name="langchain-demo",
    embedding=embedding_function,
    es_url="http://localhost:9200",
)

Manage vector store

Add items to vector store

from langchain_core.documents import Document

document_1 = Document(
    page_content="foo",
    metadata={"source": "https://example.com"},
)

document_2 = Document(
    page_content="bar",
    metadata={"source": "https://another-example.com"},
)

document_3 = Document(
    page_content="baz",
    metadata={"source": "https://example.com"},
)

documents = [document_1, document_2, document_3]
vector_store.add_documents(documents, ids=["1", "2", "3"])
API Reference: Document
['1', '2', '3']

Delete items from vector store

vector_store.delete(ids=["3"])
True

Query vector store

Once your vector store has been created and the relevant documents have been added, you will most likely wish to query it while running your chain or agent. These examples also show how to apply filters when searching.

Query directly

Performing a simple similarity search can be done as follows:

vector_store.client.indices.refresh(index="langchain-demo")
results = vector_store.similarity_search(
    query="thud",
    k=1,
    filter=[{"term": {"metadata.source.keyword": "https://example.com"}}],
)
for doc in results:
    print(f"* {doc.page_content} [{doc.metadata}]")
* foo [{'source': 'https://example.com'}]
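The filter argument here is a list of raw Elasticsearch query clauses. As a sketch of the convention used in the call above (term_filter is an illustrative helper, not part of the library), building a term clause against a metadata field looks like:

```python
def term_filter(field, value):
    # ElasticsearchStore nests document metadata under "metadata." in the
    # index mapping; the ".keyword" subfield matches the exact, unanalyzed
    # string rather than analyzed tokens.
    return {"term": {f"metadata.{field}.keyword": value}}

filters = [term_filter("source", "https://example.com")]
```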

If you want to execute a similarity search and receive the corresponding scores, you can run:

results = vector_store.similarity_search_with_score(
    query="thud",
    k=1,
    filter=[{"term": {"metadata.source.keyword": "https://another-example.com"}}],
)
for doc, score in results:
    print(f"* [SIM={score:3f}] {doc.page_content} [{doc.metadata}]")
* [SIM=0.760690] bar [{'source': 'https://another-example.com'}]
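To interpret the score: assuming the index uses Elasticsearch's default cosine similarity for dense vectors, Elasticsearch rescales the raw cosine value from [-1, 1] into a [0, 1] score as (1 + cosine) / 2, which is why near-matches score close to 1.0. A minimal sketch of that normalization:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def es_cosine_score(query_vec, doc_vec):
    # Elasticsearch maps cosine similarity from [-1, 1] to a [0, 1] score:
    # _score = (1 + cosine(query, doc)) / 2
    return (1 + cosine(query_vec, doc_vec)) / 2
```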

Query by turning into retriever

You can also transform the vector store into a retriever for easier usage in your chains.

retriever = vector_store.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={"score_threshold": 0.2},
)
retriever.invoke("thud")
[Document(page_content='baz'),
Document(page_content='bar baz'),
Document(page_content='bar'),
Document(metadata={'source': 'https://another-example.com'}, page_content='bar')]

Using retriever in a simple RAG chain:

from langchain_openai import ChatOpenAI
from langchain import hub
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough


llm = ChatOpenAI(model="gpt-3.5-turbo-0125")

prompt = hub.pull("rlm/rag-prompt")

def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

rag_chain.invoke("thud")
"I don't know."

FAQ

Question: I'm getting timeout errors when indexing documents into Elasticsearch. How do I fix this?

One possible cause is that large batches of documents take a long time to index. ElasticsearchStore uses the Elasticsearch bulk API, which has a few defaults you can adjust to reduce the chance of timeout errors.

This is also a good idea when you're using SparseVectorRetrievalStrategy.

The defaults are:

  • chunk_size: 500
  • max_chunk_bytes: 100MB

To adjust these, you can pass in the chunk_size and max_chunk_bytes parameters to the ElasticsearchStore add_texts method.

vector_store.add_texts(
    texts,
    bulk_kwargs={
        "chunk_size": 50,
        "max_chunk_bytes": 200000000,
    },
)
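To make chunk_size concrete: the bulk helper sends documents in batches of at most chunk_size items (and, independently, caps each request body at max_chunk_bytes). A rough, illustrative sketch of the count-based batching, not the actual elasticsearch-py helper:

```python
def chunk_documents(docs, chunk_size):
    # Split docs into batches of at most chunk_size items; the real bulk
    # helper also flushes a batch early once it would exceed max_chunk_bytes.
    return [docs[i : i + chunk_size] for i in range(0, len(docs), chunk_size)]

batches = chunk_documents(list(range(120)), 50)  # batch sizes: 50, 50, 20
```

Lowering chunk_size means smaller, faster individual bulk requests, at the cost of more round trips.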

Upgrading to ElasticsearchStore

If you're already using Elasticsearch in your LangChain-based project, you may be using the old implementations ElasticVectorSearch and ElasticKNNSearch, which are now deprecated. We've introduced a new implementation called ElasticsearchStore which is more flexible and easier to use. This notebook guides you through the process of upgrading to the new implementation.

What's new?

The new implementation is a single class, ElasticsearchStore, which can be used for approximate dense vector search, exact dense vector search, sparse vector (ELSER) search, BM25 retrieval, and hybrid retrieval, selected via strategies.
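As a quick reference, the retrieval modes map to strategies roughly as follows (class names assume those exported by recent langchain_elasticsearch versions; check the package for your version):

```python
# Illustrative lookup table, not an API: which strategy argument enables
# which retrieval mode in ElasticsearchStore.
STRATEGY_FOR_MODE = {
    "approximate dense vector (kNN)": "DenseVectorStrategy()",
    "exact dense vector (script score)": "DenseVectorScriptScoreStrategy()",
    "sparse vector (ELSER)": "SparseVectorStrategy()",
    "keyword (BM25)": "BM25Strategy()",
    "hybrid dense + BM25": "DenseVectorStrategy(hybrid=True)",
}
```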

I am using ElasticKNNSearch

Old implementation:


from langchain_community.vectorstores.elastic_vector_search import ElasticKNNSearch

db = ElasticKNNSearch(
    elasticsearch_url="http://localhost:9200",
    index_name="test_index",
    embedding=embedding,
)

New implementation:


from langchain_elasticsearch import ElasticsearchStore, DenseVectorStrategy

db = ElasticsearchStore(
    es_url="http://localhost:9200",
    index_name="test_index",
    embedding=embedding,
    # if you use the model_id
    # strategy=DenseVectorStrategy(model_id="test_model")
    # if you use hybrid search
    # strategy=DenseVectorStrategy(hybrid=True)
)

I am using ElasticVectorSearch

Old implementation:


from langchain_community.vectorstores.elastic_vector_search import ElasticVectorSearch

db = ElasticVectorSearch(
    elasticsearch_url="http://localhost:9200",
    index_name="test_index",
    embedding=embedding,
)

API Reference: ElasticVectorSearch

New implementation:


from langchain_elasticsearch import ElasticsearchStore, DenseVectorScriptScoreStrategy

db = ElasticsearchStore(
    es_url="http://localhost:9200",
    index_name="test_index",
    embedding=embedding,
    strategy=DenseVectorScriptScoreStrategy(),
)

db.client.indices.delete(
    index="test-metadata, test-elser, test-basic",
    ignore_unavailable=True,
    allow_no_indices=True,
)
ObjectApiResponse({'acknowledged': True})

API reference

For detailed documentation of all ElasticsearchStore features and configurations, head to the API reference: https://api.python.langchain.com/en/latest/vectorstores/langchain_elasticsearch.vectorstores.ElasticsearchStore.html

