Pinecone

Pinecone is a vector database with broad functionality.

This notebook shows how to use functionality related to the Pinecone vector database.

Setup

To use the PineconeVectorStore, you first need to install the langchain-pinecone partner package, as well as the other packages used throughout this notebook.

%pip install -qU langchain-pinecone pinecone-notebooks langchain-ollama

Migration note: if you are migrating from the langchain_community.vectorstores implementation of Pinecone, you may need to remove your pinecone-client v2 dependency before installing langchain-pinecone, which relies on pinecone-client v3.
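For example, a minimal sketch of that migration step (assuming the packages were installed from PyPI under their standard names; adjust to your environment):

%pip uninstall -y pinecone-client
%pip install -qU langchain-pinecone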

Credentials

Create a new Pinecone account, or sign into your existing one, and create an API key to use in this notebook.

import getpass
import os
import time

from pinecone import Pinecone, ServerlessSpec

if not os.getenv("PINECONE_API_KEY"):
    os.environ["PINECONE_API_KEY"] = getpass.getpass("Enter your Pinecone API key: ")

pinecone_api_key = os.environ.get("PINECONE_API_KEY")

pc = Pinecone(api_key=pinecone_api_key)

If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:

# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
# os.environ["LANGSMITH_TRACING"] = "true"

Instantiation

Before initializing our vector store, let's connect to a Pinecone index. If an index named index_name doesn't already exist, it will be created.

import time

index_name = "langchain-index" # change if desired

existing_indexes = [index_info["name"] for index_info in pc.list_indexes()]

if index_name not in existing_indexes:
    pc.create_index(
        name=index_name,
        dimension=4096,
        metric="cosine",
        spec=ServerlessSpec(cloud="aws", region="us-east-1"),
    )
    while not pc.describe_index(index_name).status["ready"]:
        time.sleep(1)

index = pc.Index(index_name)

Now that our Pinecone index is set up, we can initialize our vector store. We are using OllamaEmbeddings from langchain-ollama since it does not require payment or an API key. For more information on OllamaEmbeddings, read this page.

from langchain_pinecone import PineconeVectorStore
from langchain_ollama import OllamaEmbeddings

embedding_function = OllamaEmbeddings(model="llama3")

vector_store = PineconeVectorStore(index=index, embedding=embedding_function)
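If the index already exists, you can also build the store from its name rather than an Index object. This is a hedged sketch; it assumes the from_existing_index classmethod on PineconeVectorStore and that PINECONE_API_KEY is set in the environment:

# Sketch: construct the vector store from an existing index by name
vector_store = PineconeVectorStore.from_existing_index(
    index_name=index_name,
    embedding=embedding_function,
)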

Manage vector store

Add items to vector store

from langchain_core.documents import Document

document_1 = Document(
    page_content="foo",
    metadata={"source": "https://example.com"},
)

document_2 = Document(
    page_content="bar",
    metadata={"source": "https://another-example.com"},
)

document_3 = Document(
    page_content="baz",
    metadata={"source": "https://example.com"},
)

documents = [document_1, document_2, document_3]

vector_store.add_documents(documents=documents, ids=["1", "2", "3"])
API Reference: Document
['1', '2', '3']

Delete items from vector store

vector_store.delete(ids=["3"])

Query vector store

Once your vector store has been created and the relevant documents have been added, you will most likely want to query it while running your chain or agent.

Query directly

Performing a simple similarity search can be done as follows:

results = vector_store.similarity_search(
    query="thud", k=1, filter={"source": "https://another-example.com"}
)
for doc in results:
    print(f"* {doc.page_content} [{doc.metadata}]")
* bar [{'source': 'https://another-example.com'}]

If you want to execute a similarity search and receive the corresponding scores, you can run:

results = vector_store.similarity_search_with_score(
    query="thud", k=1, filter={"source": "https://example.com"}
)
for doc, score in results:
    print(f"* [SIM={score:3f}] {doc.page_content} [{doc.metadata}]")
* [SIM=0.425751] foo [{'source': 'https://example.com'}]

There are more search methods (such as MMR) that are not covered in this notebook; to find all of them, be sure to read the API reference. One of them is sketched below.
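As an illustration, here is a hedged sketch of a maximal marginal relevance (MMR) search. It assumes the max_marginal_relevance_search method from the base vector store interface is supported by this store; the k and fetch_k values are arbitrary:

# Sketch: MMR search re-ranks fetched documents for diversity as well as relevance
results = vector_store.max_marginal_relevance_search(
    query="thud",
    k=1,        # number of documents to return
    fetch_k=3,  # number of documents to fetch before re-ranking
)
for doc in results:
    print(f"* {doc.page_content} [{doc.metadata}]")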

Query by turning into retriever

You can also transform the vector store into a retriever for easier usage in your chains.

retriever = vector_store.as_retriever(
    search_type="mmr",
    search_kwargs={"k": 1},
)
retriever.invoke("thud")
[Document(metadata={'source': 'https://another-example.com'}, page_content='bar')]

Using the retriever in a simple RAG chain:

WARNING

To run this cell, you need to be able to use the ChatOpenAI chat model. Learn how to set it up by visiting this page.

from langchain_openai import ChatOpenAI
from langchain import hub
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough


llm = ChatOpenAI(model="gpt-3.5-turbo-0125")

prompt = hub.pull("rlm/rag-prompt")

def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

rag_chain.invoke("thud")
"I'm sorry, I don't have enough information to answer that question."

API reference

For detailed documentation of all PineconeVectorStore features and configurations head to the API reference: https://api.python.langchain.com/en/latest/vectorstores/langchain_pinecone.vectorstores.PineconeVectorStore.html

