Couchbase is an award-winning distributed NoSQL cloud database that delivers unmatched versatility, performance, scalability, and financial value for all of your cloud, mobile, AI, and edge computing applications. Couchbase embraces AI with coding assistance for developers and vector search for their applications. Couchbase provides two different vector store implementations for LangChain:
| Vector Store | Index Type | Minimum Version | Best For |
| --- | --- | --- | --- |
| CouchbaseQueryVectorStore | Hyperscale Vector Index or Composite Vector Index | Couchbase Server 8.0+ | Large-scale pure vector searches or searches combining vector similarity with scalar filters |
| CouchbaseSearchVectorStore | Search Vector Index | Couchbase Server 7.6+ | Hybrid searches combining vector similarity with Full-Text Search (FTS) and geospatial searches |
This tutorial explains how to use Vector Search in Couchbase. You can work with either Couchbase Capella or your self-managed Couchbase Server.

Setup

To access the Couchbase vector stores you first need to install the langchain-couchbase partner package:
pip install langchain-couchbase langchain-openai langchain-community

Credentials

Head over to the Couchbase website and create a new connection, making sure to save your database username and password. You will also need an OpenAI API key for the embeddings. Get one from OpenAI.
import getpass
import os

COUCHBASE_CONNECTION_STRING = getpass.getpass(
    "Enter the connection string for the Couchbase cluster: "
)
DB_USERNAME = getpass.getpass("Enter the username for the Couchbase cluster: ")
DB_PASSWORD = getpass.getpass("Enter the password for the Couchbase cluster: ")
OPENAI_API_KEY = getpass.getpass("Enter your OpenAI API key: ")

os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
Enter the connection string for the Couchbase cluster:  ········
Enter the username for the Couchbase cluster:  ········
Enter the password for the Couchbase cluster:  ········
Enter your OpenAI API key:  ········
If you want to get best-in-class automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:
# os.environ["LANGSMITH_TRACING"] = "true"
# os.environ["LANGSMITH_API_KEY"] = getpass.getpass()

Create Couchbase Connection Object

We create a connection to the Couchbase cluster initially and then pass the cluster object to the Vector Store. Here, we are connecting using the username and password from above. You can also connect using any other supported way to your cluster. For more information on connecting to the Couchbase cluster, please check the documentation.
from datetime import timedelta

from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions

auth = PasswordAuthenticator(DB_USERNAME, DB_PASSWORD)
options = ClusterOptions(auth)
options.apply_profile("wan_development")
cluster = Cluster(COUCHBASE_CONNECTION_STRING, options)

# Wait until the cluster is ready for use.
cluster.wait_until_ready(timedelta(seconds=5))
We will now set the bucket, scope, and collection names in the Couchbase cluster that we want to use for Vector Search. For this example, we are using the default scope & collections.
BUCKET_NAME = "langchain_bucket"
SCOPE_NAME = "_default"
COLLECTION_NAME = "_default"

CouchbaseQueryVectorStore

CouchbaseQueryVectorStore enables the usage of Couchbase for Vector Search using the Query and Indexing Service. It supports two different types of vector indexes:
  • Hyperscale Vector Index - Optimized for pure vector searches on large datasets (billions of documents). Best for content discovery, recommendations, and applications requiring high accuracy with low memory footprint. Hyperscale Vector indexes compare vectors and scalar values simultaneously.
  • Composite Vector Index - Combines a Global Secondary Index (GSI) with a vector column. Ideal for searches combining vector similarity with scalar filters where scalars filter out large portions of the dataset. Composite Vector indexes apply scalar filters first, then perform vector searches on the filtered results.
For guidance on choosing the right index type, see Choose the Right Vector Index in the Couchbase documentation. Requirements: Couchbase Server version 8.0 and above. For more information on indexes, refer to the Couchbase documentation on Hyperscale and Composite Vector indexes.

Initialization

Below, we create the vector store object with the cluster information and the distance metric. First, set up the embeddings (if not already done):
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(model="text-embedding-3-large")
Then create the vector store:
from langchain_couchbase import CouchbaseQueryVectorStore
from langchain_couchbase.vectorstores import DistanceStrategy

vector_store = CouchbaseQueryVectorStore(
    cluster=cluster,
    bucket_name=BUCKET_NAME,
    scope_name=SCOPE_NAME,
    collection_name=COLLECTION_NAME,
    embedding=embeddings,
    distance_metric=DistanceStrategy.DOT,
)

Distance Strategies

The CouchbaseQueryVectorStore supports the following distance strategies via the DistanceStrategy enum:
| Strategy | Description |
| --- | --- |
| DistanceStrategy.DOT | Dot product similarity |
| DistanceStrategy.COSINE | Cosine similarity |
| DistanceStrategy.EUCLIDEAN | Euclidean distance (equivalent to L2) |
| DistanceStrategy.EUCLIDEAN_SQUARED | Squared Euclidean distance (equivalent to L2_SQUARED) |

Specify the Text & Embeddings Field

You can optionally specify the text and embedding fields for the document using the text_key and embedding_key parameters.
vector_store_specific = CouchbaseQueryVectorStore(
    cluster=cluster,
    bucket_name=BUCKET_NAME,
    scope_name=SCOPE_NAME,
    collection_name=COLLECTION_NAME,
    embedding=embeddings,
    distance_metric=DistanceStrategy.COSINE,
    text_key="text",
    embedding_key="embedding",
)

Manage vector store

Once you have created your vector store, you can interact with it by adding and deleting items.
Add items to vector store
You can add items to the vector store using the add_documents function.
from uuid import uuid4

from langchain_core.documents import Document

document_1 = Document(page_content="foo", metadata={"baz": "bar"})
document_2 = Document(page_content="thud", metadata={"bar": "baz"})
document_3 = Document(page_content="i will be deleted :(")

documents = [document_1, document_2, document_3]
ids = ["1", "2", "3"]
vector_store.add_documents(documents=documents, ids=ids)
Create Vector Index
Important: The vector index must be created after adding documents to the vector store. Use the create_index() method after adding your documents to enable efficient vector searches.
from langchain_couchbase.vectorstores import IndexType

# Create a Hyperscale Vector Index
vector_store.create_index(
    index_type=IndexType.HYPERSCALE,
    index_description="IVF,SQ8",
)
Or create a Composite Vector Index:
# Create a Composite Vector Index
vector_store.create_index(
    index_type=IndexType.COMPOSITE,
    index_description="IVF,SQ8",
)
Delete items from vector store
vector_store.delete(ids=["3"])

Query vector store

Similarity search
Performing a simple similarity search can be done as follows:
results = vector_store.similarity_search(query="thud", k=1)
for doc in results:
    print(f"* {doc.page_content} [{doc.metadata}]")
* thud [{'bar': 'baz'}]
Similarity search with filter
You can filter results using a SQL++ WHERE clause with the where_str parameter:
results = vector_store.similarity_search(
    query="thud", k=1, where_str="metadata.bar = 'baz'"
)
for doc in results:
    print(f"* {doc.page_content} [{doc.metadata}]")
* thud [{'bar': 'baz'}]
Similarity search with score
You can fetch the distance scores for the results by calling the similarity_search_with_score method. Lower distances indicate more similar documents.
results = vector_store.similarity_search_with_score(query="qux", k=1)
for doc, score in results:
    print(f"* [DIST={score:3f}] {doc.page_content} [{doc.metadata}]")
* [DIST=-0.500724] foo [{'baz': 'bar'}]

Async Operations

CouchbaseQueryVectorStore supports async operations:
# add documents
await vector_store.aadd_documents(documents=documents, ids=ids)

# delete documents
await vector_store.adelete(ids=["3"])

# search
results = await vector_store.asimilarity_search(query="thud", k=1)

# search with score
results = await vector_store.asimilarity_search_with_score(query="qux", k=1)
for doc, score in results:
    print(f"* [DIST={score:3f}] {doc.page_content} [{doc.metadata}]")
* [DIST=-0.500724] foo [{'baz': 'bar'}]

Use as Retriever

You can transform the vector store into a retriever:
retriever = vector_store.as_retriever(
    search_kwargs={"k": 1, "fetch_k": 2, "lambda_mult": 0.5},
)
retriever.invoke("thud")
[Document(id='2', metadata={'bar': 'baz'}, page_content='thud')]

Create from texts

You can create a CouchbaseQueryVectorStore directly from a list of texts:
texts = ["hello", "world"]

vectorstore = CouchbaseQueryVectorStore.from_texts(
    texts,
    embedding=embeddings,
    cluster=cluster,
    bucket_name=BUCKET_NAME,
    scope_name=SCOPE_NAME,
    collection_name=COLLECTION_NAME,
    distance_metric=DistanceStrategy.COSINE,
)

CouchbaseSearchVectorStore

CouchbaseSearchVectorStore enables the usage of Couchbase for Vector Search using Search Vector Indexes. Search Vector Indexes combine a Couchbase Search index with a vector column, allowing hybrid searches that combine vector searches with Full-Text Search (FTS) and geospatial searches. Requirements: Couchbase Server version 7.6 and above. For details on how to create a Search index with support for Vector fields, please refer to the Couchbase documentation.

Search Index Field Mappings for This Tutorial

To follow along with the examples in this documentation, your Search index should include mappings for the following fields:
| Field | Type | Description |
| --- | --- | --- |
| text | text | The document text content |
| embedding | vector | The vector embedding field (dimensions: 3072 for text-embedding-3-large) |
| metadata | object (child mapping) | The metadata object with child fields like source, author, rating, date |
Notes:
  • The vector field dimensions must match your embedding model (3072 for text-embedding-3-large used in this tutorial)
  • The metadata child fields (source, author, rating, date) are needed for the hybrid query examples
  • You can customize field names using the text_key and embedding_key parameters when initializing the vector store
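For orientation, the relevant mapping section of such a Search index definition might look roughly like the sketch below, written here as a Python dict for illustration (the actual definition is JSON that you can build or import in the Couchbase UI). This is an assumption-laden, abbreviated sketch: the exact keys and defaults can vary between server versions, so rely on the Couchbase documentation or the index editor for the authoritative schema.
# Illustrative, abbreviated sketch of the index mapping (assumption: exact keys and
# defaults may differ between Couchbase versions; generate the real definition in the
# Couchbase UI or from the documentation).
search_index_mapping_sketch = {
    "types": {
        "_default._default": {  # the scope.collection that the index covers
            "enabled": True,
            "dynamic": True,
            "properties": {
                "text": {
                    "enabled": True,
                    "fields": [{"name": "text", "type": "text", "index": True, "store": True}],
                },
                "embedding": {
                    "enabled": True,
                    "fields": [
                        {
                            "name": "embedding",
                            "type": "vector",
                            "dims": 3072,  # must match the embedding model
                            "similarity": "dot_product",
                            "index": True,
                        }
                    ],
                },
                # Child mapping for metadata; index the child fields used in the
                # hybrid query examples (source, author, rating, date).
                "metadata": {"enabled": True, "dynamic": True},
            },
        }
    }
}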

Initialization

Below, we create the vector store object with the cluster information and the search index name. First, set up the embeddings:
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(model="text-embedding-3-large")
Then create the vector store:
from langchain_couchbase import CouchbaseSearchVectorStore

SEARCH_INDEX_NAME = "langchain-test-index"

vector_store = CouchbaseSearchVectorStore(
    cluster=cluster,
    bucket_name=BUCKET_NAME,
    scope_name=SCOPE_NAME,
    collection_name=COLLECTION_NAME,
    embedding=embeddings,
    index_name=SEARCH_INDEX_NAME,
)

Specify the text & embeddings field

You can optionally specify the text and embedding fields for the document using the text_key and embedding_key parameters.
vector_store_specific = CouchbaseSearchVectorStore(
    cluster=cluster,
    bucket_name=BUCKET_NAME,
    scope_name=SCOPE_NAME,
    collection_name=COLLECTION_NAME,
    embedding=embeddings,
    index_name=SEARCH_INDEX_NAME,
    text_key="text",
    embedding_key="embedding",
)

Manage vector store

Once you have created your vector store, you can interact with it by adding and deleting items.
Add items to vector store
You can add items to the vector store using the add_documents function.
from uuid import uuid4

from langchain_core.documents import Document

document_1 = Document(
    page_content="I had chocolate chip pancakes and scrambled eggs for breakfast this morning.",
    metadata={"source": "tweet"},
)

document_2 = Document(
    page_content="The weather forecast for tomorrow is cloudy and overcast, with a high of 62 degrees.",
    metadata={"source": "news"},
)

document_3 = Document(
    page_content="Building an exciting new project with LangChain - come check it out!",
    metadata={"source": "tweet"},
)

document_4 = Document(
    page_content="Robbers broke into the city bank and stole $1 million in cash.",
    metadata={"source": "news"},
)

document_5 = Document(
    page_content="Wow! That was an amazing movie. I can't wait to see it again.",
    metadata={"source": "tweet"},
)

document_6 = Document(
    page_content="Is the new iPhone worth the price? Read this review to find out.",
    metadata={"source": "website"},
)

document_7 = Document(
    page_content="The top 10 soccer players in the world right now.",
    metadata={"source": "website"},
)

document_8 = Document(
    page_content="LangGraph is the best framework for building stateful, agentic applications!",
    metadata={"source": "tweet"},
)

document_9 = Document(
    page_content="The stock market is down 500 points today due to fears of a recession.",
    metadata={"source": "news"},
)

document_10 = Document(
    page_content="I have a bad feeling I am going to get deleted :(",
    metadata={"source": "tweet"},
)

documents = [
    document_1,
    document_2,
    document_3,
    document_4,
    document_5,
    document_6,
    document_7,
    document_8,
    document_9,
    document_10,
]
uuids = [str(uuid4()) for _ in range(len(documents))]

vector_store.add_documents(documents=documents, ids=uuids)
['f125b836-f555-4449-98dc-cbda4e77ae3f',
 'a28fccde-fd32-4775-9ca8-6cdb22ca7031',
 'b1037c4b-947f-497f-84db-63a4def5080b',
 'c7082b74-b385-4c4b-bbe5-0740909c01db',
 'a7e31f62-13a5-4109-b881-8631aff7d46c',
 '9fcc2894-fdb1-41bd-9a93-8547747650f4',
 'a5b0632d-abaf-4802-99b3-df6b6c99be29',
 '0475592e-4b7f-425d-91fd-ac2459d48a36',
 '94c6db4e-ba07-43ff-aa96-3a5d577db43a',
 'd21c7feb-ad47-4e7d-84c5-785afb189160']
Delete items from vector store
vector_store.delete(ids=[uuids[-1]])
True

Query vector store

Once your vector store has been created and the relevant documents have been added, you will most likely wish to query it while running your chain or agent.
Similarity search
Performing a simple similarity search can be done as follows:
results = vector_store.similarity_search(
    "LangChain provides abstractions to make working with LLMs easy",
    k=2,
)
for res in results:
    print(f"* {res.page_content} [{res.metadata}]")
* Building an exciting new project with LangChain - come check it out! [{'source': 'tweet'}]
* LangGraph is the best framework for building stateful, agentic applications! [{'source': 'tweet'}]
Similarity search with score
You can also fetch the scores for the results by calling the similarity_search_with_score method.
results = vector_store.similarity_search_with_score("Will it be hot tomorrow?", k=1)
for res, score in results:
    print(f"* [SIM={score:3f}] {res.page_content} [{res.metadata}]")
* [SIM=0.553213] The weather forecast for tomorrow is cloudy and overcast, with a high of 62 degrees. [{'source': 'news'}]

Filtering results

You can filter the search results by specifying any filter on the text or metadata in the document that is supported by the Couchbase Search service. The filter can be any valid SearchQuery supported by the Couchbase Python SDK. These filters are applied before the Vector Search is performed. If you want to filter on one of the fields in the metadata, you need to specify it using the metadata. prefix. For example, to filter on the source field in the metadata, you need to specify metadata.source. Note that the filter needs to be supported by the Search index.
from couchbase import search

query = "Are there any concerning financial news?"
filter_on_source = search.MatchQuery("news", field="metadata.source")
results = vector_store.similarity_search_with_score(
    query, fields=["metadata.source"], filter=filter_on_source, k=5
)
for res, score in results:
    print(f"* {res.page_content} [{res.metadata}] {score}")
* The stock market is down 500 points today due to fears of a recession. [{'source': 'news'}] 0.38733142614364624
* Robbers broke into the city bank and stole $1 million in cash. [{'source': 'news'}] 0.20637883245944977
* The weather forecast for tomorrow is cloudy and overcast, with a high of 62 degrees. [{'source': 'news'}] 0.10403035581111908

Specifying fields to return

You can specify the fields to return from the document using the fields parameter in the searches. These fields are returned as part of the metadata object in the returned Document. You can fetch any field that is stored in the Search index. The text_key of the document is returned as part of the document's page_content. If you do not specify any fields to be fetched, all the fields stored in the index are returned. If you want to fetch one of the fields in the metadata, you need to specify it using the metadata. prefix. For example, to fetch the source field in the metadata, you need to specify metadata.source.
query = "What did I eat for breakfast today?"
results = vector_store.similarity_search(query, fields=["metadata.source"])
print(results[0])
page_content='I had chocolate chip pancakes and scrambled eggs for breakfast this morning.' metadata={'source': 'tweet'}

Query by turning into retriever

You can also transform the vector store into a retriever for easier usage in your chains. Here is how to transform your vector store into a retriever and then invoke the retriever with a simple query and filter.
retriever = vector_store.as_retriever(
    search_type="similarity",
    search_kwargs={"k": 1, "score_threshold": 0.5},
)
filter_on_source = search.MatchQuery("news", field="metadata.source")
retriever.invoke("Stealing from the bank is a crime", filter=filter_on_source)
[Document(id='b480c9c6-b7df-4a22-ac2e-19287af7562d', metadata={'source': 'news'}, page_content='Robbers broke into the city bank and stole $1 million in cash.')]

Hybrid queries

Couchbase allows you to do hybrid searches by combining Vector Search results with searches on non-vector fields of the document, like the metadata object. The results are based on the combination of the results from both the Vector Search and the searches supported by the Search Service. The scores of each of the component searches are added up to get the total score of the result. To perform hybrid searches, there is an optional parameter, search_options, that can be passed to all the similarity searches. The different search/query possibilities for search_options can be found in the Couchbase Search documentation.
Create Diverse Metadata for Hybrid Search
In order to demonstrate hybrid search, let us create documents with diverse metadata. We add three fields to the metadata: date between 2010 & 2020, rating between 1 & 5, and author set to either John Doe or Jane Doe.
from langchain_core.documents import Document

# Create documents with diverse metadata for hybrid search examples
hybrid_docs = [
    Document(
        page_content="The new AI model shows impressive performance on benchmark tests.",
        metadata={"source": "tech", "date": "2019-01-01", "rating": 5, "author": "John Doe"},
    ),
    Document(
        page_content="Stock markets showed mixed results today with tech sector leading gains.",
        metadata={"source": "finance", "date": "2017-01-01", "rating": 3, "author": "Jane Doe"},
    ),
    Document(
        page_content="The annual developer conference announced new framework updates.",
        metadata={"source": "tech", "date": "2018-01-01", "rating": 4, "author": "John Doe"},
    ),
    Document(
        page_content="Weather patterns indicate a mild winter ahead for the region.",
        metadata={"source": "weather", "date": "2016-01-01", "rating": 2, "author": "Jane Doe"},
    ),
    Document(
        page_content="The new smartphone release features advanced camera technology.",
        metadata={"source": "tech", "date": "2020-01-01", "rating": 4, "author": "John Doe"},
    ),
    Document(
        page_content="Economic indicators suggest steady growth in the coming quarter.",
        metadata={"source": "finance", "date": "2017-01-01", "rating": 3, "author": "Jane Doe"},
    ),
]

vector_store.add_documents(hybrid_docs)

query = "Tell me about technology news"
results = vector_store.similarity_search(query)
print(results[0].metadata)
{'author': 'John Doe', 'date': '2020-01-01', 'rating': 4, 'source': 'tech'}
Query by Exact Value
We can search for exact matches on a textual field like the author in the metadata object.
query = "What are the latest technology updates?"
results = vector_store.similarity_search(
    query,
    search_options={"query": {"field": "metadata.author", "match": "John Doe"}},
    fields=["metadata.author"],
)
print(results[0])
page_content='The new smartphone release features advanced camera technology.' metadata={'author': 'John Doe'}
Query by Partial Match
We can search for partial matches by specifying a fuzziness for the search. This is useful when you want to search for slight variations or misspellings of a search query. Here, "Jae" is close (fuzziness of 1) to "Jane".
query = "What are the financial market updates?"
results = vector_store.similarity_search(
    query,
    search_options={
        "query": {"field": "metadata.author", "match": "Jae", "fuzziness": 1}
    },
    fields=["metadata.author"],
)
print(results[0])
page_content='Stock markets showed mixed results today with tech sector leading gains.' metadata={'author': 'Jane Doe'}
Query by Date Range
We can search for documents within a date range on a date field like metadata.date.
query = "What happened in the markets?"
results = vector_store.similarity_search(
    query,
    search_options={
        "query": {
            "start": "2016-12-31",
            "end": "2018-01-02",
            "inclusive_start": True,
            "inclusive_end": False,
            "field": "metadata.date",
        }
    },
)
print(results[0])
page_content='Stock markets showed mixed results today with tech sector leading gains.' metadata={'author': 'Jane Doe', 'date': '2017-01-01', 'rating': 3, 'source': 'finance'}
Query by Numeric Range
We can search for documents within a range on a numeric field like metadata.rating.
query = "What are the economic indicators for the coming quarter?"
results = vector_store.similarity_search_with_score(
    query,
    search_options={
        "query": {
            "min": 4,
            "max": 5,
            "inclusive_min": True,
            "inclusive_max": True,
            "field": "metadata.rating",
        }
    },
)
print(results[0])
(Document(id='6aeb8413bce340bc893f175cefbb64b3', metadata={'author': 'Jane Doe', 'date': '2017-01-01', 'rating': 3, 'source': 'finance'}, page_content='Economic indicators suggest steady growth in the coming quarter.'), 0.7944117188453674)
Combining Multiple Search Queries
Different search queries can be combined using AND (conjuncts) or OR (disjuncts) operators. In this example, we are checking for documents with a rating between 3 & 4 and dated in 2017.
query = "Tell me about finance"
results = vector_store.similarity_search_with_score(
    query,
    search_options={
        "query": {
            "conjuncts": [
                {"min": 3, "max": 4, "inclusive_max": True, "field": "metadata.rating"},
                {"start": "2016-12-31", "end": "2018-01-01", "field": "metadata.date"},
            ]
        }
    },
)
print(results[0])
(Document(id='0c9af73370c1483caddf9941440edb50', metadata={'author': 'Jane Doe', 'date': '2017-01-01', 'rating': 3, 'source': 'finance'}, page_content='Stock markets showed mixed results today with tech sector leading gains.'), 0.7275013146103568)
Note: The hybrid search results might contain documents that do not satisfy all the search parameters. This is because of how the score is calculated: it is the sum of the vector search score and the scores of the queries in the hybrid search. If the vector search score is high, the combined score can be higher than that of results matching all the queries in the hybrid search. To avoid such results, use the filter parameter instead of hybrid search.
Combining Hybrid Search Query with Filters
Hybrid search can be combined with filters to get the best of both: hybrid search scoring together with results guaranteed to match the filter. In this example, we are checking for documents with a rating between 3 & 5 that match the string "market" in the text field.
filter_text = search.MatchQuery("market", field="text")

query = "Tell me about market updates"
results = vector_store.similarity_search_with_score(
    query,
    search_options={
        "query": {
            "min": 3,
            "max": 5,
            "inclusive_min": True,
            "inclusive_max": True,
            "field": "metadata.rating",
        }
    },
    filter=filter_text,
)

print(results[0])
(Document(id='0c9af73370c1483caddf9941440edb50', metadata={'author': 'Jane Doe', 'date': '2017-01-01', 'rating': 3, 'source': 'finance'}, page_content='Stock markets showed mixed results today with tech sector leading gains.'), 0.4503188681265006)
Other Queries
Similarly, you can use any of the supported query methods, such as Geo Distance, Polygon Search, Wildcard, and Regular Expressions, in the search_options parameter. Please refer to the documentation for more details on the available query methods and their syntax.
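As an illustrative sketch, a wildcard query on the text field can be expressed in the same dictionary form used above (this assumes the standard Couchbase Search JSON query syntax; check the documentation for your server version):
# Hybrid search combining the vector query with a wildcard match on the text field
query = "Tell me about the markets"
results = vector_store.similarity_search_with_score(
    query,
    search_options={"query": {"wildcard": "mark*", "field": "text"}},
)
for res, score in results:
    print(f"* {res.page_content} [{res.metadata}] {score}")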

Usage for retrieval-augmented generation

For guides on how to use these vector stores for retrieval-augmented generation (RAG), see the LangChain RAG tutorials and how-to guides.
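As a minimal sketch (assuming a chat model such as gpt-4o-mini via langchain-openai; swap in any model you prefer), the vector store's retriever can be wired into a simple LCEL chain:
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")
prompt = ChatPromptTemplate.from_template(
    "Answer the question using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
)

retriever = vector_store.as_retriever(search_kwargs={"k": 3})


def format_docs(docs):
    # Concatenate the retrieved documents into a single context string
    return "\n\n".join(doc.page_content for doc in docs)


rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

rag_chain.invoke("What is the weather forecast for tomorrow?")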

Frequently Asked Questions

Question: Should I create the search index before creating the CouchbaseSearchVectorStore object?

Yes, you need to create the Search index before creating the CouchbaseSearchVectorStore object.
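If you want to verify that the index exists before constructing the store, one possible check using the SDK's search index manager looks like the sketch below (an assumption-level example for a cluster-level Search index; scoped indexes are managed through the scope instead):
# List the Search indexes defined on the cluster and check that ours is present
index_names = [idx.name for idx in cluster.search_indexes().get_all_indexes()]
if SEARCH_INDEX_NAME not in index_names:
    raise ValueError(f"Search index '{SEARCH_INDEX_NAME}' not found - create it first")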

Question: Should I create the index before or after adding documents to CouchbaseQueryVectorStore?

For CouchbaseQueryVectorStore, you should create the index after adding documents using the create_index() method. This is different from CouchbaseSearchVectorStore.
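Recapping the order for CouchbaseQueryVectorStore with the calls shown earlier in this guide:
# 1. Add the documents first...
vector_store.add_documents(documents=documents, ids=ids)

# 2. ...then build the vector index over the stored embeddings
vector_store.create_index(
    index_type=IndexType.HYPERSCALE,
    index_description="IVF,SQ8",
)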

Question: What is the difference between CouchbaseSearchVectorStore and CouchbaseQueryVectorStore?

| Feature | CouchbaseSearchVectorStore | CouchbaseQueryVectorStore |
| --- | --- | --- |
| Minimum Version | Couchbase Server 7.6+ | Couchbase Server 8.0+ |
| Index Type | Search Vector Index | Hyperscale or Composite Vector Index |
| Index Creation | Before vector store creation | After adding documents |
| Filtering | SearchQuery objects | SQL++ WHERE clauses (where_str) |
| Best For | Hybrid searches (vector + FTS + geo) | Large-scale pure vector searches or vector + scalar filters |

Question: I am not seeing all the fields that I specified in my search results

In Couchbase, we can only return the fields stored in the Search index. Please ensure that the field that you are trying to access in the search results is part of the Search index. One way to handle this is to index and store a document’s fields dynamically in the index.
  • In Capella, you need to go to “Advanced Mode” then under the chevron “General Settings” you can check “[X] Store Dynamic Fields” or “[X] Index Dynamic Fields”
  • In Couchbase Server, in the Index Editor (not Quick Editor) under the chevron “Advanced” you can check “[X] Store Dynamic Fields” or “[X] Index Dynamic Fields”
Note that these options will increase the size of the index. For more details on dynamic mappings, please refer to the documentation.

Question: I am unable to see the metadata object in my search results

This is most likely because the metadata field in the document is not being indexed and/or stored by the Couchbase Search index. To index the metadata field in the document, you need to add it to the index as a child mapping. If you choose to map all the fields in the mapping, you will be able to search by all metadata fields. Alternatively, to optimize the index, you can select the specific fields inside the metadata object to be indexed. Refer to the documentation on creating child mappings to learn more.

Question: What is the difference between filter and search_options / hybrid queries?

Filters are pre-filters used to restrict the documents searched in a Search index; they are available in Couchbase Server 7.6.4 and higher. Hybrid queries are additional search queries that can be used to tune the results returned from the search index. Both have the same capabilities with slightly different syntax: filters are SearchQuery objects, while hybrid search queries are dictionaries.
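To make the distinction concrete, here is the same restriction on metadata.source expressed both ways, reusing the patterns shown earlier in this guide:
from couchbase import search

# As a pre-filter: a SearchQuery object, applied before the vector search
filter_on_source = search.MatchQuery("news", field="metadata.source")
results = vector_store.similarity_search("market news", k=3, filter=filter_on_source)

# As a hybrid query: a dictionary, scored together with the vector search results
results = vector_store.similarity_search(
    "market news",
    k=3,
    search_options={"query": {"field": "metadata.source", "match": "news"}},
)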

API reference

For detailed documentation of all features and configurations, refer to the langchain-couchbase API reference.