The `showPerformanceDetails` parameter returns per-stage timing information so you can pinpoint bottlenecks without guesswork.
## How it works
Set `showPerformanceDetails` to `true` in any search request. Meilisearch will include a `performanceDetails` object in the response, breaking down how much time each stage of the search pipeline consumed.
This parameter is supported on all search routes:
- `POST /indexes/{indexUid}/search`
- `GET /indexes/{indexUid}/search`
- `POST /multi-search`
- `POST /indexes/{indexUid}/similar`
- `GET /indexes/{indexUid}/similar`
## Basic usage
Add `showPerformanceDetails: true` to a standard search request.
The response includes a `performanceDetails` object alongside the usual search results.
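A minimal sketch of a profiled request and the shape of the resulting `performanceDetails` object. The index name, query, and all duration values below are illustrative, and the exact duration formatting in real responses may differ:

```python
import json

# Hypothetical body for POST /indexes/movies/search with profiling enabled.
request_body = {
    "q": "wonder",
    "showPerformanceDetails": True,
}

# Illustrative shape of the performanceDetails object in the response;
# stage names are hierarchical, and the durations here are made up.
performance_details = {
    "wait in queue": "10.5µs",
    "search": "3.2ms",
    "search > tokenize query": "28.0µs",
    "search > evaluate query": "0.9ms",
    "search > keyword ranking": "1.8ms",
    "search > format": "0.3ms",
}

print(json.dumps(request_body))
```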
## Understanding performance stages
Each key in `performanceDetails` represents a stage of the search pipeline. Stage names are hierarchical, using `>` as a separator (e.g., `search > keyword ranking`).
### Top-level stages
| Stage | Description |
|---|---|
| `wait in queue` | Time spent waiting in the search queue. Meilisearch limits concurrent searches, so a high value here means your instance is handling too many simultaneous queries. |
| `search` | Total time for the entire search operation, including all sub-stages below. |
| `similar` | Total time for a similar documents request (reported instead of `search`). |
### Search sub-stages
These appear as children of the `search` stage. Not all stages appear in every query; Meilisearch only reports stages that were actually executed.
| Stage | Description |
|---|---|
| `search > tokenize query` | Breaking the query string into individual tokens. Typically very fast unless the query is unusually long. |
| `search > embed query` | Generating vector embeddings for the query. Only appears when using hybrid or semantic search. Duration depends on your embedder provider and network latency. |
| `search > evaluate filter` | Evaluating filter expressions to narrow the candidate set. Complex filters or many filterable attributes increase this time. |
| `search > evaluate query` | Retrieving the set of documents matching the query. This combines filter results with the full document set to establish which documents are eligible for ranking. |
| `search > keyword ranking` | Ranking candidates using the keyword ranking rules. Often the most significant stage for broad queries on large datasets. |
| `search > placeholder ranking` | Ranking candidates using sort and custom ranking rules (placeholder search). Appears instead of keyword ranking when `q` is empty or missing. |
| `search > semantic ranking` | Ranking candidates by vector similarity with the query embedding. Only appears when using hybrid or semantic search. |
| `search > personalization` | Applying search personalization to re-rank results based on user context. Only appears when personalization is configured. |
| `search > facet distribution` | Computing facet value counts for the `facets` parameter. Cost scales with the number of faceted attributes and unique values. See `maxValuesPerFacet`. |
| `search > format` | Formatting results: highlighting, cropping, and building the response payload. Cost scales with the number of attributes to highlight or crop and the size of document fields. |
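Because top-level stages aggregate their children, the sub-stage entries are the useful ones when hunting for a bottleneck. A small sketch, assuming durations have already been parsed into floats (milliseconds); the real API returns formatted duration strings, so parse them first:

```python
# Find the slowest sub-stage in a performanceDetails object.
def slowest_stage(details):
    # Only compare sub-stages (keys containing the '>' separator), because
    # top-level stages like 'search' aggregate their children.
    sub_stages = {k: v for k, v in details.items() if ">" in k}
    return max(sub_stages, key=sub_stages.get)

# Hypothetical, pre-parsed timings in milliseconds.
details = {
    "wait in queue": 0.01,
    "search": 5.0,
    "search > tokenize query": 0.05,
    "search > evaluate query": 1.2,
    "search > keyword ranking": 3.1,
    "search > format": 0.4,
}
print(slowest_stage(details))  # → search > keyword ranking
```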
### Federated search stages
When using `showPerformanceDetails` at the federation level, you see these stages instead:
| Stage | Description |
|---|---|
| `federating results > partition queries` | Organizing queries by index and remote host. |
| `federating results > start remote search` | Initiating search requests to remote Meilisearch instances. Only appears when using network search. |
| `federating results > execute local search` | Executing queries against local indexes. |
| `federating results > wait for remote results` | Waiting for remote instances to respond. High values indicate network latency or slow remote instances. |
| `federating results > merge results` | Merging and deduplicating results from all sources into a single ranked list. |
| `federating results > hydrate documents` | Fetching full document data, including linked index joins. |
| `federating results > merge facets` | Combining facet distributions from all sources. |
Multiple occurrences of the same stage (e.g., multiple `search > keyword ranking` entries in a federated query) are automatically accumulated into a single total duration.

## Multi-search
In multi-search requests, set `showPerformanceDetails` on each individual query that you want to profile.
Each profiled query's results include their own `performanceDetails` object, letting you compare timing across indexes and queries.
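A sketch of a multi-search body with per-query profiling; the index names and query text are placeholders:

```python
import json

# Hypothetical body for POST /multi-search: the flag is set on each query,
# so you can compare timings between the two indexes.
body = {
    "queries": [
        {"indexUid": "movies", "q": "wonder", "showPerformanceDetails": True},
        {"indexUid": "books", "q": "wonder", "showPerformanceDetails": True},
    ]
}
print(json.dumps(body, indent=2))
```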
## Federated search
For federated multi-search, set `showPerformanceDetails` in the `federation` object to get timing details for the combined search.
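A sketch of a federated multi-search body; the index names, query text, and limit are placeholders:

```python
import json

# Hypothetical body for a federated POST /multi-search request: the flag
# goes in the federation object rather than on individual queries.
body = {
    "federation": {"limit": 20, "showPerformanceDetails": True},
    "queries": [
        {"indexUid": "movies", "q": "wonder"},
        {"indexUid": "books", "q": "wonder"},
    ],
}
print(json.dumps(body, indent=2))
```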
## Similar documents
The similar documents endpoint also supports `showPerformanceDetails`.
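A sketch of a similar documents request body with profiling enabled; the document id and embedder name are placeholders, and the response reports the `similar` stage instead of `search`:

```python
import json

# Hypothetical body for POST /indexes/movies/similar.
body = {
    "id": "143",           # placeholder target document id
    "embedder": "default",  # placeholder embedder name
    "showPerformanceDetails": True,
}
print(json.dumps(body))
```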
## Practical tips
### Identify the bottleneck
Look for the stage with the highest duration. Common patterns:

- High `wait in queue`: your instance is overloaded with concurrent searches. Scale your hardware or reduce query volume.
- High `search > evaluate filter`: complex filter expressions or too many filterable attributes. Use granular filterable attributes to disable unused filter features.
- High `search > evaluate query`: a complex query containing many words or matching many synonyms generates a query tree that is expensive to evaluate. Add stop words or reduce the number of synonyms.
- High `search > keyword ranking`: the query requires many ranking-rule iterations to retrieve the requested number of documents. Reduce the `offset` and `limit` parameters, limit searchable attributes, or lower `maxTotalHits`.
- High `search > embed query`: your embedder is slow. Consider switching to a faster model, using a local embedder for search with composite embedders, or caching embeddings.
- High `search > facet distribution`: too many faceted attributes or a high `maxValuesPerFacet`. Lower it to the number of facet values you actually display.
- High `search > format`: large `attributesToRetrieve`, `attributesToHighlight`, or `attributesToCrop`. Reduce these to only the fields your UI needs.
- High `federating results > wait for remote results`: network latency to remote instances. Check network connectivity or colocate instances.
### Compare before and after
Use `showPerformanceDetails` before and after configuration changes (adding stop words, adjusting searchable attributes, modifying the search cutoff) to measure the impact of each optimization.
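A sketch of such a comparison, assuming durations have already been parsed into floats (milliseconds); negative deltas mean the stage got faster after the change:

```python
# Compare two performanceDetails captures taken before and after a
# configuration change. All timings below are hypothetical.
def diff_stages(before, after):
    return {k: round(after.get(k, 0.0) - v, 2) for k, v in before.items()}

before = {"search > evaluate query": 4.0, "search > keyword ranking": 9.5}
after = {"search > evaluate query": 1.5, "search > keyword ranking": 6.0}
print(diff_stages(before, after))
# → {'search > evaluate query': -2.5, 'search > keyword ranking': -3.5}
```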
### Disable in production
Collecting performance details adds a small amount of overhead to each search request. Use this parameter for debugging and profiling, then remove it from production queries.

## Related pages

- **Performance tuning**: optimize search speed and relevancy for large datasets
- **Ranking pipeline**: understand how Meilisearch ranks search results
- **Configure search cutoff**: set time limits to guarantee consistent response times
- **Search API reference**: full API reference for the search endpoint