
Elastic Introduces New Vector Storage Format DiskBBQ for More Efficient Vector Search

New alternative to HNSW brings faster, more cost-effective search

SAN FRANCISCO--(BUSINESS WIRE)--Elastic (NYSE: ESTC), the Search AI Company, announced DiskBBQ, a new disk-friendly vector search algorithm in Elasticsearch that delivers more efficient vector search at scale than traditional industry-standard search techniques used in many vector databases. DiskBBQ eliminates the need to keep entire vector indexes in memory, delivers predictable performance, and costs less.

Hierarchical Navigable Small Worlds (HNSW) is the most commonly used search technique in vector databases because of its speed and accuracy in similarity search. However, it requires all vectors to reside in memory, which can be costly at large scale. DiskBBQ, available now in Elasticsearch 9.2, uses BBQ (Better Binary Quantization) to address this by compressing vectors efficiently and clustering them into compact partitions for selective disk reads. This reduces RAM usage, avoids spikes in data retrieval time, and improves system performance for data ingestion and organization.
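The core idea behind binary quantization can be sketched briefly. The snippet below is an illustrative example only, not Elastic's BBQ implementation (BBQ adds correction terms beyond plain sign-bit quantization): each 32-bit float dimension is reduced to a single bit, shrinking storage roughly 32x so far more vectors fit in a given amount of RAM or can be read cheaply from disk.

```python
import numpy as np

def binary_quantize(vectors: np.ndarray) -> np.ndarray:
    """Quantize float vectors to packed sign bits (1 bit per dimension).

    Illustrative sketch of the idea behind binary quantization;
    not Elastic's BBQ implementation.
    """
    bits = (vectors > 0).astype(np.uint8)   # one sign bit per dimension
    return np.packbits(bits, axis=1)        # pack 8 dimensions per byte

def hamming_distance(a: np.ndarray, b: np.ndarray) -> int:
    """Approximate dissimilarity as the Hamming distance of the bit codes."""
    return int(np.unpackbits(a ^ b).sum())

rng = np.random.default_rng(0)
vecs = rng.standard_normal((4, 128)).astype(np.float32)  # 4 vectors, 128 dims
codes = binary_quantize(vecs)
print(codes.nbytes, "bytes as codes vs", vecs.nbytes, "bytes as floats")
```

Comparing the packed codes with Hamming distance is far cheaper than float arithmetic, which is why quantized candidates can be scanned quickly before any exact re-ranking.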

“As AI applications scale, traditional vector storage formats force them to choose between slow indexing or significant infrastructure costs required to overcome memory limitations,” said Ajay Nair, general manager, Platform at Elastic. “DiskBBQ is a smarter, more scalable approach to high-performance vector search on very large datasets that accelerates both indexing and retrieval.”

In benchmark testing, DiskBBQ demonstrated a balance of speed, stability, and efficiency that is ideal for large-scale vector search on lower-cost memory infrastructure and object storage. As a disk-friendly ANN algorithm, it requires far less memory than HNSW, which keeps the entire graph in RAM; DiskBBQ instead offloads data to disk and reads only the relevant vector clusters at query time. This design removes memory as a limiting factor, enabling Elasticsearch to scale to massive datasets limited only by CPU and disk.
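The cluster-then-probe pattern described above can be sketched in a few lines. This is a hypothetical, simplified illustration of the general technique (IVF-style clustered search), not Elastic's code or API: vectors are grouped into partitions at index time, and a query scans only the few partitions whose centroids are nearest, rather than traversing an in-memory graph over every vector.

```python
import numpy as np

def build_partitions(vectors, n_clusters=8, iters=10, seed=0):
    """Crude k-means clustering: returns centroids and per-cluster id lists.

    Illustrative only; a real system would persist each partition to disk.
    """
    rng = np.random.default_rng(seed)
    centroids = vectors[rng.choice(len(vectors), n_clusters, replace=False)]
    for _ in range(iters):
        # assign every vector to its nearest centroid
        assign = np.argmin(
            np.linalg.norm(vectors[:, None] - centroids[None], axis=2), axis=1)
        for c in range(n_clusters):
            members = vectors[assign == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    partitions = {c: np.where(assign == c)[0] for c in range(n_clusters)}
    return centroids, partitions

def search(query, vectors, centroids, partitions, nprobe=2, k=3):
    """Read only the nprobe partitions nearest to the query, then rank."""
    order = np.argsort(np.linalg.norm(centroids - query, axis=1))[:nprobe]
    candidates = np.concatenate([partitions[c] for c in order])
    dists = np.linalg.norm(vectors[candidates] - query, axis=1)
    return candidates[np.argsort(dists)[:k]]

rng = np.random.default_rng(1)
data = rng.standard_normal((200, 16)).astype(np.float32)
cents, parts = build_partitions(data)
hits = search(data[5], data, cents, parts)
```

Because only `nprobe` partitions are touched per query, the working set stays small and predictable regardless of total corpus size, which is what makes the approach friendly to disk and object storage.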

DiskBBQ sustained query latencies of roughly 15 milliseconds while operating in as little as 100 MB of total memory, where traditional HNSW indexing could not run. As available memory increased, DiskBBQ’s performance scaled smoothly without the sharp latency cliffs typical of in-memory graph approaches.

To learn more about DiskBBQ, read the Elastic blog.

Availability

DiskBBQ is available in technical preview in Elasticsearch Serverless.

About Elastic

Elastic (NYSE: ESTC), the Search AI Company, integrates its deep expertise in search technology with artificial intelligence to help everyone transform all of their data into answers, actions, and outcomes. Elastic's Search AI Platform — the foundation for its search, observability, and security solutions — is used by thousands of companies, including more than 50% of the Fortune 500. Learn more at elastic.co.

Elastic and associated marks are trademarks or registered trademarks of Elasticsearch BV and its subsidiaries. All other company and product names may be trademarks of their respective owners.

Contacts

Media Contact
Elastic PR
PR-team@elastic.co
