Supported Embedding Providers & Models

Qdrant supports any text or multimodal dense vector embedding model, as well as hosted vector embedding services, with no restrictions.

Some of the embedding models and providers you can use with Qdrant:

SentenceTransformers, BERT, SBERT, CLIP, OpenCLIP, OpenAI, Vertex AI, Azure AI, AWS Bedrock, Jina AI, Upstage AI, Mistral AI, Cohere AI, Voyage AI, Aleph Alpha, Baidu Qianfan, BGE, Instruct, Watsonx Embeddings, Snowflake Embeddings, NVIDIA NeMo, Nomic, OCI Embeddings, Ollama Embeddings, MixedBread, Together AI, Clarifai, Databricks Embeddings, GPT4All Embeddings, John Snow Labs Embeddings.

Additionally, any open-source embedding model from Hugging Face can be used with Qdrant.
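
For example, here is a minimal sketch of pairing an open-source Hugging Face model (loaded through the sentence-transformers library) with Qdrant. The collection name, model choice, and example texts are illustrative assumptions, not part of Qdrant's API.

```python
# Minimal sketch: embed documents with an open-source Hugging Face model
# (via sentence-transformers) and store/search them in Qdrant.
# Collection name and example texts are illustrative.
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # 384-dim vectors
client = QdrantClient(":memory:")  # or QdrantClient(url="http://localhost:6333")

documents = [
    "Qdrant is a vector database and similarity search engine.",
    "Embeddings map text into dense numeric vectors.",
]

client.create_collection(
    collection_name="hf_demo",
    vectors_config=VectorParams(
        size=model.get_sentence_embedding_dimension(),
        distance=Distance.COSINE,
    ),
)

client.upsert(
    collection_name="hf_demo",
    points=[
        PointStruct(id=idx, vector=vector.tolist(), payload={"text": text})
        for idx, (text, vector) in enumerate(zip(documents, model.encode(documents)))
    ],
)

hits = client.query_points(
    collection_name="hf_demo",
    query=model.encode("What is Qdrant?").tolist(),
    limit=1,
).points
print(hits[0].payload["text"])
```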

Code samples are available for the following providers:

| Embedding Provider | Description |
| --- | --- |
| Aleph Alpha | Multilingual embeddings focused on European languages. |
| Bedrock | AWS managed service for foundation models and embeddings. |
| Cohere | Language model embeddings for NLP tasks. |
| Gemini | Google's multimodal embeddings for text and vision. |
| Jina AI | Customizable embeddings for neural search. |
| Mistral | Open-source, efficient language model embeddings. |
| MixedBread | Lightweight embeddings for constrained environments. |
| Mixpeek | Managed SDK for video chunking, embedding, and post-processing. |
| Nomic | Embeddings for data visualization. |
| Nvidia | GPU-optimized embeddings from Nvidia. |
| Ollama | Embeddings for conversational AI. |
| OpenAI | Industry-leading embeddings for NLP. |
| Prem AI | Precise language embeddings. |
| Snowflake | Scalable embeddings for big data. |
| Upstage | Embeddings for speech and language tasks. |
| Voyage AI | Embeddings optimized for search and retrieval. |
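
The pattern is the same for any hosted provider in the table above: request vectors from the provider's API, then upsert them into a Qdrant collection. The sketch below assumes the official openai Python SDK, an OPENAI_API_KEY environment variable, and the 1536-dimensional text-embedding-3-small model; the collection name and texts are illustrative.

```python
# Hedged sketch: the same upsert pattern with a hosted embedding API.
# Assumes the official `openai` SDK, OPENAI_API_KEY in the environment,
# and the 1536-dimensional text-embedding-3-small model; names are illustrative.
from openai import OpenAI
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
qdrant = QdrantClient(url="http://localhost:6333")

texts = [
    "Qdrant integrates with hosted embedding providers.",
    "Vectors from any provider can be stored in a collection.",
]

response = openai_client.embeddings.create(
    model="text-embedding-3-small",
    input=texts,
)

qdrant.create_collection(
    collection_name="openai_demo",
    vectors_config=VectorParams(size=1536, distance=Distance.COSINE),
)
qdrant.upsert(
    collection_name="openai_demo",
    points=[
        PointStruct(id=i, vector=item.embedding, payload={"text": texts[i]})
        for i, item in enumerate(response.data)
    ],
)
```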