Managed Mode
In managed mode, Embedd.to handles vector storage for you using its built-in Qdrant vector database.
How It Works
- You create a connection to your source database
- You register an embedding provider (OpenAI, Gemini)
- You create a vector table — Embedd.to creates a Qdrant collection automatically
- On backfill, source data is read, embedded via your embedding provider, and stored in Qdrant
- Sync keeps vectors up to date as source data changes
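The backfill-then-sync flow above can be sketched in miniature. This is an illustrative stand-in, not Embedd.to's actual SDK: `embed` fakes an embedding-provider call with a deterministic hash, and `VectorCollection` stands in for the managed Qdrant collection.

```python
# Illustrative sketch of the managed-mode pipeline: read source rows,
# embed them, store vectors in a collection, then sync a changed row.
# embed() and VectorCollection are stand-ins for the real provider
# call and the Qdrant collection Embedd.to manages — not real APIs.
import hashlib

def embed(text: str, dim: int = 8) -> list[float]:
    """Stand-in for an embedding provider call (OpenAI/Gemini)."""
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255.0 for b in digest[:dim]]

class VectorCollection:
    """Stand-in for the managed Qdrant collection."""
    def __init__(self):
        self.points = {}  # id -> (vector, payload)

    def upsert(self, point_id, vector, payload):
        self.points[point_id] = (vector, payload)

def backfill(collection, rows):
    # Initial load: embed every source row and store it.
    for row in rows:
        collection.upsert(row["id"], embed(row["text"]), row)

def sync(collection, changed_rows):
    # Incremental sync: re-embed only rows that changed at the source.
    for row in changed_rows:
        collection.upsert(row["id"], embed(row["text"]), row)

rows = [{"id": 1, "text": "blue running shoes"},
        {"id": 2, "text": "wool winter coat"}]
coll = VectorCollection()
backfill(coll, rows)
sync(coll, [{"id": 2, "text": "down winter coat"}])
print(len(coll.points))
```

The key point the sketch captures: sync re-runs only the embed-and-upsert step for changed rows, so unchanged vectors are never recomputed.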
When to Use Managed Mode
- Getting started — Fastest way to add semantic search
- Multi-provider search — Query vectors from multiple database providers through one API
- No infrastructure changes — No need to install pgvector or manage vector indexes
Requirements
- A source database connection (Snowflake or PostgreSQL)
- An embedding provider with a valid API key
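Taken together, the two requirements might look like the following configuration. The field names here are purely illustrative — this is a hypothetical shape, not Embedd.to's actual configuration schema:

```yaml
# Hypothetical managed-mode setup; all keys are illustrative.
connection:
  type: postgresql          # or snowflake
  dsn: postgres://app:secret@db.example.com:5432/shop
embedding_provider:
  name: openai              # or gemini
  api_key: ${OPENAI_API_KEY}
```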
Architecture
    Source DB → Embedd.to → Embedding Provider (OpenAI/Gemini)
                    ↓
        Qdrant (managed by Embedd.to)
                    ↓
                Query API
Filter Support
Managed mode uses Qdrant's native filtering and supports the full operator set ($eq, $ne, $gt, $gte, $lt, $lte, $in, $nin, $exists). Filters are translated into Qdrant's FieldCondition format automatically.
Limitations
- Vectors are stored in Embedd.to's infrastructure, not yours
- Query latency depends on the network proximity of the managed Qdrant instance to your application