vCon Server is a pipeline-based conversation processing and storage system. It ingests vCon (Voice Conversation) records, routes them through configurable processing chains — transcription, AI analysis, tagging, webhooks — and writes results to one or more storage backends.
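Conceptually, each link in a chain is a small transform applied to the vCon in sequence, with the final result fanned out to storage backends. The sketch below illustrates that pattern only — the link names, storage call shape, and vCon fields here are hypothetical, not vcon-server's actual API:

```python
# Illustrative sketch of chain-based processing. Link names, vCon
# fields, and the storage interface are hypothetical, not the
# server's real API.

def transcribe(vcon: dict) -> dict:
    # A real link would call Deepgram or Whisper; here we just
    # attach a placeholder transcript.
    vcon.setdefault("analysis", []).append({"type": "transcript", "body": "..."})
    return vcon

def tag(vcon: dict) -> dict:
    vcon.setdefault("attachments", []).append({"type": "tags", "body": ["demo"]})
    return vcon

def run_chain(vcon: dict, links, storages) -> dict:
    """Pass the vCon through each link, then write it to every storage."""
    for link in links:
        vcon = link(vcon)
    for store in storages:
        store(vcon)
    return vcon

saved = []  # stands in for a storage backend
result = run_chain({"uuid": "0001"}, [transcribe, tag], [saved.append])
```

In the real server the chain composition is declared in configuration and driven by Redis queues rather than called directly.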
Full documentation: https://vcon-dev.github.io/vcon-server/
```bash
git clone https://github.com/vcon-dev/vcon-server.git
cd vcon-server
cp example_docker-compose.yml docker-compose.yml
cp .env.example .env  # edit CONSERVER_API_TOKEN at minimum
docker network create conserver
docker compose up -d --build
curl http://localhost:8000/api/health
```

| Audience | Start here |
|---|---|
| New users | Getting Started |
| Operators / DevOps | Installation · Configuration · Operations |
| Developers | Contributing · Extending · Reference |
- Chain-based processing — compose reusable links into pipelines driven by Redis queues
- 20+ processing links — transcription (Deepgram, Whisper), AI analysis (OpenAI, Groq), tagging, routing, webhooks, compliance (SCITT, DataTrails)
- 10+ storage backends — PostgreSQL, MongoDB, S3, Elasticsearch, Milvus, Redis, SFTP, and more
- Multi-worker scaling — parallel workers with configurable process count and parallel storage writes
- External ingress — scoped API keys let third-party systems submit vCons to specific queues
- OpenTelemetry — built-in tracing and metrics export
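Chains, links, and storages are wired together in the server's YAML configuration. The fragment below is an illustrative sketch under assumed key names and module paths — consult the configuration docs linked above for the real schema:

```yaml
# Illustrative sketch only; key names and module paths are assumptions.
links:
  transcribe:
    module: links.transcribe
    options:
      model_size: base
storages:
  postgres:
    module: storage.postgres
    options:
      host: localhost
chains:
  main_chain:
    links:
      - transcribe
    storages:
      - postgres
    ingress_lists:
      - main_ingress
```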
```bash
docker compose run --rm conserver pytest conserver/links/analyze/tests/ -v
```

See LICENSE.