# vCon Server (Conserver)

vCon Server is a pipeline-based conversation processing and storage system. It ingests vCon (Voice Conversation) records, routes them through configurable processing chains — transcription, AI analysis, tagging, webhooks — and writes results to one or more storage backends.

Full documentation: https://vcon-dev.github.io/vcon-server/
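The chains, links, and storages mentioned above are composed in the server's YAML configuration. As a rough sketch of how that composition looks (key names and module paths here are illustrative, not guaranteed to match the server's actual schema — see the full documentation and the example config files in the repository for authoritative examples):

```yaml
# Illustrative only: consult the vcon-server documentation for the
# real configuration schema, module paths, and per-link options.
links:
  transcribe:
    module: links.transcribe        # hypothetical module path
    options:
      provider: deepgram
  summarize:
    module: links.analyze
    options:
      provider: openai

storages:
  postgres:
    module: storage.postgres

chains:
  - name: main_chain
    links:                          # processed in order
      - transcribe
      - summarize
    storages:                       # results written to each backend
      - postgres
    ingress_lists:
      - main_ingress                # Redis list the workers poll
```

The idea is that each named link and storage is defined once and reused across chains, with Redis queues feeding vCon UUIDs into each chain's ingress list.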

## Quick Start

```bash
git clone https://github.com/vcon-dev/vcon-server.git
cd vcon-server
cp example_docker-compose.yml docker-compose.yml
cp .env.example .env          # edit CONSERVER_API_TOKEN at minimum
docker network create conserver
docker compose up -d --build
curl http://localhost:8000/api/health
```

## Documentation by Audience

| Audience | Start here |
| --- | --- |
| New users | Getting Started |
| Operators / DevOps | Installation · Configuration · Operations |
| Developers | Contributing · Extending · Reference |

## Key Features

- **Chain-based processing** — compose reusable links into pipelines driven by Redis queues
- **20+ processing links** — transcription (Deepgram, Whisper), AI analysis (OpenAI, Groq), tagging, routing, webhooks, compliance (SCITT, DataTrails)
- **10+ storage backends** — PostgreSQL, MongoDB, S3, Elasticsearch, Milvus, Redis, SFTP, and more
- **Multi-worker scaling** — parallel workers with configurable process count and parallel storage writes
- **External ingress** — scoped API keys let third-party systems submit vCons to specific queues
- **OpenTelemetry** — built-in tracing and metrics export
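The records flowing through these pipelines follow the IETF vCon draft format. As an illustrative sketch of what a minimal record might look like (field names follow the draft — `vcon`, `uuid`, `created_at`, `parties`, `dialog` — but check the spec and the server code for the authoritative schema):

```python
import json
import uuid
from datetime import datetime, timezone

# A minimal vCon-style record. Field names follow the IETF vCon draft;
# the exact schema the server accepts may differ, so treat this as an
# illustrative sketch rather than a validated example.
vcon = {
    "vcon": "0.0.1",                      # draft version string (assumed)
    "uuid": str(uuid.uuid4()),            # unique identifier for this record
    "created_at": datetime.now(timezone.utc).isoformat(),
    "parties": [                          # who took part in the conversation
        {"tel": "+15551230001", "name": "Alice"},
        {"tel": "+15551230002", "name": "Bob"},
    ],
    "dialog": [                           # the conversation content itself
        {
            "type": "text",
            "start": datetime.now(timezone.utc).isoformat(),
            "parties": [0, 1],            # indices into the parties list
            "body": "Hello, this is a test conversation.",
        }
    ],
}

payload = json.dumps(vcon)                # what an ingress submission carries
```

A third-party system with a scoped API key would serialize a record like this and submit it to its permitted queue; the target endpoint and authentication header are documented in the API reference.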

## Running Tests

```bash
docker compose run --rm conserver pytest conserver/links/analyze/tests/ -v
```

## License

See LICENSE.
