Backend service providing video analysis, live monitoring, and contextual insights for the Violens system. Implements FastAPI endpoints for synchronous and asynchronous processing, in-app metrics, and optional OpenTelemetry export to Azure Monitor.
- Violens Backend: FastAPI-based AI service for violence detection.
- Features: video analysis, SSE streaming for live monitoring, contextual summarization via Gemini, metrics endpoint, Azure Monitor tracing.
- `app.py` - FastAPI application and endpoints
- `requirements.txt` - Python dependencies
- `docs/` - API requirements and risk matrix
- `notebooks/` - Trained model assets and experiments
- `deployment/` - Not present; Dockerfile lives at repo root
- `monitoring/` - Not present; metrics exposed via API, optional Azure Monitor
- `documentation/` - Place optional project proposal/report under `docs/`
- `videos/` - Optional demo clips if you add them
- `.gitignore` - Ignore cache and artifacts
- `.github/workflows/main_violens-backend.yml` - CI/CD to Azure Web App
- Main script: `violens_backend/app.py` - FastAPI app (violens_backend/app.py:30)
- Run locally:

```bash
python -m venv .venv && source .venv/bin/activate
pip install -r violens_backend/requirements.txt
uvicorn app:app --host 0.0.0.0 --port 8000
```
- Environment variables:
  - `MODEL_PATH` - Keras model path (default `notebooks/best_model.keras`) (violens_backend/app.py:110-115)
  - `GOOGLE_API_KEY` - Enables Gemini contextual analysis and File API for video analysis (violens_backend/app.py:175-184)
  - `GEMINI_MODEL` - Preferred Gemini model name (default `gemini-2.0-flash-exp`) (violens_backend/app.py:149-173)
  - `APPLICATIONINSIGHTS_CONNECTION_STRING` - Azure Monitor tracing (violens_backend/app.py:33-44)
  - `ALLOWED_ORIGINS` - CORS allowed origins (comma-separated) (violens_backend/app.py:372-384)
  - `ENABLE_LOCAL_METRICS` - Toggle local `/metrics` aggregation (violens_backend/app.py:46-48)
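As an illustration, these variables might be read at startup roughly like this. This is a sketch, not the actual code in `app.py`: the defaults come from the list above, and the `"*"` fallback for `ALLOWED_ORIGINS` is an assumption.

```python
import os

# Defaults taken from the environment-variable list above; the "*" CORS
# fallback is an assumption, not confirmed from app.py.
MODEL_PATH = os.getenv("MODEL_PATH", "notebooks/best_model.keras")
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")  # Gemini features disabled when unset
GEMINI_MODEL = os.getenv("GEMINI_MODEL", "gemini-2.0-flash-exp")
ALLOWED_ORIGINS = [o.strip() for o in os.getenv("ALLOWED_ORIGINS", "*").split(",")]
ENABLE_LOCAL_METRICS = os.getenv("ENABLE_LOCAL_METRICS", "1") == "1"
```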
- Azure Web App via GitHub Actions: workflow at `violens_backend/.github/workflows/main_violens-backend.yml` (violens_backend/.github/workflows/main_violens-backend.yml:5)
- Triggers: push to `main` or manual dispatch
- Build job: sets Python 3.12, optionally installs requirements into `antenv`, uploads artifact excluding `antenv` (violens_backend/.github/workflows/main_violens-backend.yml:22-43)
- Deploy job: logs into Azure with OIDC secrets, deploys using `azure/webapps-deploy@v3` to app `violens-backend`, Production slot (violens_backend/.github/workflows/main_violens-backend.yml:64-76)
- Note: Azure App Service may run an Oryx build if `SCM_DO_BUILD_DURING_DEPLOYMENT` is enabled in app settings
- Docker (optional, repo root `Dockerfile`): builds a combined backend+frontend image; exposes `8080`
  - Build: `docker build -t violens:latest .`
  - Run: `docker run -p 8080:8080 -e APPLICATIONINSIGHTS_CONNECTION_STRING=... -e GOOGLE_API_KEY=... violens:latest`
- Bare metal/VM: run `uvicorn app:app` behind Nginx/Apache; configure CORS and TLS
Local Metrics Collection:
- Enabled by default (`ENABLE_LOCAL_METRICS=1`)
- Tracks last 500 requests in memory
- Aggregates response times, status codes, and route performance
- Accessible via the `/metrics` endpoint
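The rolling-window aggregation described above can be sketched as follows. This is an illustrative stand-in, not the actual `app.py` implementation; the nearest-rank percentile method is an assumption.

```python
from collections import deque


class LocalMetrics:
    """In-memory rolling window of the most recent requests (sketch)."""

    def __init__(self, window: int = 500):
        # Once the window is full, the oldest sample falls off automatically.
        self.samples = deque(maxlen=window)

    def record(self, route: str, status: int, duration_ms: float) -> None:
        self.samples.append((route, status, duration_ms))

    def percentile(self, p: float) -> float:
        """Nearest-rank percentile over the current window."""
        times = sorted(d for _, _, d in self.samples)
        if not times:
            return 0.0
        idx = min(len(times) - 1, int(round(p / 100 * (len(times) - 1))))
        return times[idx]


m = LocalMetrics()
for ms in range(1, 101):  # simulate 100 requests taking 1..100 ms
    m.record("/analysis", 200, float(ms))
```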
Metrics Persistence:
- Automatic snapshots every 5 minutes
- Saved to `metrics_history.json` in the backend directory
- Retains last 7 days of data (2,016 snapshots)
- Historical data accessible via the `/metrics/history` endpoint
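The 2,016-snapshot figure follows from the schedule: 7 days × 24 hours × 12 five-minute snapshots per hour. A minimal persistence sketch (illustrative only; the real snapshot format and file handling live in `app.py`):

```python
import json
from pathlib import Path

# 7 days * 24 hours * 12 snapshots/hour (one every 5 minutes)
MAX_SNAPSHOTS = 2016


def save_snapshot(path: Path, snapshot: dict) -> None:
    """Append a snapshot and trim the file to the retention window."""
    history = json.loads(path.read_text()) if path.exists() else []
    history.append(snapshot)
    path.write_text(json.dumps(history[-MAX_SNAPSHOTS:]))
```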
Tracked Metrics:
- Response Times: p50, p95, p99 percentiles
- Request Counts: Total and per-route
- Status Codes: Distribution of 2xx, 4xx, 5xx
- Live Sessions: Active and total monitoring sessions
Azure Monitor Integration (Optional):
- Set `APPLICATIONINSIGHTS_CONNECTION_STRING` for cloud monitoring
- OpenTelemetry tracing for distributed systems
- Automatic instrumentation of FastAPI and requests
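A minimal sketch of this optional wiring, assuming the `azure-monitor-opentelemetry` and `opentelemetry-instrumentation-fastapi` packages; the actual setup in `app.py` may differ.

```python
import os


def setup_tracing(app) -> None:
    """Enable Azure Monitor export only when the connection string is present."""
    conn = os.getenv("APPLICATIONINSIGHTS_CONNECTION_STRING")
    if not conn:
        return  # tracing stays optional, as described above

    # Assumed packages: azure-monitor-opentelemetry,
    # opentelemetry-instrumentation-fastapi (imported lazily on purpose).
    from azure.monitor.opentelemetry import configure_azure_monitor
    from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor

    configure_azure_monitor(connection_string=conn)
    FastAPIInstrumentor.instrument_app(app)
```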
- API requirements: `violens_backend/docs/api_requirements.md`
- Risk matrix: `violens_backend/docs/risk_matrix.svg`
- Proposal/report: add under `docs/` when available.
- Branching: `main` stable; feature branches for changes.
- Reviews: PRs require approval and CI passing.
- Commits: scoped messages; avoid noisy diffs.
- Secrets: use environment or secret stores; do not commit `.env`.
Purpose: Analyze uploaded video for violence detection
Request:

```bash
curl -X POST http://localhost:8000/analysis \
  -F "video=@path/to/video.mp4" \
  -F "sensitivity=0.7"
```

Response:

```json
{
  "summary": "Detected 2 segments: class_0(2). Overall risk high. Continue monitoring for escalation.",
  "violenceDetections": [
    {
      "startTime": 10.5,
      "endTime": 12.3,
      "confidence": 0.85,
      "type": "class_0",
      "description": "Detected class_0 with 0.85 confidence"
    }
  ],
  "totalDuration": 60.0,
  "overallRisk": "high",
  "confidence": 0.85,
  "objects": ["person", "chair", "weapon"],
  "emotions": ["aggressive", "angry"],
  "scenes": ["indoor", "crowded"]
}
```

Features:
- TensorFlow-based violence detection
- Gemini File API integration for contextual analysis
- Returns timeline of detections with confidence scores
- AI-generated insights (objects, emotions, scenes)
Purpose: Alternative endpoint for video analysis
Request:

```bash
curl -X POST http://localhost:8000/analyze-video \
  -F "file=@video.mp4" \
  -F "sensitivity=0.7" \
  -F "step=5"
```

Response: Same format as `/analysis`
Purpose: Real-time live monitoring via webcam stream
Usage:

```javascript
const eventSource = new EventSource('http://localhost:8000/monitor?session_id=abc123');
eventSource.onmessage = (event) => {
  const data = JSON.parse(event.data);
  console.log('Detection:', data);
};
```

Response Stream:

```json
{
  "timestamp": "2024-01-15T10:30:00Z",
  "confidence": 0.82,
  "type": "class_0",
  "description": "Detected class_0 with 0.82 confidence",
  "session_id": "abc123"
}
```

Purpose: Get current deployment metrics
Response:

```json
{
  "totals": {
    "count": 150,
    "p50_ms": 245.3,
    "p95_ms": 1250.8,
    "p99_ms": 2340.5,
    "statusCounts": {
      "200": 145,
      "500": 5
    }
  },
  "routes": {
    "/analysis": {
      "count": 50,
      "p50_ms": 1200.5,
      "p95_ms": 2500.3,
      "p99_ms": 3200.1,
      "statusCounts": {"200": 48, "500": 2}
    }
  },
  "liveSessions": {
    "active": 2,
    "total": 5
  }
}
```

Features:
- Real-time performance monitoring
- Response time percentiles (p50, p95, p99)
- Status code distribution
- Per-route metrics
- Live session tracking
Purpose: Get historical metrics data
Parameters:
- `hours` (optional): Time range in hours (default: 24)

Request:

```bash
curl "http://localhost:8000/metrics/history?hours=24"
```

Response:
```json
{
  "history": [
    {
      "timestamp": "2024-01-15T10:00:00Z",
      "totals": {
        "count": 100,
        "p50_ms": 230.5,
        "p95_ms": 1100.2,
        "p99_ms": 2100.8,
        "statusCounts": {"200": 95, "500": 5}
      },
      "routes": {...},
      "liveSessions": {"active": 1, "total": 3}
    }
  ],
  "count": 288
}
```

Features:
- Historical data for trend analysis
- Saved every 5 minutes to `metrics_history.json`
- Keeps last 7 days of data
- Supports time range filtering (1h, 24h, 7d)
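The time-range filtering could be implemented along these lines. This is a sketch; `filter_history` is a hypothetical helper, not necessarily the name used in `app.py`.

```python
from datetime import datetime, timedelta, timezone


def filter_history(history: list[dict], hours: int = 24) -> list[dict]:
    """Keep snapshots whose ISO-8601 timestamp falls within the last `hours`."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=hours)
    return [
        s for s in history
        # Normalize a trailing "Z" so fromisoformat accepts the timestamp.
        if datetime.fromisoformat(s["timestamp"].replace("Z", "+00:00")) >= cutoff
    ]
```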
- `AnalysisData` includes `summary`, `violenceDetections[]`, `totalDuration`, `overallRisk`, `confidence`, `objects[]`, `emotions[]`, `scenes[]`
- Live events: prediction/analysis messages via SSE with `frame`, `class`, `confidence`, `analysis` text
The system uses Gemini File API to analyze entire videos for contextual AI insights. This approach is simpler and more accurate than frame-by-frame analysis.
Step 1: Video Upload
- After violence detection analysis completes, the entire video file is uploaded to Gemini File API
- Gemini processes the video and extracts visual information
Step 2: AI Analysis
- A comprehensive prompt is sent to Gemini along with the uploaded video
- Gemini analyzes the entire video context to identify:
- Objects: Physical items visible (weapons, furniture, vehicles, etc.)
- Emotions: Emotional states of people (angry, fearful, aggressive, calm, etc.)
- Scenes: Context and settings (indoor, outdoor, crowded, isolated, etc.)
Step 3: Response Parsing
- Gemini returns structured analysis in the format:

  ```text
  OBJECTS: item1, item2, item3
  EMOTIONS: emotion1, emotion2, emotion3
  SCENES: scene1, scene2, scene3
  ```

- Results are parsed and deduplicated
- Limited to top 15 objects, 10 emotions, 10 scenes
Step 4: Cleanup
- Uploaded video file is deleted from Gemini servers
- AI insights are returned with the analysis response
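Step 3's parsing, deduplication, and limits can be sketched as follows (illustrative; `parse_insights` is a hypothetical helper, not necessarily what `app.py` calls it):

```python
def parse_insights(text: str) -> dict:
    """Parse OBJECTS/EMOTIONS/SCENES lines, dedupe items, and apply limits."""
    limits = {"OBJECTS": 15, "EMOTIONS": 10, "SCENES": 10}
    out = {key.lower(): [] for key in limits}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        key = key.strip().upper()
        if key in limits:
            seen = []
            for item in rest.split(","):
                item = item.strip().lower()
                if item and item not in seen:  # dedupe, preserving order
                    seen.append(item)
            out[key.lower()] = seen[: limits[key]]
    return out
```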
```bash
# .env file
GOOGLE_API_KEY=your_gemini_api_key
GEMINI_MODEL=gemini-2.0-flash-exp  # or gemini-1.5-flash-latest
```

Example response:

```json
{
  "summary": "Detected 2 segments: class_0(2). Overall risk high. Continue monitoring for escalation.",
  "violenceDetections": [
    {"startTime": 10.5, "endTime": 12.3, "confidence": 0.85, "type": "class_0", "description": "..."}
  ],
  "totalDuration": 60.0,
  "overallRisk": "high",
  "confidence": 0.85,
  "objects": ["person", "chair", "table", "door"],
  "emotions": ["aggressive", "angry", "fearful"],
  "scenes": ["indoor", "crowded", "public space"]
}
```

- API Costs: 1 File API upload + 1 generation call per video analysis
- Processing Time: ~5-15 seconds depending on video length and Gemini API response time
- File Size Limits: Gemini File API supports videos up to 2GB
- Optimization: Analysis runs asynchronously and won't block the main detection pipeline
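A sketch of that non-blocking pattern with `asyncio`; the coroutines here are hypothetical stand-ins for the real detection and Gemini calls.

```python
import asyncio


# Hypothetical stand-ins for the real TensorFlow and Gemini calls.
async def run_detection(video: str) -> dict:
    await asyncio.sleep(0)  # placeholder for model inference
    return {"overallRisk": "high"}


async def contextual_analysis(video: str) -> dict:
    await asyncio.sleep(0)  # placeholder for the Gemini File API round trip
    return {"objects": ["person"]}


async def analyze(video: str) -> dict:
    # Start the Gemini work without blocking the detection pipeline,
    # then merge its insights into the response once both finish.
    context_task = asyncio.create_task(contextual_analysis(video))
    result = await run_detection(video)
    result.update(await context_task)
    return result


result = asyncio.run(analyze("clip.mp4"))
```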
- Create and activate a virtual environment.
- Install packages: `pip install -r violens_backend/requirements.txt`
- Start server: `uvicorn app:app --host 0.0.0.0 --port 8000`
```text
📁 violens_backend
│
├── 📁 docs
│   ├── api_requirements.md
│   └── risk_matrix.svg
├── 📁 notebooks
│   ├── best_model.keras
│   └── mobilenet.ipynb
├── app.py
├── requirements.txt
├── .gitignore
└── Readme.md
```
Violens Backend powers reliable analysis and live monitoring for high-risk contexts. Contributions and improvements are welcome.