Violens Backend — AI Violence Detection API

Backend service providing video analysis, live monitoring, and contextual insights for the Violens system. Implements FastAPI endpoints for synchronous and asynchronous processing, in-app metrics, and optional OpenTelemetry export to Azure Monitor.

1. Project Title and Overview

  • Violens Backend — FastAPI-based AI service for violence detection.
  • Features: video analysis, SSE streaming for live monitoring, contextual summarization via Gemini, metrics endpoint, Azure Monitor tracing.

2. Repository Contents

  • app.py — FastAPI application and endpoints
  • requirements.txt — Python dependencies
  • docs/ — API requirements and risk matrix
  • notebooks/ — Trained model assets and experiments
  • deployment/ — Not present; the Dockerfile lives at the repo root
  • monitoring/ — Not present; metrics are exposed via the API, with optional Azure Monitor export
  • documentation/ — Place the optional project proposal/report under docs/
  • videos/ — Optional demo clips, if you add them
  • .gitignore — Ignores caches and build artifacts
  • .github/workflows/main_violens-backend.yml — CI/CD to Azure Web App

3. System Entry Point

  • Main script: the FastAPI app in violens_backend/app.py (violens_backend/app.py:30)
  • Run locally:
    • python -m venv .venv && source .venv/bin/activate
    • pip install -r violens_backend/requirements.txt
    • uvicorn app:app --host 0.0.0.0 --port 8000
  • Environment variables:
    • MODEL_PATH — Keras model path (default notebooks/best_model.keras) (violens_backend/app.py:110–115)
    • GOOGLE_API_KEY — Enables Gemini contextual analysis and the File API for video analysis (violens_backend/app.py:175–184)
    • GEMINI_MODEL — Preferred Gemini model name (default gemini-2.0-flash-exp) (violens_backend/app.py:149–173)
    • APPLICATIONINSIGHTS_CONNECTION_STRING — Azure Monitor tracing (violens_backend/app.py:33–44)
    • ALLOWED_ORIGINS — CORS allowed origins (comma-separated) (violens_backend/app.py:372–384)
    • ENABLE_LOCAL_METRICS — Toggle local /metrics aggregation (violens_backend/app.py:46–48)
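A minimal sketch of how these variables might be consumed at startup — illustrative only, with the defaults taken from the list above; the actual app.py may read them differently:

```python
import os

# Defaults mirror the documented values; this is a sketch, not the real app.py.
MODEL_PATH = os.environ.get("MODEL_PATH", "notebooks/best_model.keras")
GEMINI_MODEL = os.environ.get("GEMINI_MODEL", "gemini-2.0-flash-exp")
ALLOWED_ORIGINS = [
    origin.strip()
    for origin in os.environ.get("ALLOWED_ORIGINS", "*").split(",")
    if origin.strip()
]
ENABLE_LOCAL_METRICS = os.environ.get("ENABLE_LOCAL_METRICS", "1") == "1"
```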

4. Deployment Strategy

  • Azure Web App via GitHub Actions: workflow at violens_backend/.github/workflows/main_violens-backend.yml (violens_backend/.github/workflows/main_violens-backend.yml:5)
    • Triggers: push to main or manual dispatch
    • Build job: sets Python 3.12, optionally installs requirements into antenv, uploads artifact excluding antenv (violens_backend/.github/workflows/main_violens-backend.yml:22–43)
    • Deploy job: logs into Azure with OIDC secrets, deploys using azure/webapps-deploy@v3 to app violens-backend Production slot (violens_backend/.github/workflows/main_violens-backend.yml:64–76)
    • Note: Azure App Service may run Oryx build if SCM_DO_BUILD_DURING_DEPLOYMENT is enabled in app settings
  • Docker (optional, repo root Dockerfile): builds backend+frontend image; exposes 8080
    • Build: docker build -t violens:latest .
    • Run: docker run -p 8080:8080 -e APPLICATIONINSIGHTS_CONNECTION_STRING=... -e GOOGLE_API_KEY=... violens:latest
  • Bare metal/VM: run uvicorn app:app behind Nginx/Apache; configure CORS and TLS

5. Monitoring and Metrics

Metrics System

Local Metrics Collection:

  • Enabled by default (ENABLE_LOCAL_METRICS=1)
  • Tracks last 500 requests in memory
  • Aggregates response times, status codes, and route performance
  • Accessible via /metrics endpoint

Metrics Persistence:

  • Automatic snapshots every 5 minutes
  • Saved to metrics_history.json in backend directory
  • Retains last 7 days of data (2,016 snapshots)
  • Historical data accessible via /metrics/history endpoint

Tracked Metrics:

  • Response Times: p50, p95, p99 percentiles
  • Request Counts: Total and per-route
  • Status Codes: Distribution of 2xx, 4xx, 5xx
  • Live Sessions: Active and total monitoring sessions
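For illustration, percentile aggregation over a bounded request window might look like this — a nearest-rank sketch, not the real implementation:

```python
import math
from collections import deque

WINDOW = 500  # matches the documented "last 500 requests" window

def percentile(samples, p):
    """Nearest-rank percentile of a list of response times in ms."""
    if not samples:
        return 0.0
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

durations = deque(maxlen=WINDOW)  # oldest entries fall off automatically
for ms in [90, 120, 250, 300, 1500]:
    durations.append(ms)

summary = {p: percentile(list(durations), p) for p in (50, 95, 99)}
```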

Azure Monitor Integration (Optional):

  • Set APPLICATIONINSIGHTS_CONNECTION_STRING for cloud monitoring
  • OpenTelemetry tracing for distributed systems
  • Automatic instrumentation of FastAPI and requests

6. Project Documentation

  • API requirements: violens_backend/docs/api_requirements.md
  • Risk matrix: violens_backend/docs/risk_matrix.svg
  • Proposal/report: add under docs/ when available.

7. Version Control and Team Collaboration

  • Branching: main stable; feature branches for changes.
  • Reviews: PRs require approval and CI passing.
  • Commits: scoped messages; avoid noisy diffs.
  • Secrets: use environment or secret stores; do not commit .env.

API Overview

Core Endpoints

1. /analysis (POST)

Purpose: Analyze uploaded video for violence detection

Request:

curl -X POST http://localhost:8000/analysis \
  -F "video=@path/to/video.mp4" \
  -F "sensitivity=0.7"

Response:

{
  "summary": "Detected 2 segments: class_0(2). Overall risk high. Continue monitoring for escalation.",
  "violenceDetections": [
    {
      "startTime": 10.5,
      "endTime": 12.3,
      "confidence": 0.85,
      "type": "class_0",
      "description": "Detected class_0 with 0.85 confidence"
    }
  ],
  "totalDuration": 60.0,
  "overallRisk": "high",
  "confidence": 0.85,
  "objects": ["person", "chair", "weapon"],
  "emotions": ["aggressive", "angry"],
  "scenes": ["indoor", "crowded"]
}

Features:

  • TensorFlow-based violence detection
  • Gemini File API integration for contextual analysis
  • Returns timeline of detections with confidence scores
  • AI-generated insights (objects, emotions, scenes)

2. /analyze-video (POST)

Purpose: Alternative endpoint for video analysis

Request:

curl -X POST http://localhost:8000/analyze-video \
  -F "file=@video.mp4" \
  -F "sensitivity=0.7" \
  -F "step=5"

Response: Same format as /analysis
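The step form field presumably controls frame sampling; a hypothetical sketch of that behaviour (the actual semantics in app.py may differ):

```python
# Hypothetical: run detection on every `step`-th frame instead of all frames.
def sampled_frame_indices(total_frames, step):
    """Indices of the frames that would be passed to the model."""
    return list(range(0, total_frames, max(1, step)))
```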

3. /monitor (GET - Server-Sent Events)

Purpose: Real-time live monitoring via webcam stream

Usage:

const eventSource = new EventSource('http://localhost:8000/monitor?session_id=abc123');
eventSource.onmessage = (event) => {
  const data = JSON.parse(event.data);
  console.log('Detection:', data);
};

Response Stream:

{
  "timestamp": "2024-01-15T10:30:00Z",
  "confidence": 0.82,
  "type": "class_0",
  "description": "Detected class_0 with 0.82 confidence",
  "session_id": "abc123"
}
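On the wire, each stream message is one SSE frame. A sketch of how the server might format a payload, assuming standard `data: <json>` framing (the real app may add event or id fields):

```python
import json

def sse_event(payload):
    """Format one payload as a Server-Sent Events data frame."""
    return f"data: {json.dumps(payload)}\n\n"

frame = sse_event({"confidence": 0.82, "type": "class_0", "session_id": "abc123"})
```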

4. /metrics (GET)

Purpose: Get current deployment metrics

Response:

{
  "totals": {
    "count": 150,
    "p50_ms": 245.3,
    "p95_ms": 1250.8,
    "p99_ms": 2340.5,
    "statusCounts": {
      "200": 145,
      "500": 5
    }
  },
  "routes": {
    "/analysis": {
      "count": 50,
      "p50_ms": 1200.5,
      "p95_ms": 2500.3,
      "p99_ms": 3200.1,
      "statusCounts": {"200": 48, "500": 2}
    }
  },
  "liveSessions": {
    "active": 2,
    "total": 5
  }
}

Features:

  • Real-time performance monitoring
  • Response time percentiles (p50, p95, p99)
  • Status code distribution
  • Per-route metrics
  • Live session tracking

5. /metrics/history (GET)

Purpose: Get historical metrics data

Parameters:

  • hours (optional): Time range in hours (default: 24)

Request:

curl http://localhost:8000/metrics/history?hours=24

Response:

{
  "history": [
    {
      "timestamp": "2024-01-15T10:00:00Z",
      "totals": {
        "count": 100,
        "p50_ms": 230.5,
        "p95_ms": 1100.2,
        "p99_ms": 2100.8,
        "statusCounts": {"200": 95, "500": 5}
      },
      "routes": {...},
      "liveSessions": {"active": 1, "total": 3}
    }
  ],
  "count": 288
}

Features:

  • Historical data for trend analysis
  • Saved every 5 minutes to metrics_history.json
  • Keeps last 7 days of data
  • Supports time range filtering (1h, 24h, 7d)
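The hours filtering can be sketched as follows; `filter_history` is an illustrative helper, assuming the documented trailing-"Z" UTC timestamps:

```python
from datetime import datetime, timedelta, timezone

def filter_history(history, hours=24):
    """Keep only snapshots whose timestamp falls inside the last `hours`."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=hours)
    kept = []
    for snap in history:
        # Timestamps in the documented responses use a trailing "Z" for UTC.
        ts = datetime.fromisoformat(snap["timestamp"].replace("Z", "+00:00"))
        if ts >= cutoff:
            kept.append(snap)
    return kept
```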

Response Shapes

  • AnalysisData includes summary, violenceDetections[], totalDuration, overallRisk, confidence, objects[], emotions[], scenes[]
  • Live events: prediction/analysis messages via SSE with frame, class, confidence, analysis text
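For reference, the AnalysisData shape maps onto a structure like this. Field names are taken from the documented responses; the real app may use Pydantic models instead of dataclasses:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ViolenceDetection:
    startTime: float
    endTime: float
    confidence: float
    type: str
    description: str

@dataclass
class AnalysisData:
    summary: str
    totalDuration: float
    overallRisk: str
    confidence: float
    violenceDetections: List[ViolenceDetection] = field(default_factory=list)
    objects: List[str] = field(default_factory=list)
    emotions: List[str] = field(default_factory=list)
    scenes: List[str] = field(default_factory=list)
```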

Gemini Video Analysis Integration

Overview

The system uses Gemini File API to analyze entire videos for contextual AI insights. This approach is simpler and more accurate than frame-by-frame analysis.

How It Works

Step 1: Video Upload

  • After violence detection analysis completes, the entire video file is uploaded to Gemini File API
  • Gemini processes the video and extracts visual information

Step 2: AI Analysis

  • A comprehensive prompt is sent to Gemini along with the uploaded video
  • Gemini analyzes the entire video context to identify:
    • Objects: Physical items visible (weapons, furniture, vehicles, etc.)
    • Emotions: Emotional states of people (angry, fearful, aggressive, calm, etc.)
    • Scenes: Context and settings (indoor, outdoor, crowded, isolated, etc.)

Step 3: Response Parsing

  • Gemini returns structured analysis in the format:
    OBJECTS: item1, item2, item3
    EMOTIONS: emotion1, emotion2, emotion3
    SCENES: scene1, scene2, scene3
    
  • Results are parsed and deduplicated
  • Limited to top 15 objects, 10 emotions, 10 scenes
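The parsing step above can be sketched as a small helper; the field format and the 15/10/10 limits come from this section, while the function itself is hypothetical:

```python
# Limits documented above: top 15 objects, 10 emotions, 10 scenes.
LIMITS = {"OBJECTS": 15, "EMOTIONS": 10, "SCENES": 10}

def parse_insights(text):
    """Parse Gemini's OBJECTS/EMOTIONS/SCENES lines, deduplicating items."""
    result = {"objects": [], "emotions": [], "scenes": []}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        key = key.strip().upper()
        if key in LIMITS:
            seen, items = set(), []
            for item in rest.split(","):
                item = item.strip().lower()
                if item and item not in seen:
                    seen.add(item)
                    items.append(item)
            result[key.lower()] = items[: LIMITS[key]]
    return result
```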

Step 4: Cleanup

  • Uploaded video file is deleted from Gemini servers
  • AI insights are returned with the analysis response

Configuration

# .env file
GOOGLE_API_KEY=your_gemini_api_key
GEMINI_MODEL=gemini-2.0-flash-exp  # or gemini-1.5-flash-latest

Example Response

{
  "summary": "Detected 2 segments: class_0(2). Overall risk high. Continue monitoring for escalation.",
  "violenceDetections": [
    {"startTime": 10.5, "endTime": 12.3, "confidence": 0.85, "type": "class_0", "description": "..."}
  ],
  "totalDuration": 60.0,
  "overallRisk": "high",
  "confidence": 0.85,
  "objects": ["person", "chair", "table", "door"],
  "emotions": ["aggressive", "angry", "fearful"],
  "scenes": ["indoor", "crowded", "public space"]
}

Performance Considerations

  • API Costs: 1 File API upload + 1 generation call per video analysis
  • Processing Time: ~5-15 seconds depending on video length and Gemini API response time
  • File Size Limits: Gemini File API supports videos up to 2GB
  • Optimization: Analysis runs asynchronously and won't block the main detection pipeline
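The non-blocking behaviour can be sketched with asyncio; both the Gemini call and the detection result below are placeholders standing in for the real pipeline:

```python
import asyncio

async def run_gemini_analysis(video_path):
    """Placeholder for the File API upload and generation round-trip."""
    await asyncio.sleep(0.01)
    return {"objects": ["person"], "emotions": [], "scenes": []}

async def analyze(video_path):
    # Start contextual analysis concurrently so it doesn't block detection.
    insights_task = asyncio.create_task(run_gemini_analysis(video_path))
    detections = {"overallRisk": "low"}  # placeholder detection result
    insights = await insights_task
    return {**detections, **insights}

result = asyncio.run(analyze("demo.mp4"))
```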

Development

  • Create and activate a virtual environment.
  • Install packages: pip install -r violens_backend/requirements.txt
  • Start server: uvicorn app:app --host 0.0.0.0 --port 8000

Repository Structure (current)

πŸ“ violens_backend
β”‚
β”œβ”€β”€ πŸ“ docs
β”‚   β”œβ”€β”€ api_requirements.md
β”‚   └── risk_matrix.svg
β”œβ”€β”€ πŸ“ notebooks
β”‚   β”œβ”€β”€ best_model.keras
β”‚   └── mobilenet.ipynb
β”œβ”€β”€ app.py
β”œβ”€β”€ requirements.txt
β”œβ”€β”€ .gitignore
└── Readme.md

Violens Backend powers reliable analysis and live monitoring for high-risk contexts. Contributions and improvements are welcome.
