Self-hosted photo and video gallery with AI-powered search, grouping, and tagging
Upload photos and videos, organize them into folders, and use integrated AI tools for search and sorting — all running on your own hardware. No cloud, no API keys, fully private. High performance, low resource usage, and a single container with no other dependencies.
- Visual Search — Find similar photos and videos by uploading a reference image, with adjustable similarity threshold and one-click grouping
- People View & Setup Wizard — Automatically detect faces, group them into people clusters, auto-classify additional matches, review merge suggestions, and bootstrap naming with a guided setup wizard
- Auto Tagging — Tag a few items and let the AI automatically label matching items across your library
- Duplicate Detection — Duplicates are detected during upload and silently skipped
- Virtual Folders — Organize media into folders without moving files; one item can live in multiple folders with drag-and-drop support
- Favorites — Mark items as favorites for quick access in a dedicated view
- Multi-Select & Batch Operations — Marquee selection, shift-click, batch download (auto-split zip), batch delete-to-trash, restore, permanent delete, and batch add-to-folder
- Trash Bin — Deleted items move to Trash first, stay hidden from the active library, can be restored at any time, and are permanently removed after 30 days or when you empty Trash
- Smooth Media Loading — Skeleton placeholders and thumbnail posters keep grids, face crops, and modals visually stable while images and videos load
- Video Support — Upload any common video format with automatic frame extraction for thumbnails and AI features. Uses a Zero-RAM streaming architecture allowing massive video uploads even on 1GB RAM containers.
- EXIF Metadata — View camera details, date, GPS, exposure, and more
- Deep Linking — Bookmarkable URLs for folders, favorites, search states, and individual items
- Password Protection — Optional single-password auth with rate limiting and secure sessions. No complex user management, only a simple password.
- Secure Share Links — Create share links for folders or selections. Public links are accessible without login but strictly scoped to shared items. Direct original-file routes and archive download planning are disabled by default (`allow_download=false`) and can be enabled per share.
- Responsive UI — Optimized for mobile with native touch gestures (long-press selection, swipe-to-close), ergonomic bottom-weighted navigation, and persistent viewing preferences such as the thumbnail resizer (S/M/L) and year-label visibility.
- Drag-and-Drop Upload — Drag files anywhere into the browser window to upload. Context-aware: dropping into a virtual folder automatically adds the files to that folder.
- Real-time Sync — WebSocket-powered instant updates across all browser clients; all users can see new uploads, favorite toggles, and folder changes immediately as they happen
- Self-Healing — Automatically detects and repairs missing thumbnails or metadata in the background
- 100% Self-Hosted — No cloud, no telemetry. Your data stays yours.
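
The "Zero-RAM" video upload mentioned above comes down to never buffering the whole file: the request body is copied to disk in small fixed-size chunks, so peak memory stays constant regardless of file size. A std-only sketch of that pattern (the 64 KiB chunk size and function name are illustrative, not GalleryNet's actual code):

```rust
use std::io::{Read, Write};

// Illustrative chunk size; peak memory stays at CHUNK bytes
// no matter how large the input is.
const CHUNK: usize = 64 * 1024;

/// Copy `reader` to `writer` in fixed-size chunks, returning the
/// total number of bytes streamed.
fn stream_copy<R: Read, W: Write>(mut reader: R, mut writer: W) -> std::io::Result<u64> {
    let mut buf = vec![0u8; CHUNK];
    let mut total = 0u64;
    loop {
        let n = reader.read(&mut buf)?;
        if n == 0 {
            break; // EOF
        }
        writer.write_all(&buf[..n])?;
        total += n as u64;
    }
    Ok(total)
}
```

In the real server the reader would be the incoming multipart stream and the writer a file in the upload directory; the principle is identical.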
| Feature | Technology |
|---|---|
| Visual Search | MobileNetV3-Large extracts 1280-dim feature vectors; cosine similarity via sqlite-vec |
| Similarity Grouping | Mutual-KNN graph clustering over the same embedding space, resolved with Union-Find and a user-adjustable similarity threshold |
| Face Detection | SCRFD-500M detects faces and 5 facial landmarks for precise alignment |
| Face Clustering | ArcFace (MobileFaceNet) extracts 512-dim embeddings from landmark-aligned 112×112 crops; initial grouping via SNN (Shared Nearest Neighbor) clustering with Jaccard Similarity and Chinese Whispers. SNN has proven to be the most accurate method. Min face size 20×20 px. |
| Face Classification | Two-pass pipeline. KNN (primary): L2-normalised centroid per person computed from assigned exemplars with outlier trimming (cosine similarity threshold 0.38, margin 0.10 between top-2). SVM fallback: Platt-calibrated linear SVM per person trained via linfa-svm (training requires ≥ 3 positive exemplars per person; fallback runs when at least 2 models exist; probability threshold 0.55, margin 0.10). Manually removed faces are permanently excluded from re-assignment via the face_exclusions table. |
| Auto-Tagging | Linear SVM with Platt-calibrated probabilities trained on user-provided examples via linfa-svm |
| Duplicate Detection | Perceptual hashing (image_hasher) compared at upload time |
| Video Processing | ffmpeg thumbnail filter selects visually distinct frames for thumbnails, hashing, and embeddings |
| AI Inference | ort (ONNX Runtime) for fast CPU-based model execution |
| Batch Downloads | Real-time ZIP streaming via async_zip with automatic partitioning into ~2 GB parts |
| Authentication | Optional shared password with constant-time verification, rate-limited login, and secure HTTP-only server-tracked sessions |
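
The KNN pass of the face classification pipeline can be sketched as follows: each person gets an L2-normalised centroid of their exemplar embeddings, and an unassigned face is accepted only if its best-matching centroid clears the cosine-similarity threshold (0.38) and beats the runner-up by the margin (0.10). This is a minimal sketch under those assumptions — the helper names are illustrative, and real embeddings are 512-dim rather than the toy vectors used here:

```rust
/// L2-normalise a vector in place.
fn normalize(v: &mut [f32]) {
    let norm = v.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm > 0.0 {
        for x in v.iter_mut() {
            *x /= norm;
        }
    }
}

/// Cosine similarity; for L2-normalised vectors this is the dot product.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

/// Centroid of a person's exemplar embeddings, re-normalised.
fn centroid(exemplars: &[Vec<f32>]) -> Vec<f32> {
    let dim = exemplars[0].len();
    let mut c = vec![0.0; dim];
    for e in exemplars {
        for (ci, ei) in c.iter_mut().zip(e) {
            *ci += ei;
        }
    }
    normalize(&mut c);
    c
}

/// Accept the best person only if it clears the threshold and beats
/// the runner-up by the margin (values from the table above).
fn classify(face: &[f32], centroids: &[(usize, Vec<f32>)]) -> Option<usize> {
    const THRESHOLD: f32 = 0.38;
    const MARGIN: f32 = 0.10;
    let mut scores: Vec<(usize, f32)> = centroids
        .iter()
        .map(|(id, c)| (*id, cosine(face, c)))
        .collect();
    scores.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    let (best_id, best) = scores[0];
    let second = scores.get(1).map(|s| s.1).unwrap_or(-1.0);
    if best >= THRESHOLD && best - second >= MARGIN {
        Some(best_id)
    } else {
        None
    }
}
```

Faces that fall below the threshold or inside the ambiguity margin stay unassigned, which is where the SVM fallback described in the table takes over.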
The easiest way to run GalleryNet is with Docker (the password parameter is optional):
```bash
docker run -d \
  --name gallerynet \
  -p 3000:3000 \
  -v gallerynet-data:/app/data \
  -e GALLERY_PASSWORD=your-secret-password \
  sedrad/gallerynet
```

Or with Docker Compose:

```yaml
services:
  gallerynet:
    image: sedrad/gallerynet
    container_name: gallerynet
    ports:
      - "3000:3000"
    environment:
      - GALLERY_PASSWORD=your-secret-password
    volumes:
      - ./data:/app/data
    restart: unless-stopped
```

```bash
docker compose up -d
```

| Variable | Default | Description |
|---|---|---|
| DATABASE_PATH | gallery.db | Path to the SQLite database file |
| UPLOAD_DIR | uploads | Directory for original uploaded files |
| THUMBNAIL_DIR | thumbnails | Directory for generated thumbnails |
| MODEL_PATH | assets/models/mobilenetv3.onnx | Path to the ONNX model file |
| GALLERY_PASSWORD | (empty) | Set to enable password authentication. Leave empty for no auth |
| CORS_ORIGIN | (empty) | Set to allow cross-origin requests from a specific origin (e.g. https://example.com). Unset = same-origin only |
The Docker image sets DATABASE_PATH=/app/data/gallery.db, UPLOAD_DIR=/app/data/uploads, and THUMBNAIL_DIR=/app/data/thumbnails.
Deleting media is a soft delete. GalleryNet marks the item as trashed in the database and keeps the original file, thumbnail, and video frame sidecar in place until the item is permanently removed.
Trashed items are hidden from the active library, folders, favorites, shares, and direct /uploads or /thumbnails access. They remain available only through the Trash view and the dedicated trash API endpoints. Items are permanently deleted after 30 days, when you empty Trash, or when you delete individual items forever from Trash.
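
The 30-day rule above amounts to a simple predicate over each item's trash timestamp; a minimal sketch of that retention check (the function and field names are illustrative, not GalleryNet's actual schema):

```rust
/// Seconds in the 30-day trash retention window.
const RETENTION_SECS: i64 = 30 * 24 * 60 * 60;

/// A trashed item becomes eligible for permanent deletion once its
/// trash timestamp is at least the retention window in the past.
fn eligible_for_purge(trashed_at_unix: i64, now_unix: i64) -> bool {
    now_unix - trashed_at_unix >= RETENTION_SECS
}
```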
- Rust — Latest stable toolchain
- Node.js — v18+
- ffmpeg — Must be on PATH for video support
```bash
# Clone the repository
git clone https://github.com/srad/GalleryNet.git
cd GalleryNet

# Build the frontend
cd frontend && npm install && npm run build && cd ..

# Run the server
cargo run --release
```

The server starts on http://localhost:3000.
```bash
# Backend (auto-reload with cargo-watch)
cargo watch -x run

# Frontend (Vite dev server with HMR, proxies /api to :3000)
cd frontend && npm run dev
```

GalleryNet follows Hexagonal Architecture with a clean separation of concerns:
```
src/
├── domain/          # Models & trait ports (zero dependencies)
├── application/     # Use cases (upload, search, list, delete) and background tasks
├── infrastructure/  # SQLite, ONNX Runtime, perceptual hashing
├── presentation/    # Axum HTTP handlers & auth middleware
└── main.rs          # Wiring & server startup
```
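
In this layering the domain defines trait "ports" that the infrastructure layer implements, so use cases never touch SQLite or Axum directly. A toy sketch of the pattern — names like `MediaRepository` and `InMemoryRepo` are illustrative, not GalleryNet's actual traits:

```rust
use std::collections::HashMap;

// domain/: plain data plus a port the application layer depends on.
#[derive(Clone, Debug, PartialEq)]
struct MediaItem {
    id: u64,
    filename: String,
}

trait MediaRepository {
    fn insert(&mut self, item: MediaItem);
    fn find(&self, id: u64) -> Option<MediaItem>;
}

// infrastructure/: one adapter implementing the port. The real app
// would back this with SQLite; tests can use an in-memory version.
#[derive(Default)]
struct InMemoryRepo {
    items: HashMap<u64, MediaItem>,
}

impl MediaRepository for InMemoryRepo {
    fn insert(&mut self, item: MediaItem) {
        self.items.insert(item.id, item);
    }
    fn find(&self, id: u64) -> Option<MediaItem> {
        self.items.get(&id).cloned()
    }
}

// application/: a use case written against the port, not the adapter.
fn upload(repo: &mut impl MediaRepository, id: u64, filename: &str) -> MediaItem {
    let item = MediaItem { id, filename: filename.to_string() };
    repo.insert(item.clone());
    item
}
```

Because `upload` only sees the trait, swapping SQLite for the in-memory adapter in tests requires no changes to application code.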
| Layer | Technology |
|---|---|
| Backend | Rust, Axum, Tokio, WebSockets |
| Database | SQLite + sqlite-vec |
| AI/ML | ort (ONNX Runtime), MobileNetV3-Large, linfa-svm (tag learning) |
| Frontend | React 19, TypeScript, Tailwind CSS v4, Vite |
| Video | ffmpeg (frame extraction) |
| Hashing | image_hasher (perceptual hashing) |
| Method | Endpoint | Description |
|---|---|---|
| POST | /api/upload | Upload media (multipart). Returns MediaItem. 409 for duplicates |
| POST | /api/search | Visual similarity search. Multipart with file + similarity |
| POST | /api/media/group | Group media by visual similarity. Body: {"similarity": 80, "folder_id": "..."} (folder optional) |
| POST | /api/media/faces/group | Group all media by detected faces. Body: {"similarity": 60} |
| POST | /api/media/faces/search | Search for similar faces by face ID. Body: {"face_id": "...", "similarity": 60} |
| POST | /api/people | Create a named person. Body: {"name": "..."} |
| GET | /api/people | List all identified people with representative faces |
| GET | /api/people/stats | Face and person statistics (total faces, unassigned, etc.) |
| POST | /api/people/classify | Run the face auto-classification pipeline on currently unassigned faces |
| POST | /api/people/cleanup | Delete orphaned people records with no remaining faces |
| POST | /api/people/reset | Delete face data and trigger a full re-scan |
| POST | /api/people/scan/cancel | Cancel the running background face scan (also aborts clustering if in progress). Scan auto-resumes after 5 minutes. Returns 204 |
| GET | /api/people/{id} | Get person details |
| PUT | /api/people/{id} | Update person (name, hidden, representative face) |
| DELETE | /api/people/{id} | Delete person profile |
| POST | /api/people/{id}/merge | Merge person into another. Body: {"target_id": "..."} |
| POST | /api/people/{id}/faces/{face_id}/confirm | Promote an auto-clustered face to manually confirmed. Returns 204 |
| POST | /api/people/{id}/faces/confirm-all | Bulk-promote all auto-clustered exemplars for a person to confirmed. Returns { confirmed: N } |
| GET | /api/people/suggested-merges | Top-30 person pairs by centroid similarity above threshold. Query: ?threshold=60 |
| GET | /api/people/{id}/media | List all media containing this person |
| GET | /api/people/{id}/faces | List all face crops assigned to this person |
| POST | /api/people/{id}/faces/{face_id}/assign | Assign a face to this person. Returns 204 |
| POST | /api/people/{id}/faces/{face_id}/unassign | Unassign a face from this person. Returns 204 |
| DELETE | /api/people/{id}/faces | Unassign all faces from this person while keeping the person record |
| GET | /api/faces/unassigned | List unassigned face crops for review/setup workflows |
| GET | /api/media | Paginated media list. Params: page, limit, media_type, sort, sort_by, favorite, tags |
| GET | /api/media/{id} | Get single media item with EXIF data |
| POST | /api/media/{id}/favorite | Toggle favorite status. Body: {"favorite": true/false} |
| DELETE | /api/media/{id} | Move a single media item to Trash |
| POST | /api/media/batch-delete | Move media items to Trash. Body: ["uuid1", ...] |
| GET | /api/trash/media | Paginated trash listing |
| GET | /api/trash/media/{id} | Get a single trashed media item |
| GET | /api/trash/media/{id}/view | Stream a trashed media item for preview |
| GET | /api/trash/media/{id}/thumbnail | Stream a trashed thumbnail |
| POST | /api/trash/media/restore | Restore trashed media. Body: ["uuid1", ...] |
| POST | /api/trash/media/dispose | Permanently delete specific trashed media. Body: ["uuid1", ...] |
| POST | /api/trash/empty | Permanently delete all currently trashed media |
| POST | /api/media/fix-thumbnails | Trigger background repair of missing thumbnails/metadata |
| POST | /api/media/download/plan | Create download plan (partitions large sets into <2 GB parts). Body: ["uuid1", ...] |
| GET | /api/media/download/stream/{id} | Stream a specific download part incrementally |
| POST | /api/media/download | Simple batch download (if under 2 GB). Body: ["uuid1", ...] |
| GET | /api/tags | List all unique tags |
| GET | /api/tags/count | Count auto-tags in current view |
| POST | /api/tags/learn | Train model from manual tags. Body: {"tag_name": "..."} |
| POST | /api/tags/{id}/apply | Apply learned tag model to a scope (optional folder_id) |
| GET | /api/folders | List all folders with item counts |
| POST | /api/folders | Create folder. Body: {"name": "..."} |
| PUT | /api/folders/reorder | Reorder folders. Body: ordered folder IDs |
| PUT | /api/folders/{id} | Rename folder |
| DELETE | /api/folders/{id} | Delete folder (keeps media files) |
| GET | /api/folders/{id}/media | Paginated media in folder |
| POST | /api/folders/{id}/media | Add media to folder. Body: ["uuid1", ...] |
| POST | /api/folders/{id}/media/remove | Remove media from folder |
| GET | /api/shares | List shares (authenticated) |
| POST | /api/shares | Create share. Body includes share_type, optional folder_id/media_ids, optional expires_in_days, optional allow_download (default false) |
| PUT | /api/shares/{id} | Update share metadata (name, expires_in_days, allow_download) |
| DELETE | /api/shares/{id} | Revoke/delete share |
| GET | /api/share/{token} | Public share metadata + first page of media (no login required) |
| GET | /api/share/{token}/media | Public paginated share media (no login required) |
| GET | /api/share/{token}/media/{id}/view | Public full-size share preview endpoint (no login required) |
| POST | /api/share/{token}/download/plan | Public download plan for a share (only when allow_download=true) |
| GET | /api/folders/{id}/download | Get download plan for folder (auto-splits for large folders) |
| GET | /api/library/download | Get download plan for the entire library (auto-splits) |
| POST | /api/library/purge | Delete all files and data from the library |
| GET | /api/stats | Server statistics (counts, trash count, storage, disk space) |
| POST | /api/login | Authenticate. Body: {"password": "..."}. Creates unique server-tracked session |
| POST | /api/logout | Clear session. Invalidates the client's session on the server |
| GET | /api/ws | WebSocket for real-time library synchronization |
| GET | /api/auth-check | Check authentication status |
Share URL route: GET /share/{token} serves the public share frontend view without authentication.
Note: browser preview endpoints can still render media inline; allow_download=false mainly blocks direct original-file static routes and archive download planning.
Contributions are welcome! Feel free to open an issue or submit a pull request.
- Fork the repository
- Ideally create a ticket first to outline the change
- Create your feature branch (`git checkout -b feature/123-my-feature`) or bug-fix branch (`git checkout -b bug/123-my-feature`), replacing `123` with the ticket number
- Commit your changes (`git commit -m 'Add my feature'`)
- Add your tests and place them correctly (server code in `src/`, frontend code under `frontend/`)
- Push to the branch (`git push origin feature/123-my-feature`)
- Open a Pull Request
You may use an AI agent, but you must honor the AGENTS.md file. Be careful and aware that in mathematically heavy source code, agents have a tendency to silently modify constants or mathematical expressions.
GalleryNet is licensed under the PolyForm Noncommercial License 1.0.0, which means it is free for non-commercial use.




