Omi captures your screen and conversations, transcribes in real-time, generates summaries and action items, and gives you an AI chat that remembers everything you've seen and heard. Works on desktop, phone and wearables. Fully open source.
Trusted by 300,000+ professionals.
```
git clone https://github.com/BasedHardware/omi.git && cd omi/desktop && ./run.sh --yolo
```

Builds the macOS app, connects to the cloud backend, and launches. No env files, no credentials, no local backend.
Requirements: macOS 14+, Xcode (includes Swift & code signing), Node.js
For local development with the full backend stack:
```
# 1. Install prerequisites
xcode-select --install
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# 2. Clone and configure
git clone https://github.com/BasedHardware/omi.git
cd omi/desktop
cp Backend-Rust/.env.example Backend-Rust/.env

# 3. Build and run (starts Rust backend + auth + Cloudflare tunnel + Swift app)
./run.sh
```

See desktop/README.md for environment variables and credential setup.
```
cd app && bash setup.sh ios    # or: bash setup.sh android
```

```
┌─────────────────────────────────────────────────────────┐
│                      Your Devices                       │
│                                                         │
│  ┌──────────┐  ┌──────────────┐  ┌───────────────────┐  │
│  │   Omi    │  │  macOS App   │  │    Mobile App     │  │
│  │ Wearable │  │ (Swift/Rust) │  │     (Flutter)     │  │
│  └────┬─────┘  └──────┬───────┘  └────────┬──────────┘  │
│       │ BLE           │ HTTPS/WS          │             │
└───────┼───────────────┼───────────────────┼─────────────┘
        │               │                   │
        ▼               ▼                   ▼
┌─────────────────────────────────────────────────────────┐
│                  Omi Backend (Python)                   │
│                                                         │
│  ┌─────────┐  ┌──────────┐  ┌─────────┐  ┌──────────┐   │
│  │ Listen  │  │  Pusher  │  │   VAD   │  │ Diarizer │   │
│  │ (REST)  │  │   (WS)   │  │  (GPU)  │  │  (GPU)   │   │
│  └─────────┘  └──────────┘  └─────────┘  └──────────┘   │
│                                                         │
│  ┌─────────┐  ┌──────────┐  ┌─────────┐  ┌──────────┐   │
│  │ Deepgram│  │ Firestore│  │  Redis  │  │   LLMs   │   │
│  │  (STT)  │  │   (DB)   │  │ (Cache) │  │   (AI)   │   │
│  └─────────┘  └──────────┘  └─────────┘  └──────────┘   │
└─────────────────────────────────────────────────────────┘
```
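The diagram's streaming path (audio in → VAD → STT → diarization) can be sketched as a toy pipeline. Everything below is illustrative only: in the real stack these are separate processes (Deepgram for STT, GPU workers for VAD and diarization), and these function bodies are stand-ins, not Omi's code.

```python
# Toy sketch of the backend's processing order -- illustrative only.

def vad(chunks):
    """Voice activity detection: keep only chunks flagged as speech."""
    return [c for c in chunks if c["has_speech"]]

def transcribe(chunks):
    """Speech-to-text (Deepgram in the real stack): chunk -> segment."""
    return [{"text": c["text"], "speaker": None} for c in chunks]

def diarize(segments):
    """Diarization: attach a speaker label to each segment."""
    return [dict(s, speaker=f"SPEAKER_{i % 2}") for i, s in enumerate(segments)]

def pipeline(chunks):
    return diarize(transcribe(vad(chunks)))

audio = [
    {"has_speech": True, "text": "hello"},
    {"has_speech": False, "text": ""},
    {"has_speech": True, "text": "world"},
]
print(pipeline(audio))
# [{'text': 'hello', 'speaker': 'SPEAKER_0'}, {'text': 'world', 'speaker': 'SPEAKER_1'}]
```

The ordering is the point: VAD runs first so the expensive STT and diarization stages only see speech.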
| Component | Path | Stack |
|---|---|---|
| macOS app | desktop/ | Swift, SwiftUI, Rust backend |
| Mobile app | app/ | Flutter (iOS & Android) |
| Backend API | backend/ | Python, FastAPI, Firebase |
| Firmware | omi/ | nRF, Zephyr, C |
| Omi Glass | omiGlass/ | ESP32-S3, C |
| SDKs | sdks/ | React Native, Swift, Python |
| AI Personas | web/personas-open-source/ | Next.js |
- Download the Omi app and create a webhook at webhook.site
- In the app: Explore → Create an App → Select Capability → Paste Webhook URL → Install
- Start speaking — real-time transcript appears on webhook.site
See the full guide.
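The webhook receives JSON for each transcript segment. Below is a minimal sketch of the parsing step a receiver might do; the payload shape (a `segments` list of `{text, speaker, is_user}` objects) is an assumption here — inspect an actual payload on webhook.site before relying on any field names.

```python
import json

def format_transcript(raw: str) -> str:
    """Render an Omi-style webhook payload as readable lines.

    Assumes a payload shaped like {"segments": [{"text", "speaker",
    "is_user"}]} -- verify against a real payload on webhook.site.
    """
    payload = json.loads(raw)
    lines = []
    for seg in payload.get("segments", []):
        who = "You" if seg.get("is_user") else seg.get("speaker", "Speaker")
        lines.append(f"{who}: {seg['text']}")
    return "\n".join(lines)

raw = json.dumps({"segments": [
    {"text": "hello there", "speaker": "SPEAKER_1", "is_user": False},
    {"text": "hi!", "speaker": "SPEAKER_0", "is_user": True},
]})
print(format_transcript(raw))
# SPEAKER_1: hello there
# You: hi!
```

Wire this into any small HTTP handler if you want a local receiver instead of webhook.site.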
- API Reference — REST endpoints for memories, conversations, action items
- Python SDK
- Swift SDK
- React Native SDK
- MCP Server — Model Context Protocol integration
- App Development Guide
- Example Apps — GitHub, Slack, OmiMentor
- Audio Streaming Apps
- Custom Chat Tools
- Submit to App Store
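As a hedged sketch of calling the REST API from Python: the base URL, the `memories` path, and the Bearer auth scheme below are all assumptions for illustration — confirm each against the API Reference before use.

```python
from urllib.request import Request

# Assumption: base URL and version are illustrative, not from the docs.
API_BASE = "https://api.omi.me/v2"

def build_request(path: str, api_key: str) -> Request:
    """Build an authenticated GET request for an Omi endpoint.

    The Bearer scheme and path layout are assumptions; check the
    API Reference for the real auth header and routes.
    """
    return Request(
        f"{API_BASE}/{path.lstrip('/')}",
        headers={"Authorization": f"Bearer {api_key}"},
    )

req = build_request("memories", "sk-...")
print(req.full_url)  # https://api.omi.me/v2/memories
# urllib.request.urlopen(req) would then fetch the JSON response.
```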
Open-source AI wearables that pair with the mobile app for 24h+ continuous capture.
- Buy Omi Dev Kit — nRF, BLE, coin cell battery
- Buy Omi Glass Dev Kit — ESP32-S3, camera + audio
- Open Source Hardware Designs
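The wearables stream audio to the mobile app over BLE, which starts with device discovery. As a toy illustration of the filtering step only — the advertised name "Omi" and the dict shape are assumptions, and the SDKs above handle real pairing:

```python
def pick_omi_devices(scan_results):
    """Return BLE scan entries that look like Omi wearables.

    scan_results: list of {"name": str | None, "address": str} dicts,
    as you might build from a BLE scanner callback. Matching on the
    advertised name "Omi" is an assumption; use the official SDKs
    for actual device pairing.
    """
    return [d for d in scan_results
            if d.get("name") and "omi" in d["name"].lower()]

scan = [
    {"name": "Omi DevKit", "address": "AA:BB:CC:DD:EE:FF"},
    {"name": None, "address": "11:22:33:44:55:66"},
    {"name": "AirPods", "address": "77:88:99:AA:BB:CC"},
]
print(pick_omi_devices(scan))
# [{'name': 'Omi DevKit', 'address': 'AA:BB:CC:DD:EE:FF'}]
```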
MIT — see LICENSE


