A professional AI-powered image editor that transforms raster images into editable vector layers using advanced segmentation technology.
## Features

- AI-Powered Segmentation: Automatic object detection and segmentation using SAM2
- Vector Layer Editing: Convert segmented objects into editable SVG paths
- Professional UI: Figma/Photoshop-quality interface with modern design
- Real-time Editing: Move, resize, recolor, and transform individual objects
- Export Options: Export as SVG or PSD for further editing
- Drag & Drop: Intuitive file upload with drag-and-drop support
### Frontend

- React 18 with TypeScript
- Fabric.js for canvas manipulation and vector editing
- TailwindCSS with custom design system
- Framer Motion for smooth animations
- shadcn/ui component library
### Backend

- FastAPI for high-performance API
- SAM2 for AI-powered image segmentation
- OpenCV for image processing
- PIL/Pillow for image manipulation
- SVG/PSD export capabilities
## Tech Stack

### Frontend

- React 18 + TypeScript
- Vite (build tool)
- TailwindCSS + shadcn/ui
- Fabric.js (canvas library)
- Framer Motion (animations)
- React Query (state management)
### Backend

- Python 3.11+
- FastAPI (web framework)
- SAM2 (segmentation model)
- OpenCV (computer vision)
- PIL/Pillow (image processing)
- Docker (containerization)
## Getting Started

### Prerequisites

- Node.js 18+
- Python 3.11+
- Git
1. Clone the repository:

   ```bash
   git clone <repository-url>
   cd maskmaker-studio-main
   ```
2. Start the development environment:

   Windows:

   ```bash
   start-dev.bat
   ```

   Linux/Mac:

   ```bash
   chmod +x start-dev.sh
   ./start-dev.sh
   ```
3. Access the application:
- Frontend: http://localhost:8080
- Backend API: http://localhost:8000
- API Docs: http://localhost:8000/docs
### Manual Setup

Frontend:

```bash
npm install
npm run dev
```

Backend:

```bash
cd backend
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate
pip install -r requirements.txt
python main.py
```

## Project Structure

```
maskmaker-studio-main/
├── src/                        # Frontend source code
│   ├── components/
│   │   ├── editor/             # Editor components
│   │   │   ├── Canvas.tsx              # Fabric.js canvas
│   │   │   ├── LayersSidebar.tsx       # Layer management
│   │   │   ├── PropertiesSidebar.tsx   # Properties panel
│   │   │   ├── Toolbar.tsx             # Top toolbar
│   │   │   └── UploadArea.tsx          # File upload
│   │   ├── ui/                 # shadcn/ui components
│   │   └── ImageEditor.tsx     # Main editor component
│   ├── pages/                  # Page components
│   └── types/                  # TypeScript definitions
├── backend/                    # Backend API
│   ├── api/                    # API endpoints
│   │   ├── upload.py           # Upload endpoints
│   │   ├── segments.py         # Segmentation endpoints
│   │   └── export.py           # Export endpoints
│   ├── core/                   # Core modules
│   │   ├── config.py           # Configuration
│   │   ├── schemas.py          # Pydantic models
│   │   └── sam2_processor.py   # SAM2 integration
│   ├── main.py                 # FastAPI app
│   └── requirements.txt        # Python dependencies
├── docker-compose.yml          # Docker setup
└── README.md                   # This file
```
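`backend/core/schemas.py` holds the Pydantic models exchanged between the API and the editor. As an illustration of the kind of record a segmented layer might map to, here is a sketch using a plain standard-library dataclass — the field names and defaults are assumptions, not the project's actual schema:

```python
from dataclasses import dataclass

@dataclass
class SegmentLayer:
    """One editable layer produced by segmentation (illustrative fields only)."""
    id: str
    label: str
    svg_path: str            # outline of the segment as SVG path data
    fill: str = "#000000"    # current fill color
    opacity: float = 1.0     # 0.0 (transparent) .. 1.0 (opaque)
    visible: bool = True

# Example: a hypothetical foreground layer with default styling.
layer = SegmentLayer(id="seg-1", label="foreground", svg_path="M0,0 L10,0 L10,10 Z")
```

The real models would be Pydantic classes so FastAPI can validate request bodies and generate the OpenAPI docs automatically.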
## API Endpoints

### Upload
- `POST /api/upload` - Upload an image file
- `GET /api/upload/{file_id}/info` - Get upload info

### Segments
- `GET /api/segments/{image_id}` - Get AI segments
- `GET /api/segments/{image_id}/status` - Check segmentation status

### Export
- `POST /api/export` - Export as SVG/PSD
- `GET /api/export/{export_id}/download` - Download the exported file
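Any HTTP client can drive these endpoints. As a minimal standard-library sketch, the helper below builds (but does not send) an export request; the JSON field names are assumptions — the authoritative request schema is in the interactive docs at `/docs`:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"

def build_export_request(image_id: str, fmt: str = "svg") -> urllib.request.Request:
    """Build a POST request for the export endpoint.

    The payload keys ("image_id", "format") are illustrative guesses;
    consult /docs for the real Pydantic schema.
    """
    payload = json.dumps({"image_id": image_id, "format": fmt}).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/api/export",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending would be: urllib.request.urlopen(req) with the backend running.
req = build_export_request("abc123", fmt="psd")
```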
## Usage

1. Upload Image: Drag and drop or select a PNG/JPEG image
2. AI Segmentation: Click "Generate AI Segments" to detect objects
3. Edit Layers: Select layers to edit colors, opacity, and transforms
4. Export: Choose SVG or PSD format for download
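On the export side, recoloring a layer in the resulting SVG can be done with any XML tooling. A minimal standard-library sketch — the `id` attribute value and function name are hypothetical, for illustration only:

```python
import xml.etree.ElementTree as ET

def recolor_layer(svg_text: str, layer_id: str, new_fill: str) -> str:
    """Return the SVG with the fill of the element matching layer_id changed."""
    root = ET.fromstring(svg_text)
    for elem in root.iter():
        if elem.get("id") == layer_id:
            elem.set("fill", new_fill)
    return ET.tostring(root, encoding="unicode")

svg = ('<svg xmlns="http://www.w3.org/2000/svg">'
       '<path id="seg-1" d="M0,0 L10,0 L10,10 Z" fill="#ff0000"/></svg>')
out = recolor_layer(svg, "seg-1", "#00ff00")
```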
## Docker

Run with Docker Compose:

```bash
docker-compose up
```

## Roadmap

- Real SAM2 Integration: Replace the mock segmentation with actual SAM2 inference
- Advanced Editing: Add more transform tools and filters
- Cloud Storage: Integrate with AWS S3 or similar
- User Accounts: Add authentication and project saving
- Batch Processing: Process multiple images at once
- AI Inpainting: Fill missing areas with AI-generated content
## Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests if applicable
5. Submit a pull request
## License

This project is licensed under the MIT License.
## Acknowledgments

- Meta AI for SAM2 (Segment Anything Model 2)
- Fabric.js for canvas manipulation
- shadcn/ui for beautiful components
- FastAPI for the excellent web framework