Beta version! Use it at your own risk.
Core concept: AI agents creating and training AI agents for users' needs.
A low-code programming framework and GUI visual editor in which AI assistants help users solve business and engineering tasks: they compose specific workflows from units/pipelines and configure and run training via the GUI and chat conversation, minimizing hand-written code.
- Language-agnostic graph: the graph can carry units written in any language.
- Native runtime: Python-based graph execution.
- External runtimes (workflow conversion compatibility): Node-RED, PyFlow, ComfyUI, n8n, etc. You can drop in an external workflow as-is, modify it, and export it back. Use this external-runtime "roundtrip" feature for RL training.
- Offline local models (no external API is required)
- Sustainable memory and RAG knowledge base
- Workflow Designer to create/modify workflows, generate custom units (if allowed), and build integrations.
- RL Coach to train/fine-tune models.
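In spirit, the external-runtime roundtrip starts by mapping a foreign workflow onto a plain graph. A minimal sketch, converting a Node-RED-style flow (a JSON array of nodes with `id`, `type`, and `wires`) into nodes and edges — the function name and returned structure are illustrative, not the project's actual importer:

```python
import json

def nodered_to_graph(flow_json: str):
    """Convert a minimal Node-RED-style flow (a JSON array of nodes,
    each with 'id', 'type', and 'wires') into a plain node/edge graph.
    Hypothetical sketch; the real importer handles many more fields."""
    nodes, edges = {}, []
    for node in json.loads(flow_json):
        nodes[node["id"]] = node.get("type", "unknown")
        # 'wires' is a list of output ports, each a list of target node ids
        for port in node.get("wires", []):
            for target in port:
                edges.append((node["id"], target))
    return nodes, edges

flow = '''[
  {"id": "n1", "type": "inject", "wires": [["n2"]]},
  {"id": "n2", "type": "function", "wires": []}
]'''
nodes, edges = nodered_to_graph(flow)
print(nodes)  # {'n1': 'inject', 'n2': 'function'}
print(edges)  # [('n1', 'n2')]
```

Exporting back is the reverse mapping, which is what lets an RL-trained graph return to its original runtime.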
## 1. Install (from repo root)

```bash
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate
pip install -r requirements.txt
```

## 2. Open the Constructor GUI (Flet)
Desktop app: workflow graph (canvas), training config, run/test, and AI chat (Workflow Designer / RL Coach).
```bash
pip install -r gui/requirements.txt
python -m gui.main
```

- Workflow: Load or paste a process graph (Node-RED/PyFlow/n8n/YAML); edit it on the canvas; run the workflow, report, grep, and GitHub actions from chat.
- Training: Load/edit training config (goal, rewards, callbacks); run training or test a saved model.
- Chat: Talk to Workflow Designer (graph edits) or RL Coach (training config); edits are applied to graph or config.
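A training config of the kind the RL Coach edits could look something like the fragment below. The field names (`goal`, `rewards`, `callbacks`, `formula`) are illustrative assumptions, not the project's actual schema — see `config/examples/training_config.yaml` for the real format; the reward formula DSL is backed by asteval per the dependency table below.

```yaml
# Illustrative sketch only; field names are assumptions.
goal: "Hold tank temperature at the target setpoint"
rewards:
  - name: temperature_error
    formula: "-abs(target_temp - current_temp)"  # asteval-style reward DSL
    weight: 1.0
callbacks:
  - type: checkpoint
    every_steps: 10000
```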
## 3. Train from the command line (optional)

```bash
python runtime/train.py --config config/examples/training_config.yaml
```

Use `--process-config` for a custom process graph; use `--checkpoint` to resume. All behavior is driven by the config files the assistants (or you) produce.
## 4. Test a trained model

```bash
python scripts/test_model.py ./models/temperature-control-agent/best/best_model
```

For a visual tank demo with manual sliders (thermodynamic example):

```bash
python -m environments.custom.thermodynamics.water_tank_simulator --config config/examples/training_config.yaml --model ./models/temperature-control-agent/best/best_model
```

## Docker

You can run the app (and optionally the Ollama LLM server) in Docker. The image includes the full stack: main app, RAG, Flet GUI, and units (e.g. `web_search`). It works with both classic Docker builds (e.g. 2022-era releases) and newer BuildKit. If you hit `No space left on device` during the build, free disk space or set `TMPDIR` or `PIP_CACHE_DIR` to a directory on a larger drive before running `docker build`.
### Build and run with Docker Compose (app + Ollama)

From the repo root:

```bash
docker compose build
docker compose up
```

Then open the Flet GUI in your browser at http://localhost:8550. The app is configured to use the Ollama service automatically via `OLLAMA_HOST`.
Pull a model in Ollama (one-time):

```bash
docker compose exec ollama ollama pull llama3.2
```

Models are stored in a persistent volume (`ollama_data`).
### Build and run the app image only

```bash
docker build -t ai-taskvector .
docker run --rm -p 8550:8550 -e FLET_WEB=1 -e FLET_SERVER_PORT=8550 ai-taskvector
```

Open http://localhost:8550. If Ollama runs on your host, point the app at it with:

```bash
docker run --rm -p 8550:8550 -e OLLAMA_HOST=http://host.docker.internal:11434 ai-taskvector flet run gui/main.py --web -p 8550
```

### Environment variables
| Variable | Description |
|---|---|
| `OLLAMA_HOST` | Ollama server URL (default: `http://127.0.0.1:11434`). In Compose, set to `http://ollama:11434`. |
| `OLLAMA_MODEL` | Default model name (e.g. `llama3.2`) when not set in GUI settings. |
| `OLLAMA_API_KEY` | Optional; for Ollama Cloud. |
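As a sketch of how the `OLLAMA_HOST` default behaves, a client might resolve the endpoint like this (the function is illustrative, not the app's actual lookup code):

```python
import os

def resolve_ollama_host() -> str:
    # Fall back to the documented default when OLLAMA_HOST is unset.
    # Illustrative sketch; the app's real resolution logic may differ.
    return os.environ.get("OLLAMA_HOST", "http://127.0.0.1:11434")

os.environ.pop("OLLAMA_HOST", None)
print(resolve_ollama_host())  # http://127.0.0.1:11434

os.environ["OLLAMA_HOST"] = "http://ollama:11434"  # the Compose setting
print(resolve_ollama_host())  # http://ollama:11434
```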
### Files

- `Dockerfile` — Full install (main + RAG + Flet GUI + units); default command runs the Flet GUI.
- `docker-compose.yml` — App + Ollama service; Flet runs in web mode on port 8550.
Apply assistant edits (workflows, same units as in-app chat):

- Graph: `gui.components.workflow_tab.workflows.core_workflows.run_apply_edits`, then `run_normalize_graph` on the result (workflow `gui/components/workflow_tab/workflows/core_workflows/apply_edits_single.json`), or run that workflow via `runtime.run.run_workflow` with `initial_inputs` for `inject_graph`, `inject_edits`, `inject_origin`. See `scripts/test_assistants.py`.
- Training config: `gui.components.workflow_tab.workflows.core_workflows.run_apply_training_config_edits` (workflow `gui/components/workflow_tab/workflows/core_workflows/apply_training_config_edits_single.json`). Generic runner: `python -m runtime <workflow.json> --initial-inputs @inputs.json`; see `runtime/README.md`.
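For the generic runner, an `inputs.json` passed via `--initial-inputs @inputs.json` can be produced as below. The payload shapes under each `inject_*` key are illustrative assumptions; only the three key names come from the workflow's documented inputs.

```python
import json

# Sketch of preparing an inputs file for:
#   python -m runtime <workflow.json> --initial-inputs @inputs.json
# The inject_* key names match the workflow inputs named above;
# the value shapes here are illustrative, not the real schema.
initial_inputs = {
    "inject_graph": {"nodes": [], "edges": []},
    "inject_edits": [{"op": "add_node", "id": "n1", "type": "inject"}],
    "inject_origin": "chat",
}

with open("inputs.json", "w") as f:
    json.dump(initial_inputs, f, indent=2)

print(sorted(initial_inputs))  # ['inject_edits', 'inject_graph', 'inject_origin']
```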
Dependencies are split across the root `requirements.txt` (full stack), `gui/requirements.txt` (Flet UI on top of the base install), and optional extras. The table below maps areas of the product to the notable libraries declared there (and in `rag/requirements.txt` for RAG).
| Area | Notable libraries | Declared in |
|---|---|---|
| Core (schemas, configs, graphs) | Pydantic, PyYAML, NumPy, Pandas; scikit-learn, Matplotlib where analytics/plotting are used | requirements.txt |
| Runtime (workflows, units, servers) | FastAPI, Uvicorn (LLM inference / ASGI); Requests, websocket-client (HTTP/WS and external adapters) | requirements.txt |
| Training (RL) | PyTorch, Gymnasium, Stable-Baselines3 (with extras), TensorBoard; tqdm, rich (CLI progress/logging); asteval, rule-engine (reward formula DSL and rule evaluation) | requirements.txt |
| GUI | Flet, flet-code-editor (workflow/code views) | gui/requirements.txt (install after requirements.txt) |
| RAG | LlamaIndex (llama-index, Hugging Face embeddings, Chroma vector store), ChromaDB, sentence-transformers, Docling (PDF/DOC/XLS ingestion) | rag/requirements.txt (optional; `pip install -r rag/requirements.txt`) |
| Assistants / chat | ollama (Python client to a local Ollama server; install models with the Ollama app / CLI separately) | requirements.txt |
Optional unit extras (not in the root file):

- `units/web/requirements.txt` — DuckDuckGo search, BeautifulSoup4, html2text, minify-html.
- `units/semantics/requirements.txt` — overlaps with the root file (lingua-language-detector, markdown-it-py, Pygments); offline language detection and markdown rendering in units.
Install order (typical): `pip install -r requirements.txt`, then `pip install -r gui/requirements.txt`, then optionally `pip install -r rag/requirements.txt` and the unit extras above if you need those features.
MIT — use and modify for your projects.