
WildGecu 🦎

Wild by nature, safe by design.

An open-source, multi-provider AI agent framework written in Go.

Why "WildGecu"?

In Salento — the sun-scorched heel of Italy's boot — the gecko is everywhere. You'll find it clinging to the ancient dry stone walls (muretti a secco), perched on the warm tufa of baroque churches, navigating crumbling farmhouses at dusk. Locals call it gecu, and it has been a symbol of this land for centuries: resilient, adaptable, quietly useful.

The Moorish gecko (Tarentola mauritanica) — whose scientific name traces back to Taranto, the gateway to Salento — owes its remarkable abilities to a simple, elegant mechanism: millions of microscopic lamellae on its toe pads that exploit Van der Waals forces to grip any surface, at any angle, without glue or suction. It doesn't need permission to climb. It just holds on.

That's the idea behind WildGecu. An AI agent framework that attaches to any surface — Anthropic, OpenAI, Ollama, or whatever comes next — and doesn't let go. One that runs wild and free as open-source software, but is engineered to be safe, predictable, and secure by default. One that lives quietly in the background, like a gecko on a warm wall, doing its work without fuss.

Wild because it's free, open, and untamed by vendor lock-in. Gecu because every good project deserves a name that sounds like home.

What is WildGecu?

WildGecu is a modular AI agent framework in Go. It provides a reusable foundation for building autonomous agents with:

  • Multi-provider support β€” LLM-agnostic design behind a clean Provider interface (ships with Google Gemini, OpenAI, and Ollama)
  • Soul β€” persistent identity bootstrapped through a conversational interview, stored as Markdown
  • Memory β€” persistent context across sessions with automatic curation after each conversation
  • Skills β€” a plugin system with lazy-loaded Markdown-based definitions and YAML frontmatter
  • Cron jobs β€” an in-process scheduler with isolated sessions, powered by gocron
  • Parallel tool calling β€” concurrent execution of independent tool calls within the agent loop
  • Ephemeral subagents β€” delegate subtasks to child agents with isolated context, optional model override, and tool subsetting via the spawn_agent tool
  • Telegram bridge β€” daemon-based chat via Telegram bot
  • Self-update β€” the agent can update its own binary at runtime
  • Background daemon β€” long-running process with health checks, IPC socket, and system service support

No database required. File-based state. One binary. Your keys, your data, your gecko.

Demo

(asciicast demo recording)

How it works

WildGecu operates in three primary modes: Bootstrap, Chat, and Code.

wildgecu init:                          wildgecu chat / code:

┌─────────────┐                         ┌─────────────┐
│  No SOUL.md │                         │ Load SOUL.md│
└──────┬──────┘                         │ + MEMORY.md │
       │                                └──────┬──────┘
       ▼                                       │
┌─────────────────┐                     ┌─────────────────┐
│   Bootstrap TUI │                     │  Build system   │
│  (interview you)│                     │  prompt from    │
│                 │                     │  AGENT + SOUL   │
└──────┬──────────┘                     │  + USER + MEM   │
       │                                └──────┬──────────┘
       ▼                                       │
┌─────────────────┐              ┌─────────────┴─────────────┐
│  Agent calls    │              ▼                           ▼
│  write_soul     │      ┌─────────────────┐         ┌─────────────────┐
│  → .wildgecu/   │      │    Chat TUI     │         │    Code TUI     │
│    SOUL.md      │      │  (normal mode)  │         │  (working dir)  │
└──────┬──────────┘      └──────┬──────────┘         └──────┬──────────┘
       │                        │                           │
       ▼                        └─────────────┬─────────────┘
    Chat TUI                                  │
                                              ▼
                                     ┌─────────────────┐
                                     │   Agent loop    │
                                     │  (generate →    │
                                     │   tool calls →  │
                                     │   generate)     │
                                     └────────┬────────┘
                                              │
                              ┌───────────────┼───────────────┐
                              │   spawn_agent (optional)      │
                              ▼               ▼               ▼
                         ┌──────────┐    ┌──────────┐    ┌──────────┐
                         │ Subagent │    │ Subagent │    │ Subagent │
                         │ (model A)│    │ (model B)│    │ (model A)│
                         │ isolated │    │ isolated │    │ isolated │
                         │ context  │    │ context  │    │ context  │
                         └────┬─────┘    └────┬─────┘    └────┬─────┘
                              │               │               │
                              └───────────────┼───────────────┘
                                  text results│back to parent
                                              ▼
                                     ┌─────────────────┐
                                     │ Memory curation │
                                     │ → .wildgecu/    │
                                     │   MEMORY.md     │
                                     └─────────────────┘

Bootstrap mode (wildgecu init): The init command starts an interactive interview where the agent asks about your agent's name, purpose, personality, and expertise. The agent receives a system prompt (BOOTSTRAP.md) that guides the conversation. After a few exchanges, it calls the write_soul tool to persist its identity in .wildgecu/SOUL.md. If SOUL.md already exists, the command exits with an error — delete it first to re-initialize.

Chat mode (wildgecu chat): The default conversational mode. The system prompt is assembled from the base behavior (AGENT.md), the agent's identity (SOUL.md), and persistent memory (MEMORY.md).

Code mode (wildgecu code): A specialized mode focused on development. The agent uses a different system prompt (CODE_AGENT.md) and is equipped with file-system tools (read, write, list, update) and a bash environment scoped to the current working directory.

Memory curation: After each session, a dedicated memory agent reviews the conversation and updates MEMORY.md — extracting key patterns, preferences, and context while keeping it concise.

Prerequisites

  • A Go toolchain, to build from source with go build
  • An API key for your chosen provider (Gemini, OpenAI, Mistral, or Regolo; Ollama runs locally and needs none)

Getting started

1. Install

Clone and build the binary:

git clone https://github.com/ludusrusso/wildgecu.git
cd wildgecu
go build -o wildgecu .

Optionally move it to a directory on your $PATH:

mv wildgecu /usr/local/bin/

2. Run WildGecu

On first run, an interactive setup wizard guides you through provider selection, API key entry, and model choice:

wildgecu init

The wizard will:

  1. Ask you to choose a provider (Gemini, OpenAI, Ollama, Mistral, or Regolo)
  2. Prompt for your API key (if required by the provider) and validate it
  3. Let you pick a default model from a curated list or enter a custom one
  4. Write ~/.wildgecu/wildgecu.yaml and ~/.wildgecu/.env automatically

After setup, the init command continues into the bootstrap interview where the agent asks about its name, purpose, personality, and expertise, then persists its identity to .wildgecu/SOUL.md in your current working directory.

Tip: The setup wizard also triggers automatically on wildgecu start if no config exists yet.

3. Start chatting

wildgecu            # interactive chat (default command)
wildgecu code       # coding agent scoped to the current directory

Switch models at runtime with --model:

wildgecu --model openai/gpt-4o
wildgecu --model local              # uses the "local" alias → ollama/llama3

Verify your setup

After completing the steps above, your file tree should look like this:

~/.wildgecu/                    # global home
├── .env                        # API keys (created by setup wizard)
└── wildgecu.yaml               # config (created by setup wizard)

./.wildgecu/                    # project-local (in your working directory)
└── SOUL.md                     # agent identity (created by init)

If something is wrong, WildGecu will tell you — missing API keys, unknown providers, or invalid model references all produce clear error messages at startup.

Manual configuration (advanced)

If you prefer to skip the wizard and configure manually, create the files yourself before running any command.

.env file — store provider API keys:

cat > ~/.wildgecu/.env << 'EOF'
GEMINI_API_KEY=your-gemini-api-key
# OPENAI_API_KEY=your-openai-api-key
# MISTRAL_API_KEY=your-mistral-api-key
# REGOLO_API_KEY=your-regolo-api-key
# TELEGRAM_BOT_TOKEN=your-bot-token
EOF

Tip: Environment variables already set in your shell take priority over values in .env.

wildgecu.yaml — provider and model configuration:

Minimal setup (Gemini only)

providers:
  gemini:
    type: gemini
    api_key: env(GEMINI_API_KEY)

default_model: gemini/gemini-2.5-flash

Multi-provider setup

providers:
  gemini:
    type: gemini
    api_key: env(GEMINI_API_KEY)
    google_search: true

  openai:
    type: openai
    api_key: env(OPENAI_API_KEY)

  ollama:
    type: ollama # no API key needed, runs locally

models:
  fast: gemini/gemini-2.0-flash
  smart: gemini/gemini-2.5-pro
  local: ollama/llama3

default_model: gemini/gemini-2.5-flash

The env(VAR_NAME) syntax resolves values from your .env file or shell environment. If a referenced variable is missing, WildGecu exits with an error naming the unset variable.

CLI commands

WildGecu is a single binary. Chat is the default command; daemon management and specialized modes are available as subcommands.

# Bootstrap
wildgecu init         # create SOUL.md through an interactive interview

# Chat (default)
wildgecu              # interactive chat session
wildgecu chat         # same thing, explicit

# Code Mode
wildgecu code         # start a coding agent in the current directory

# Custom home directory
wildgecu --home /path/to/home start   # use a custom home instead of ~/.wildgecu
wildgecu --home /path/to/home chat    # all subcommands respect --home

# Daemon lifecycle
wildgecu start        # start the background daemon
wildgecu stop         # stop the daemon
wildgecu restart      # stop + start
wildgecu status       # show daemon status (pid, uptime, version)
wildgecu health       # exit 0 if daemon is healthy, 1 otherwise
wildgecu logs         # show last 50 log lines
wildgecu logs -f      # follow log output

# Cron jobs
wildgecu cron ls      # list all scheduled jobs
wildgecu cron add     # add a new cron job (interactive TUI)
wildgecu cron rm test # remove a cron job by name

# Skills
wildgecu skill ls     # list installed skills
wildgecu skill add    # add a new skill

# System service
wildgecu install      # install as a system service
wildgecu uninstall    # remove the system service

# Self-update
wildgecu update --url <binary-url>   # trigger a self-update

Build with a version tag:

go build -ldflags "-X wildgecu/cmd.Version=1.0.0" -o wildgecu .
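The -X flag injects a value into a package-level string variable at link time; the variable path (wildgecu/cmd.Version) must match the real declaration in the cmd package. A minimal standalone sketch of the same mechanism, using package main instead:

```go
// version.go — sketch of link-time version injection.
// Built plainly, Version stays "dev"; built with
//   go build -ldflags "-X main.Version=1.0.0" .
// the linker overwrites it before the program ever runs.
package main

import "fmt"

// Version is overridden by the -X linker flag at build time.
var Version = "dev"

func main() {
	fmt.Println("wildgecu version:", Version)
}
```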

Cron jobs

The daemon executes scheduled LLM prompts. Cron jobs are defined as markdown files with YAML frontmatter in ~/.wildgecu/crons/. Results are written to ~/.wildgecu/cron-results/.

Cron file format

---
name: daily-summary
cron: "0 9 * * *"
---

Summarize the key events from yesterday and suggest priorities for today.

The frontmatter requires name and cron (standard 5-field cron expression). Everything after the closing --- is the LLM prompt.
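The loading logic described above can be sketched in a few lines. This stdlib-only version (the real loader presumably uses a YAML parser) splits on the --- markers, scans the frontmatter for the two required keys, and treats everything after the closing marker as the prompt; the CronJob type and ParseCronFile name are illustrative, not WildGecu's internals:

```go
// Sketch of parsing a cron file: YAML frontmatter between "---"
// markers, LLM prompt after the closing marker.
package main

import (
	"fmt"
	"strings"
)

type CronJob struct {
	Name, Cron, Prompt string
}

func ParseCronFile(src string) (CronJob, error) {
	// SplitN keeps any further "---" inside the prompt body intact.
	parts := strings.SplitN(src, "---", 3)
	if len(parts) < 3 {
		return CronJob{}, fmt.Errorf("missing frontmatter delimiters")
	}
	job := CronJob{Prompt: strings.TrimSpace(parts[2])}
	for _, line := range strings.Split(parts[1], "\n") {
		key, val, ok := strings.Cut(line, ":")
		if !ok {
			continue
		}
		val = strings.Trim(strings.TrimSpace(val), `"`)
		switch strings.TrimSpace(key) {
		case "name":
			job.Name = val
		case "cron":
			job.Cron = val
		}
	}
	if job.Name == "" || job.Cron == "" {
		return CronJob{}, fmt.Errorf("frontmatter requires name and cron")
	}
	return job, nil
}

func main() {
	src := "---\nname: daily-summary\ncron: \"0 9 * * *\"\n---\n\nSummarize yesterday."
	fmt.Println(ParseCronFile(src))
}
```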

Skills

Skills are domain-specific knowledge files that the agent can load on demand. They are stored as markdown files with YAML frontmatter in ~/.wildgecu/skills/.

Skill file format

---
name: code-review
description: Guidelines for reviewing Go code
tags: [go, review]
---

When reviewing Go code, focus on...

The agent loads skills dynamically via the load_skill tool during conversation.

Ephemeral subagents

The agent can delegate subtasks to ephemeral subagents via the spawn_agent tool. A subagent is a short-lived child agent that runs in isolation — it receives a prompt, executes a full agent loop (generate → tool calls → generate), and returns a single text result to the parent. No SOUL, no MEMORY, no finalization. It lives within the parent session and is discarded when done.

Why subagents?

  • Cheaper models for simple work β€” delegate straightforward tasks (summarization, formatting, lookups) to a faster/cheaper model while keeping the expensive model for complex reasoning.
  • Focused context β€” a subagent gets a clean context window with an optional custom system prompt, so intermediate steps don't clutter the parent's conversation.
  • Parallel research β€” the parent can spawn multiple subagents simultaneously as concurrent tool calls, gathering information from different angles and synthesizing results.
  • Tool restriction β€” give a research subagent read-only tools, or give a coding subagent file-write access while restricting everything else.

spawn_agent tool

The tool is available in both chat and code modes. Parameters:

Parameter      Required  Description
prompt         Yes       The user message to send to the child agent
system_prompt  No        Custom system prompt; if omitted, a minimal default is used
model          No        Provider/model reference (e.g. gemini/gemini-2.0-flash); if omitted, inherits the parent's model
tools          No        List of tool names the child can use; if omitted, inherits all parent tools except spawn_agent

Recursion prevention: subagents cannot spawn further subagents. The spawn_agent tool is excluded from every child agent's tool set.
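The subsetting and recursion-prevention rules combine into one small step when building a child's tool set: take the requested subset (or the full parent set), and always drop spawn_agent. The function below is an illustrative sketch, not WildGecu's actual code:

```go
// Sketch of child tool-set construction with recursion prevention.
package main

import "fmt"

// childTools returns the tool names a subagent may use: the explicit
// allow-list if given, otherwise the parent's tools — minus spawn_agent
// in every case, so subagents can never spawn further subagents.
func childTools(parent, requested []string) []string {
	allowed := parent
	if len(requested) > 0 {
		allowed = requested
	}
	out := make([]string, 0, len(allowed))
	for _, name := range allowed {
		if name == "spawn_agent" {
			continue // always excluded from child agents
		}
		out = append(out, name)
	}
	return out
}

func main() {
	parent := []string{"bash", "read_file", "spawn_agent"}
	fmt.Println(childTools(parent, nil))              // inherits all but spawn_agent
	fmt.Println(childTools(parent, []string{"bash"})) // explicit subset
}
```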

Example usage (from the agent's perspective)

The agent decides to delegate autonomously — the user doesn't need to manage subagents directly:

# Agent spawns a focused researcher with a cheaper model
spawn_agent(
  prompt: "List all exported functions in pkg/provider/tool/registry.go",
  model: "gemini/gemini-2.0-flash",
  tools: ["bash", "read_file", "list_files"]
)

# Agent spawns multiple subagents in parallel for research
spawn_agent(prompt: "Summarize the README.md", model: "gemini/gemini-2.0-flash")
spawn_agent(prompt: "List all TODO comments in the codebase", tools: ["bash"])

Configuration

WildGecu uses a unified home directory at ~/.wildgecu/ for all global state. Override it with --home:

wildgecu --home /path/to/custom/home start

This allows running multiple independent instances, each with its own config, socket, crons, and skills. The flag accepts absolute paths, relative paths, and ~/... tilde expansion.

Global files (~/.wildgecu/)

File / Directory   Purpose
wildgecu.yaml      Configuration (provider, API keys, model) — created on first run
.env               Optional environment variables loaded at startup
wildgecu.pid       Daemon PID file
wildgecu.sock      Daemon Unix domain socket
wildgecu.log       Daemon log file (JSON)
crons/             Cron job definitions (markdown + YAML frontmatter)
cron-results/      Output from executed cron jobs
skills/            Domain-specific knowledge files

Project files (.wildgecu/ in working directory)

File       Purpose
SOUL.md    Agent identity — created during bootstrap
MEMORY.md  Persistent context — curated after each session
USER.md    Optional user preferences — create manually

Delete SOUL.md and run wildgecu init again to give your agent a new identity.

Config file (wildgecu.yaml)

On first run, WildGecu creates a default ~/.wildgecu/wildgecu.yaml. Here is a full example showing all available options:

providers:
  gemini:
    type: gemini
    api_key: env(GEMINI_API_KEY)
    google_search: true # enable Gemini's Google Search grounding

  openai:
    type: openai
    api_key: env(OPENAI_API_KEY)

  ollama:
    type: ollama # base_url defaults to http://localhost:11434/v1

  mistral:
    type: mistral # base_url defaults to https://api.mistral.ai/v1
    api_key: env(MISTRAL_API_KEY)

  regolo:
    type: regolo # base_url defaults to https://api.regolo.ai/v1
    api_key: env(REGOLO_API_KEY)

  custom:
    type: openai
    api_key: env(CUSTOM_API_KEY)
    base_url: "https://my-provider.example.com/v1" # any OpenAI-compatible endpoint

models:
  fast: gemini/gemini-2.0-flash
  smart: gemini/gemini-2.5-pro
  local: ollama/llama3

default_model: gemini/gemini-2.5-flash # or use an alias: "fast"

telegram_token: env(TELEGRAM_BOT_TOKEN) # optional, for the Telegram bridge

Key concepts:

  • providers β€” a named map of LLM providers. Each entry requires a type field (gemini, openai, ollama, mistral, regolo). The name you give a provider is how you reference it elsewhere (e.g. gemini/gemini-2.5-flash means the provider named gemini, model gemini-2.5-flash).
  • models β€” optional aliases for provider/model pairs. Alias names must not contain /.
  • default_model β€” required. Can be a direct provider/model reference or an alias name.
  • telegram_token β€” optional. Token for the Telegram bot bridge.

env() syntax

Config values can reference environment variables using the env(VAR_NAME) syntax:

api_key: env(GEMINI_API_KEY) # resolved from the GEMINI_API_KEY env var
base_url: env(CUSTOM_URL) # works for base_url too
telegram_token: env(TG_TOKEN) # and for telegram_token
default_model: env(DEFAULT_MODEL) # and default_model

If the referenced variable is not set, WildGecu exits with an error naming the missing variable. This syntax works for api_key, base_url, telegram_token, and default_model fields.

.env file

WildGecu automatically loads a .env file from the home directory (~/.wildgecu/.env) at startup. This is a convenient alternative to exporting variables in your shell profile:

# ~/.wildgecu/.env
GEMINI_API_KEY=your-gemini-key
OPENAI_API_KEY=your-openai-key
TELEGRAM_BOT_TOKEN=your-bot-token

Precedence: environment variables already set in your shell take priority over values in the .env file. If the file does not exist, it is silently ignored.
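The env() resolution and precedence rules combine into one lookup: shell environment first, then the loaded .env values, then a hard error. A stdlib-only sketch (the .env file is modeled as a plain map; names are illustrative):

```go
// Sketch of env(VAR_NAME) resolution with shell-over-dotenv precedence.
package main

import (
	"fmt"
	"os"
	"regexp"
)

var envRef = regexp.MustCompile(`^env\(([A-Za-z_][A-Za-z0-9_]*)\)$`)

func resolveValue(raw string, dotenv map[string]string) (string, error) {
	m := envRef.FindStringSubmatch(raw)
	if m == nil {
		return raw, nil // not an env() reference: literal value, use as-is
	}
	name := m[1]
	if v, ok := os.LookupEnv(name); ok {
		return v, nil // shell environment takes priority
	}
	if v, ok := dotenv[name]; ok {
		return v, nil // fall back to ~/.wildgecu/.env
	}
	return "", fmt.Errorf("environment variable %s is not set", name)
}

func main() {
	dotenv := map[string]string{"GEMINI_API_KEY": "from-dotenv"}
	fmt.Println(resolveValue("env(GEMINI_API_KEY)", dotenv))
	fmt.Println(resolveValue("gemini/gemini-2.5-flash", dotenv)) // literal passes through
}
```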

Provider defaults

Some provider types have built-in default base URLs, so you don't need to specify base_url for them:

Type     Default base_url
ollama   http://localhost:11434/v1
mistral  https://api.mistral.ai/v1
regolo   https://api.regolo.ai/v1

You can always override these by setting base_url explicitly. The gemini and openai types use their respective SDK defaults and don't need a base URL.
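The defaults table above amounts to a small lookup with an explicit-override escape hatch; a sketch of how such a resolver might look (not the factory's actual code):

```go
// Sketch of provider base-URL defaulting with explicit override.
package main

import "fmt"

func baseURL(providerType, override string) string {
	if override != "" {
		return override // an explicit base_url always wins
	}
	switch providerType {
	case "ollama":
		return "http://localhost:11434/v1"
	case "mistral":
		return "https://api.mistral.ai/v1"
	case "regolo":
		return "https://api.regolo.ai/v1"
	}
	return "" // gemini and openai fall back to their SDK defaults
}

func main() {
	fmt.Println(baseURL("ollama", ""))
	fmt.Println(baseURL("ollama", "http://gpu-box:11434/v1")) // override wins
}
```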

Config file search order

The config file is loaded from ~/.wildgecu/wildgecu.yaml. You can also override the model at runtime:

wildgecu --model openai/gpt-4o    # override with a provider/model reference
wildgecu --model fast             # or use a model alias

Architecture

wildgecu.go                  # Entry point → cmd.Execute()
│
├── cmd/                     # CLI layer (Cobra)
│
├── pkg/                     # Core domain packages
│   ├── agent/               # Agent orchestration (Prepare, Finalize, bootstrap, memory, prompts)
│   │   └── tools/           # Tool suites (general, exec, files, skills, subagent)
│   ├── provider/            # LLM provider abstraction
│   │   ├── tool/            # Type-safe tool framework (Tool, Registry, schema generation)
│   │   ├── factory/         # Provider factory
│   │   ├── gemini/          # Google Gemini implementation
│   │   └── openai/          # OpenAI / Ollama implementation
│   ├── session/             # Conversation management (RunTurn, RunTurnStream)
│   ├── chat/                # Chat frontends (tui/, telegram/)
│   ├── cron/                # Cron scheduling and execution
│   ├── skill/               # Skills system (parse, load)
│   └── daemon/              # Background daemon (socket, sessions, watchdog, updater, service)
│
└── x/                       # General-purpose utilities (config, home, context, debug)

Key design decisions

  • Single binary β€” All commands (chat, daemon, cron, skills, service) are subcommands of one wildgecu binary.
  • pkg/ and x/ layout β€” Core domain packages live under pkg/, general-purpose utilities with no domain knowledge live under x/.
  • Unified home (~/.wildgecu/) β€” Config, PID, socket, logs, crons, and skills all live under one directory, managed by x/config. Overridable via --home for running multiple isolated instances.
  • x/config package β€” Zero-dependency (stdlib only) shared package that all other packages import for path resolution.
  • Project-local .wildgecu/ β€” Per-project identity files (SOUL.md, MEMORY.md, USER.md) stay in the working directory, separate from global daemon state.
  • Home abstraction β€” File operations are abstracted behind an interface (FSHome for disk, MemHome for tests), keeping the agent logic testable.
  • Parallel tool calling β€” Independent tool calls within a single agent turn are executed concurrently for lower latency.
  • Ephemeral subagents β€” The spawn_agent tool lets the agent delegate subtasks to isolated child agents with optional model override and tool subsetting. Subagents are stateless and cannot spawn further subagents.

Providers

WildGecu ships with two provider implementations that cover multiple services:

Provider       Type     Package                       Streaming  Tool Calling     API Key Required
Google Gemini  gemini   pkg/provider/gemini           Yes        Yes              Yes
OpenAI         openai   pkg/provider/openai           Yes        Yes              Yes
Ollama         ollama   pkg/provider/openai (shared)  Yes        Model-dependent  No
Mistral        mistral  pkg/provider/openai (shared)  Yes        Yes              Yes
Regolo         regolo   pkg/provider/openai (shared)  Yes        Yes              Yes

Ollama, Mistral, and Regolo use the OpenAI-compatible implementation with their respective default base URLs. Any OpenAI-compatible endpoint can be used by setting type: openai with a custom base_url.

Adding a new provider

  1. Implement the provider.Provider interface:

     type Provider interface {
         Generate(ctx context.Context, params *GenerateParams) (*Response, error)
     }

  2. For streaming support, also implement StreamProvider:

     type StreamProvider interface {
         Provider
         GenerateStream(ctx context.Context, params *GenerateParams) (<-chan StreamChunk, <-chan error)
     }

  3. Register it in the factory at pkg/provider/factory/factory.go.

License

Apache 2.0 — see LICENSE for details.
