Wild by nature, safe by design.
An open-source, multi-provider AI agent framework written in Go.
In Salento — the sun-scorched heel of Italy's boot — the gecko is everywhere. You'll find it clinging to the ancient dry stone walls (muretti a secco), perched on the warm tufa of baroque churches, navigating crumbling farmhouses at dusk. Locals call it gecu, and it has been a symbol of this land for centuries: resilient, adaptable, quietly useful.
The Moorish gecko (Tarentola mauritanica) — whose scientific name traces back to Taranto, the gateway to Salento — owes its remarkable abilities to a simple, elegant mechanism: millions of microscopic lamellae on its toe pads that exploit van der Waals forces to grip any surface, at any angle, without glue or suction. It doesn't need permission to climb. It just holds on.
That's the idea behind WildGecu. An AI agent framework that attaches to any surface — Anthropic, OpenAI, Ollama, or whatever comes next — and doesn't let go. One that runs wild and free as open-source software, but is engineered to be safe, predictable, and secure by default. One that lives quietly in the background, like a gecko on a warm wall, doing its work without fuss.
Wild because it's free, open, and untamed by vendor lock-in. Gecu because every good project deserves a name that sounds like home.
WildGecu is a modular AI agent framework in Go. It provides a reusable foundation for building autonomous agents with:
- Multi-provider support — LLM-agnostic design behind a clean `Provider` interface (ships with Google Gemini, OpenAI, Ollama, Mistral, and Regolo)
- Soul — persistent identity bootstrapped through a conversational interview, stored as Markdown
- Memory — persistent context across sessions with automatic curation after each conversation
- Skills — a plugin system with lazy-loaded Markdown-based definitions and YAML frontmatter
- Cron jobs — an in-process scheduler with isolated sessions, powered by gocron
- Parallel tool calling — concurrent execution of independent tool calls within the agent loop
- Ephemeral subagents — delegate subtasks to child agents with isolated context, optional model override, and tool subsetting via the `spawn_agent` tool
- Telegram bridge — daemon-based chat via Telegram bot
- Self-update — the agent can update its own binary at runtime
- Background daemon — long-running process with health checks, IPC socket, and system service support
No database required. File-based state. One binary. Your keys, your data, your gecko.
WildGecu operates in three primary modes: Bootstrap, Chat, and Code.
```
wildgecu init:                      wildgecu chat / code:

┌──────────────┐                    ┌──────────────┐
│ No SOUL.md   │                    │ Load SOUL.md │
└──────┬───────┘                    │ + MEMORY.md  │
       │                            └──────┬───────┘
       ▼                                   │
┌──────────────────┐                       ▼
│ Bootstrap TUI    │                ┌──────────────────┐
│ (interviews you) │                │ Build system     │
└──────┬───────────┘                │ prompt from      │
       │                            │ AGENT + SOUL     │
       ▼                            │ + USER + MEM     │
┌──────────────────┐                └──────┬───────────┘
│ Agent calls      │                       │
│ write_soul       │           ┌───────────┴───────────┐
│ → .wildgecu/     │           ▼                       ▼
│   SOUL.md        │  ┌─────────────────┐     ┌─────────────────┐
└──────┬───────────┘  │ Chat TUI        │     │ Code TUI        │
       │              │ (normal mode)   │     │ (working dir)   │
       ▼              └──────┬──────────┘     └──────┬──────────┘
   Chat TUI                  │                       │
                             └───────────┬───────────┘
                                         │
                                         ▼
                              ┌──────────────────┐
                              │ Agent loop       │
                              │ (generate →      │
                              │  tool calls →    │
                              │  generate)       │
                              └──────┬───────────┘
                                     │
                      ┌──────────────┼──────────────┐
                      │   spawn_agent (optional)    │
                      ▼              ▼              ▼
                ┌──────────┐   ┌──────────┐   ┌──────────┐
                │ Subagent │   │ Subagent │   │ Subagent │
                │ (model A)│   │ (model B)│   │ (model A)│
                │ isolated │   │ isolated │   │ isolated │
                │ context  │   │ context  │   │ context  │
                └────┬─────┘   └────┬─────┘   └────┬─────┘
                     │              │              │
                     └──────────────┼──────────────┘
                       text results │ back to parent
                                    ▼
                          ┌──────────────────┐
                          │ Memory curation  │
                          │ → .wildgecu/     │
                          │   MEMORY.md      │
                          └──────────────────┘
```
Bootstrap mode (wildgecu init): The init command starts an interactive interview where the agent asks about your agent's name, purpose, personality, and expertise. The agent receives a system prompt (BOOTSTRAP.md) that guides the conversation. After a few exchanges, it calls the write_soul tool to persist its identity in .wildgecu/SOUL.md. If SOUL.md already exists, the command exits with an error — delete it first to re-initialize.
Chat mode (wildgecu chat): The default conversational mode. The system prompt is assembled from the base behavior (AGENT.md), the agent's identity (SOUL.md), and persistent memory (MEMORY.md).
Code mode (wildgecu code): A specialized mode focused on development. The agent uses a different system prompt (CODE_AGENT.md) and is equipped with file-system tools (read, write, list, update) and a bash environment scoped to the current working directory.
Memory curation: After each session, a dedicated memory agent reviews the conversation and updates MEMORY.md — extracting key patterns, preferences, and context while keeping it concise.
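In every mode, the conversation is driven by the same generate → tool calls → generate cycle. A stripped-down sketch of that control flow (the `runTurn`, `generate`, and `toolCall` names here are invented for illustration, not WildGecu's actual `pkg/session` API, and the LLM is a stub):

```go
package main

import "fmt"

// toolCall is a hypothetical stand-in for a provider's tool-call response.
type toolCall struct{ name, args string }

// response models one LLM turn: either final text or a batch of tool calls.
type response struct {
	text  string
	calls []toolCall
}

// generate is a stub LLM; a real implementation would call a Provider.
func generate(history []string) response {
	if len(history) == 1 {
		return response{calls: []toolCall{{name: "read_file", args: "README.md"}}}
	}
	return response{text: "done"}
}

// runTurn loops generate -> tool calls -> generate until the model
// answers with plain text instead of requesting more tools.
func runTurn(userMsg string) string {
	history := []string{userMsg}
	for {
		resp := generate(history)
		if len(resp.calls) == 0 {
			return resp.text
		}
		for _, c := range resp.calls {
			// execute the tool and feed the result back into the history
			history = append(history, fmt.Sprintf("tool %s(%s) -> ok", c.name, c.args))
		}
	}
}

func main() {
	fmt.Println(runTurn("summarize the repo"))
}
```

The loop terminates when a turn produces no tool calls; memory curation then runs once, after the session ends.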
- Go 1.26+
- At least one LLM provider:
- Google Gemini API key
- OpenAI API key
- Ollama running locally (no API key needed)
- Mistral API key
- Regolo API key
Clone and build the binary:
```
git clone https://github.com/ludusrusso/wildgecu.git
cd wildgecu
go build -o wildgecu .
```

Optionally move it to a directory on your $PATH:

```
mv wildgecu /usr/local/bin/
```

On first run, an interactive setup wizard guides you through provider selection, API key entry, and model choice:

```
wildgecu init
```

The wizard will:
- Ask you to choose a provider (Gemini, OpenAI, Ollama, Mistral, or Regolo)
- Prompt for your API key (if required by the provider) and validate it
- Let you pick a default model from a curated list or enter a custom one
- Write `~/.wildgecu/wildgecu.yaml` and `~/.wildgecu/.env` automatically
After setup, the init command continues into the bootstrap interview where the agent asks about its name, purpose, personality, and expertise, then persists its identity to .wildgecu/SOUL.md in your current working directory.
Tip: The setup wizard also triggers automatically on `wildgecu start` if no config exists yet.
```
wildgecu        # interactive chat (default command)
wildgecu code   # coding agent scoped to the current directory
```

Switch models at runtime with --model:

```
wildgecu --model openai/gpt-4o
wildgecu --model local           # uses the "local" alias → ollama/llama3
```

After completing the steps above, your file tree should look like this:
```
~/.wildgecu/            # global home
├── .env                # API keys (created by setup wizard)
└── wildgecu.yaml       # config (created by setup wizard)

./.wildgecu/            # project-local (in your working directory)
└── SOUL.md             # agent identity (created by init)
```
If something is wrong, WildGecu will tell you — missing API keys, unknown providers, or invalid model references all produce clear error messages at startup.
If you prefer to skip the wizard and configure manually, create the files yourself before running any command.
.env file — store provider API keys:

```
cat > ~/.wildgecu/.env << 'EOF'
GEMINI_API_KEY=your-gemini-api-key
# OPENAI_API_KEY=your-openai-api-key
# MISTRAL_API_KEY=your-mistral-api-key
# REGOLO_API_KEY=your-regolo-api-key
# TELEGRAM_BOT_TOKEN=your-bot-token
EOF
```

Tip: Environment variables already set in your shell take priority over values in `.env`.
wildgecu.yaml — provider and model configuration:

```yaml
providers:
  gemini:
    type: gemini
    api_key: env(GEMINI_API_KEY)
default_model: gemini/gemini-2.5-flash
```

A more complete configuration with multiple providers and model aliases:

```yaml
providers:
  gemini:
    type: gemini
    api_key: env(GEMINI_API_KEY)
    google_search: true
  openai:
    type: openai
    api_key: env(OPENAI_API_KEY)
  ollama:
    type: ollama   # no API key needed, runs locally
models:
  fast: gemini/gemini-2.0-flash
  smart: gemini/gemini-2.5-pro
  local: ollama/llama3
default_model: gemini/gemini-2.5-flash
```

The `env(VAR_NAME)` syntax resolves values from your `.env` file or shell environment. If a referenced variable is missing, WildGecu exits with an error naming the unset variable.
WildGecu is a single binary. Chat is the default command; daemon management and specialized modes are available as subcommands.
```
# Bootstrap
wildgecu init                       # create SOUL.md through an interactive interview

# Chat (default)
wildgecu                            # interactive chat session
wildgecu chat                       # same thing, explicit

# Code Mode
wildgecu code                       # start a coding agent in the current directory

# Custom home directory
wildgecu --home /path/to/home start # use a custom home instead of ~/.wildgecu
wildgecu --home /path/to/home chat  # all subcommands respect --home

# Daemon lifecycle
wildgecu start                      # start the background daemon
wildgecu stop                       # stop the daemon
wildgecu restart                    # stop + start
wildgecu status                     # show daemon status (pid, uptime, version)
wildgecu health                     # exit 0 if daemon is healthy, 1 otherwise
wildgecu logs                       # show last 50 log lines
wildgecu logs -f                    # follow log output

# Cron jobs
wildgecu cron ls                    # list all scheduled jobs
wildgecu cron add                   # add a new cron job (interactive TUI)
wildgecu cron rm test               # remove a cron job by name

# Skills
wildgecu skill ls                   # list installed skills
wildgecu skill add                  # add a new skill

# System service
wildgecu install                    # install as a system service
wildgecu uninstall                  # remove the system service

# Self-update
wildgecu update --url <binary-url>  # trigger a self-update
```

Build with a version tag:

```
go build -ldflags "-X wildgecu/cmd.Version=1.0.0" -o wildgecu .
```

The daemon executes scheduled LLM prompts. Cron jobs are defined as markdown files with YAML frontmatter in `~/.wildgecu/crons/`. Results are written to `~/.wildgecu/cron-results/`.
```
---
name: daily-summary
cron: "0 9 * * *"
---

Summarize the key events from yesterday and suggest priorities for today.
```

The frontmatter requires `name` and `cron` (standard 5-field cron expression). Everything after the closing `---` is the LLM prompt.
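For illustration, this frontmatter convention can be parsed in a few lines of Go. The sketch below splits on the `---` delimiters by hand and handles only flat `key: value` pairs; it is not WildGecu's actual parser, which would typically use a YAML library:

```go
package main

import (
	"fmt"
	"strings"
)

// parseFrontmatter splits a document of the form ---\nkey: value\n---\nbody
// into a key/value map and the body. Only flat "key: value" lines are
// handled; nested YAML would need a real parser.
func parseFrontmatter(doc string) (map[string]string, string) {
	parts := strings.SplitN(doc, "---", 3)
	if len(parts) < 3 {
		return nil, doc // no frontmatter, whole document is the body
	}
	meta := map[string]string{}
	for _, line := range strings.Split(strings.TrimSpace(parts[1]), "\n") {
		if k, v, ok := strings.Cut(line, ":"); ok {
			meta[strings.TrimSpace(k)] = strings.Trim(strings.TrimSpace(v), `"`)
		}
	}
	return meta, strings.TrimSpace(parts[2])
}

func main() {
	doc := "---\nname: daily-summary\ncron: \"0 9 * * *\"\n---\nSummarize yesterday."
	meta, body := parseFrontmatter(doc)
	fmt.Println(meta["name"], "|", meta["cron"])
	fmt.Println(body)
}
```

The same `---`-delimited frontmatter shape is used for skill files, so a single parser covers both.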
Skills are domain-specific knowledge files that the agent can load on demand. They are stored as markdown files with YAML frontmatter in ~/.wildgecu/skills/.
```
---
name: code-review
description: Guidelines for reviewing Go code
tags: [go, review]
---

When reviewing Go code, focus on...
```

The agent loads skills dynamically via the `load_skill` tool during conversation.
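Lazy loading means only skill names and descriptions need to be indexed up front; the body is read from disk only when `load_skill` fires. A sketch of that idea (the `skillIndex` type and its methods are invented for this example, not WildGecu's internal API):

```go
package main

import "fmt"

// skillIndex models lazy loading: descriptions are indexed eagerly so the
// agent can see what is available, while bodies are produced by deferred
// loaders and cached on first use.
type skillIndex struct {
	descriptions map[string]string
	bodies       map[string]func() string // deferred loaders (e.g. disk reads)
	loaded       map[string]string        // cache of already-loaded bodies
}

func (s *skillIndex) load(name string) (string, error) {
	if body, ok := s.loaded[name]; ok {
		return body, nil // already materialized
	}
	loader, ok := s.bodies[name]
	if !ok {
		return "", fmt.Errorf("unknown skill: %s", name)
	}
	body := loader() // the expensive read happens only now
	s.loaded[name] = body
	return body, nil
}

func main() {
	idx := &skillIndex{
		descriptions: map[string]string{"code-review": "Guidelines for reviewing Go code"},
		bodies: map[string]func() string{
			"code-review": func() string { return "When reviewing Go code, focus on..." },
		},
		loaded: map[string]string{},
	}
	body, err := idx.load("code-review")
	if err != nil {
		panic(err)
	}
	fmt.Println(body)
}
```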
The agent can delegate subtasks to ephemeral subagents via the spawn_agent tool. A subagent is a short-lived child agent that runs in isolation — it receives a prompt, executes a full agent loop (generate → tool calls → generate), and returns a single text result to the parent. No SOUL, no MEMORY, no finalization. It lives within the parent session and is discarded when done.
- Cheaper models for simple work — delegate straightforward tasks (summarization, formatting, lookups) to a faster/cheaper model while keeping the expensive model for complex reasoning.
- Focused context — a subagent gets a clean context window with an optional custom system prompt, so intermediate steps don't clutter the parent's conversation.
- Parallel research — the parent can spawn multiple subagents simultaneously as concurrent tool calls, gathering information from different angles and synthesizing results.
- Tool restriction — give a research subagent read-only tools, or give a coding subagent file-write access while restricting everything else.
The tool is available in both chat and code modes. Parameters:
| Parameter | Required | Description |
|---|---|---|
| `prompt` | Yes | The user message to send to the child agent |
| `system_prompt` | No | Custom system prompt. If omitted, uses a minimal default |
| `model` | No | Provider/model reference (e.g., `gemini/gemini-2.0-flash`). If omitted, inherits the parent's model |
| `tools` | No | List of tool names the child can use. If omitted, inherits all parent tools except `spawn_agent` |
Recursion prevention: subagents cannot spawn further subagents. The spawn_agent tool is excluded from every child agent's tool set.
The agent decides to delegate autonomously — the user doesn't need to manage subagents directly:
```
# Agent spawns a focused researcher with a cheaper model
spawn_agent(
  prompt: "List all exported functions in pkg/provider/tool/registry.go",
  model: "gemini/gemini-2.0-flash",
  tools: ["bash", "read_file", "list_files"]
)

# Agent spawns multiple subagents in parallel for research
spawn_agent(prompt: "Summarize the README.md", model: "gemini/gemini-2.0-flash")
spawn_agent(prompt: "List all TODO comments in the codebase", tools: ["bash"])
```
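The inheritance and recursion rules from the parameter table can be captured in a small helper. A hypothetical sketch (`childTools` is an illustrative name, not WildGecu's actual function) showing tool subsetting plus the unconditional removal of `spawn_agent`:

```go
package main

import "fmt"

// childTools computes a subagent's tool set: take the requested subset
// (or everything if none was requested) and always drop spawn_agent so
// a child can never spawn grandchildren.
func childTools(parent []string, requested []string) []string {
	allowed := map[string]bool{}
	if len(requested) == 0 {
		for _, t := range parent {
			allowed[t] = true // inherit everything by default
		}
	} else {
		parentSet := map[string]bool{}
		for _, t := range parent {
			parentSet[t] = true
		}
		for _, t := range requested {
			if parentSet[t] { // a child can only subset, never extend
				allowed[t] = true
			}
		}
	}
	delete(allowed, "spawn_agent") // recursion prevention
	out := []string{}
	for _, t := range parent { // preserve the parent's ordering
		if allowed[t] {
			out = append(out, t)
		}
	}
	return out
}

func main() {
	parent := []string{"bash", "read_file", "write_file", "spawn_agent"}
	fmt.Println(childTools(parent, nil))                             // inherits all but spawn_agent
	fmt.Println(childTools(parent, []string{"bash", "spawn_agent"})) // subset, still filtered
}
```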
WildGecu uses a unified home directory at ~/.wildgecu/ for all global state. Override it with --home:
```
wildgecu --home /path/to/custom/home start
```

This allows running multiple independent instances, each with its own config, socket, crons, and skills. The flag accepts absolute paths, relative paths, and `~/...` tilde expansion.
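Tilde expansion for a flag like `--home` boils down to a few lines of standard-library Go. A sketch of the idea, assuming Unix-style paths (`expandHome` is an illustrative name, not the actual `x/config` implementation):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// expandHome resolves a --home argument: a leading ~/ is expanded to
// the user's home directory, and relative paths are made absolute.
func expandHome(p string) (string, error) {
	if p == "~" || strings.HasPrefix(p, "~/") {
		home, err := os.UserHomeDir()
		if err != nil {
			return "", err
		}
		p = filepath.Join(home, strings.TrimPrefix(p, "~"))
	}
	return filepath.Abs(p) // relative paths resolve against the CWD
}

func main() {
	for _, p := range []string{"~/.wildgecu", "./local-home", "/srv/gecu"} {
		abs, err := expandHome(p)
		if err != nil {
			fmt.Println("error:", err)
			continue
		}
		fmt.Println(p, "->", abs)
	}
}
```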
| File / Directory | Purpose |
|---|---|
| `wildgecu.yaml` | Configuration (provider, API keys, model) — created on first run |
| `.env` | Optional environment variables loaded at startup |
| `wildgecu.pid` | Daemon PID file |
| `wildgecu.sock` | Daemon Unix domain socket |
| `wildgecu.log` | Daemon log file (JSON) |
| `crons/` | Cron job definitions (markdown + YAML frontmatter) |
| `cron-results/` | Output from executed cron jobs |
| `skills/` | Domain-specific knowledge files |
| File | Purpose |
|---|---|
| `SOUL.md` | Agent identity — created during bootstrap |
| `MEMORY.md` | Persistent context — curated after each session |
| `USER.md` | Optional user preferences — create manually |
Delete SOUL.md and run wildgecu init again to give your agent a new identity.
On first run, WildGecu creates a default ~/.wildgecu/wildgecu.yaml. Here is a full example showing all available options:
```yaml
providers:
  gemini:
    type: gemini
    api_key: env(GEMINI_API_KEY)
    google_search: true   # enable Gemini's Google Search grounding
  openai:
    type: openai
    api_key: env(OPENAI_API_KEY)
  ollama:
    type: ollama          # base_url defaults to http://localhost:11434/v1
  mistral:
    type: mistral         # base_url defaults to https://api.mistral.ai/v1
    api_key: env(MISTRAL_API_KEY)
  regolo:
    type: regolo          # base_url defaults to https://api.regolo.ai/v1
    api_key: env(REGOLO_API_KEY)
  custom:
    type: openai
    api_key: env(CUSTOM_API_KEY)
    base_url: "https://my-provider.example.com/v1"   # any OpenAI-compatible endpoint
models:
  fast: gemini/gemini-2.0-flash
  smart: gemini/gemini-2.5-pro
  local: ollama/llama3
default_model: gemini/gemini-2.5-flash   # or use an alias: "fast"
telegram_token: env(TELEGRAM_BOT_TOKEN)  # optional, for the Telegram bridge
```

Key concepts:

- `providers` — a named map of LLM providers. Each entry requires a `type` field (`gemini`, `openai`, `ollama`, `mistral`, `regolo`). The name you give a provider is how you reference it elsewhere (e.g. `gemini/gemini-2.5-flash` means the provider named `gemini`, model `gemini-2.5-flash`).
- `models` — optional aliases for `provider/model` pairs. Alias names must not contain `/`.
- `default_model` — required. Can be a direct `provider/model` reference or an alias name.
- `telegram_token` — optional. Token for the Telegram bot bridge.
Config values can reference environment variables using the env(VAR_NAME) syntax:
```yaml
api_key: env(GEMINI_API_KEY)       # resolved from the GEMINI_API_KEY env var
base_url: env(CUSTOM_URL)          # works for base_url too
telegram_token: env(TG_TOKEN)      # and for telegram_token
default_model: env(DEFAULT_MODEL)  # and default_model
```

If the referenced variable is not set, WildGecu exits with an error naming the missing variable. This syntax works for `api_key`, `base_url`, `telegram_token`, and `default_model` fields.
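The `env(VAR_NAME)` resolution rule is simple enough to sketch in Go. This illustrative `resolveRef` helper (not WildGecu's real code) treats anything not wrapped in `env(...)` as a literal and fails fast on unset variables, matching the behavior described above:

```go
package main

import (
	"fmt"
	"strings"
)

// resolveRef expands the env(VAR_NAME) syntax against a lookup map,
// standing in for the merged shell environment + .env file. An unset
// variable is an error rather than an empty string.
func resolveRef(value string, env map[string]string) (string, error) {
	if !strings.HasPrefix(value, "env(") || !strings.HasSuffix(value, ")") {
		return value, nil // literal config value, use as-is
	}
	name := value[len("env(") : len(value)-1]
	v, ok := env[name]
	if !ok {
		return "", fmt.Errorf("config references unset variable %s", name)
	}
	return v, nil
}

func main() {
	env := map[string]string{"GEMINI_API_KEY": "sk-example"}
	key, err := resolveRef("env(GEMINI_API_KEY)", env)
	fmt.Println(key, err)
	_, err = resolveRef("env(MISSING)", env)
	fmt.Println(err) // fail fast, naming the unset variable
}
```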
WildGecu automatically loads a .env file from the home directory (~/.wildgecu/.env) at startup. This is a convenient alternative to exporting variables in your shell profile:
```
# ~/.wildgecu/.env
GEMINI_API_KEY=your-gemini-key
OPENAI_API_KEY=your-openai-key
TELEGRAM_BOT_TOKEN=your-bot-token
```

Precedence: environment variables already set in your shell take priority over values in the `.env` file. If the file does not exist, it is silently ignored.
Some provider types have built-in default base URLs, so you don't need to specify base_url for them:
| Type | Default `base_url` |
|---|---|
| `ollama` | `http://localhost:11434/v1` |
| `mistral` | `https://api.mistral.ai/v1` |
| `regolo` | `https://api.regolo.ai/v1` |
You can always override these by setting base_url explicitly. The gemini and openai types use their respective SDK defaults and don't need a base URL.
The config file is loaded from ~/.wildgecu/wildgecu.yaml. You can also override the model at runtime:
```
wildgecu --model openai/gpt-4o   # override with a provider/model reference
wildgecu --model fast            # or use a model alias
```

Project layout:

```
wildgecu.go          # Entry point — cmd.Execute()
│
├── cmd/             # CLI layer (Cobra)
│
├── pkg/             # Core domain packages
│   ├── agent/       # Agent orchestration (Prepare, Finalize, bootstrap, memory, prompts)
│   │   └── tools/   # Tool suites (general, exec, files, skills, subagent)
│   ├── provider/    # LLM provider abstraction
│   │   ├── tool/    # Type-safe tool framework (Tool, Registry, schema generation)
│   │   ├── factory/ # Provider factory
│   │   ├── gemini/  # Google Gemini implementation
│   │   └── openai/  # OpenAI / Ollama implementation
│   ├── session/     # Conversation management (RunTurn, RunTurnStream)
│   ├── chat/        # Chat frontends (tui/, telegram/)
│   ├── cron/        # Cron scheduling and execution
│   ├── skill/       # Skills system (parse, load)
│   └── daemon/      # Background daemon (socket, sessions, watchdog, updater, service)
│
└── x/               # General-purpose utilities (config, home, context, debug)
```
- Single binary — All commands (chat, daemon, cron, skills, service) are subcommands of one `wildgecu` binary.
- `pkg/` and `x/` layout — Core domain packages live under `pkg/`, general-purpose utilities with no domain knowledge live under `x/`.
- Unified home (`~/.wildgecu/`) — Config, PID, socket, logs, crons, and skills all live under one directory, managed by `x/config`. Overridable via `--home` for running multiple isolated instances.
- `x/config` package — Zero-dependency (stdlib only) shared package that all other packages import for path resolution.
- Project-local `.wildgecu/` — Per-project identity files (`SOUL.md`, `MEMORY.md`, `USER.md`) stay in the working directory, separate from global daemon state.
- Home abstraction — File operations are abstracted behind an interface (`FSHome` for disk, `MemHome` for tests), keeping the agent logic testable.
- Parallel tool calling — Independent tool calls within a single agent turn are executed concurrently for lower latency.
- Ephemeral subagents — The `spawn_agent` tool lets the agent delegate subtasks to isolated child agents with optional model override and tool subsetting. Subagents are stateless and cannot spawn further subagents.
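The parallel tool-calling decision can be illustrated with a `sync.WaitGroup`: run each independent call in its own goroutine, but write results into pre-assigned slots so the transcript order stays deterministic. A simplified sketch of the idea, not the actual implementation:

```go
package main

import (
	"fmt"
	"sync"
)

// runParallel executes independent tool calls concurrently and returns
// results in the original call order, keeping the transcript stable
// regardless of which goroutine finishes first.
func runParallel(calls []string, exec func(string) string) []string {
	results := make([]string, len(calls))
	var wg sync.WaitGroup
	for i, call := range calls {
		wg.Add(1)
		go func(i int, call string) {
			defer wg.Done()
			results[i] = exec(call) // each goroutine writes only its own slot
		}(i, call)
	}
	wg.Wait()
	return results
}

func main() {
	calls := []string{"read_file(a.go)", "read_file(b.go)", "bash(go vet)"}
	out := runParallel(calls, func(c string) string { return "ok: " + c })
	for _, r := range out {
		fmt.Println(r)
	}
}
```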
WildGecu ships with two provider implementations that cover multiple services:
| Provider | Type | Package | Streaming | Tool Calling | API Key Required |
|---|---|---|---|---|---|
| Google Gemini | `gemini` | `pkg/provider/gemini` | Yes | Yes | Yes |
| OpenAI | `openai` | `pkg/provider/openai` | Yes | Yes | Yes |
| Ollama | `ollama` | `pkg/provider/openai` (shared) | Yes | Model-dependent | No |
| Mistral | `mistral` | `pkg/provider/openai` (shared) | Yes | Yes | Yes |
| Regolo | `regolo` | `pkg/provider/openai` (shared) | Yes | Yes | Yes |
Ollama, Mistral, and Regolo use the OpenAI-compatible implementation with their respective default base URLs. Any OpenAI-compatible endpoint can be used by setting type: openai with a custom base_url.
- Implement the `provider.Provider` interface:

```go
type Provider interface {
	Generate(ctx context.Context, params *GenerateParams) (*Response, error)
}
```

- For streaming support, also implement `StreamProvider`:

```go
type StreamProvider interface {
	Provider
	GenerateStream(ctx context.Context, params *GenerateParams) (<-chan StreamChunk, <-chan error)
}
```

- Register it in the factory at `pkg/provider/factory/factory.go`.
Apache 2.0 — see LICENSE for details.