This guide explains how to apply code-mint to a target repository in a way that stays understandable over a long, multi-session onboarding effort.
The key idea: track progress by proven outcomes, not by how many procedural steps have been completed.
- Copy code-mint into the target repository.
- Run onboarding in assessment-first mode.
- Record progress in docs/onboarding-checklist.md.
- Prove one outcome at a time with explicit evidence.
- Split the work into small PRs instead of one giant transformation.
Copy the onboarding bundle into the target repo by following the Quick Start in the root README.md: set TARGET_REPO / TARGET_SCOPE, git clone this library, cp the skills and core docs, add .agents/code-mint-status.json, and gitignore the generated reports. If the target repository already has .agents/, AGENTS.md, rules, or customized skills, merge deliberately rather than overwriting them.
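A minimal sketch of those steps follows; the Quick Start in the root README.md is authoritative, and the exact file list and destinations may differ:

```bash
# Sketch only; follow the root README.md Quick Start for the authoritative steps and paths.
export TARGET_REPO=/path/to/target-repo    # your repository checkout
export TARGET_SCOPE="$TARGET_REPO"         # or a subdirectory when onboarding a single package
git clone https://github.com/patterninc/code-mint.git /tmp/code-mint
mkdir -p "$TARGET_SCOPE/.agents" "$TARGET_SCOPE/docs"
cp -R /tmp/code-mint/.agents/skills "$TARGET_SCOPE/.agents/"              # skills
cp /tmp/code-mint/docs/outcomes.md "$TARGET_SCOPE/docs/"                  # core docs; the README
cp /tmp/code-mint/docs/onboarding-checklist.md "$TARGET_SCOPE/docs/"      # lists the full set
cp /tmp/code-mint/.agents/code-mint-status.json "$TARGET_SCOPE/.agents/"
echo ".agents/reports/" >> "$TARGET_SCOPE/.gitignore"                     # keep generated reports untracked
```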
Before asking the user to do much setup work, the onboarding flow should make the six north-star outcomes and their proof criteria obvious. Full definitions are in docs/outcomes.md; docs/onboarding-checklist.md is the system of record for status and evidence.
Open the folder that matches your onboarding scope when you can. If you work from the git repository root instead, state the scope in the prompt so paths for docs/, .agents/, and AGENTS.md stay unambiguous. Example:
Use the meta--onboarding skill to assess this repository for AI-first development, keep docs/onboarding-checklist.md updated as the system of record, summarize the findings, and wait for approval before making changes.
When onboarding only a package or subdirectory, add a line such as: Onboarding scope: [path relative to repo root]. Treat paths as relative to that directory. Also read the git repository root README.md and any README.md files along the path from the repo root to that scope for repo-wide setup, deploy, CI, or environment notes that may not exist under the scope.
Start with a read-only baseline:
- Inspect the repository first.
- Ask only the minimum scoping questions that the code cannot answer.
- Run all applicable Phase 1 auditors in parallel.
- Summarize what is working, blocked, risky, and next to prove.
- Record the result in docs/onboarding-checklist.md.
Typical baseline artifacts:
- .agents/reports/legibility--auditor-audit.md
- .agents/reports/autonomy--test-readiness-auditor-audit.md
- .agents/reports/autonomy--env-auditor-audit.md (when applicable)
- .agents/reports/autonomy--runtime-auditor-audit.md (when applicable)
- .agents/reports/autonomy--sre-auditor-audit.md (when applicable)
- .agents/reports/onboarding-summary.md
Once the baseline exists, make the codebase legible:
- Create or improve the root AGENTS.md.
- Add subdirectory AGENTS.md files for high-value modules.
- Capture UX intent, module boundaries, and gotchas.
- Prove the outcome by having the agent explain where work should happen for a sample task and recording that grounded answer.
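As a hedged illustration of what a subdirectory file can capture, here is a minimal skeleton; the module path and headings are hypothetical and should be tailored to the repository:

```bash
# Hypothetical skeleton for a module-level AGENTS.md; path and headings are illustrative only.
mkdir -p src/billing    # assumption: src/billing is a high-value module in the target repo
cat > src/billing/AGENTS.md <<'EOF'
# billing module

## Purpose and UX intent
What user-visible behavior this module owns and why it exists.

## Boundaries
What belongs here, what belongs elsewhere, and which modules it may call.

## Gotchas
Non-obvious invariants, ordering constraints, and known sharp edges.
EOF
```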
Next, make verification practical:
- Identify the smallest relevant test command for a real module or behavior.
- Improve test targeting, isolation, or utilities as needed.
- Record the exact command, covered scope, and pass/fail signal in the checklist.
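A hedged example of what the recorded evidence can look like, assuming a pytest-based project; the module, test path, and command are hypothetical:

```bash
# Hypothetical "smallest relevant test command"; substitute your stack's equivalent.
pytest tests/billing/test_invoice_rounding.py -q   # covered scope: invoice rounding only
# Pass/fail signal: exit code 0 and a final "N passed" line; record command, scope, and signal in the checklist.
```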
Then make the runtime meaningfully testable:
- Choose the safest meaningful local or staging-like target.
- Document dependencies, startup order, env assumptions, and stop conditions.
- Define one trusted, non-destructive smoke path.
- Mark the outcome Proven only when the evidence includes a concrete success signal.
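A hedged sketch of one such smoke path, assuming an HTTP service started with docker compose and exposing a health endpoint; the service name, port, and path are hypothetical:

```bash
# Hypothetical non-destructive smoke path; service name, port, and endpoint are illustrative.
set -euo pipefail
docker compose up -d api                 # startup: document dependencies and order alongside this
curl -fsS http://localhost:8080/health   # concrete success signal: HTTP 200 from the health check
docker compose down                      # stop condition: tear the stack back down
```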
Add a real debugging proof:
- Start from a real reported issue or representative failure.
- Turn it into a deterministic failing test or repro recipe another person or agent can rerun.
- Capture that failing case as the evidence artifact.
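For illustration, a hedged sketch of such an artifact, assuming a pytest-based project; the reported issue, fixture, and test id are invented for the example:

```bash
# Hypothetical repro recipe that another person or agent can rerun verbatim.
# Reported issue: CSV export drops the last row when the input lacks a trailing newline.
printf 'id,amount\n1,10.00' > /tmp/repro-no-trailing-newline.csv      # deterministic fixture
pytest tests/export/test_csv_export.py::test_last_row_preserved -q   # fails today, which is the evidence
```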
If operational tooling exists, prove the agent can inspect it:
- Confirm CLI and monitoring access.
- Gather logs, metrics, traces, or CI evidence.
- Produce a ranked hypothesis grounded in that evidence and record the next actions.
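A hedged sketch of that evidence-gathering pass, assuming GitHub Actions and a Kubernetes deployment; the commands and resource names are illustrative and should match whatever tooling the repository actually uses:

```bash
# Hypothetical evidence-gathering pass; adapt to the operational tooling that actually exists.
gh run list --limit 5                              # recent CI runs and their conclusions
gh run view <run-id> --log-failed                  # failing job logs from the newest red run
kubectl logs deploy/api --since=1h | tail -n 200   # recent runtime logs from the affected service
# Fold the findings into a ranked hypothesis and next actions in .agents/reports/autonomy--sre-auditor-audit.md.
```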
If no operational tooling exists, mark the outcome N/A and record why.
During onboarding, prefer this loop:
- Inspect first.
- Draft what the codebase already reveals.
- Ask the user only for missing intent, operational context, or hidden assumptions.
- Confirm small drafts before writing durable docs.
- Update the checklist after every meaningful proof.
This keeps the process collaborative without turning it into a long interview.
Use small, reviewable PRs:
- Phase 1: baseline reports and checklist initialization
- Phase 2: AGENTS.md and repo-legibility work
- Phase 3: self-test and smoke-path improvements
- Phase 4: reproduction and SRE investigation workflows
- Phase 5: verification refresh and optional activation of ongoing skills
These PR phases align with the playbook phases in the meta--onboarding skill (assessment → navigation → self-test/smoke → bug repro/SRE → verify/activate). Use the skill for step-by-step operator detail; use this section when splitting work across pull requests.
These elements should be tailored to the target repository:
- root and subdirectory AGENTS.md files
- project-specific [CUSTOMIZE] blocks inside skills
- runtime and smoke-path details
- test commands and priority modules
- infrastructure identifiers, log groups, profiles, and workflow names
These elements are usually best kept stable:
- the outcome-first onboarding structure
- the auditor/creator pairing model
- the reproduce-before-fix discipline
- the progressive disclosure approach for documentation
To compare local customized skills against the latest code-mint version:
cd "$TARGET_REPO"
git clone https://github.com/patterninc/code-mint.git .code-mint-update
diff -ru "$TARGET_SCOPE/.agents/skills" .code-mint-update/.agents/skillsIf you also customized copied onboarding docs under docs/ (for example framework.md or onboarding-checklist.md), compare those paths the same way, for example diff -ru "$TARGET_SCOPE/docs" .code-mint-update/docs, or selectively re-copy files from .code-mint-update/docs/ after reviewing the diff.
If you forked code-mint to your own organization, replace the clone URL above with your fork's URL.
Preserve local [CUSTOMIZE] values when adopting upstream changes.
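One hedged way to finish the update pass; the file chosen below is only an example of a file with no local edits:

```bash
# Example only: adopt an upstream file that carries no local customizations, then remove the working copy.
cp .code-mint-update/docs/framework.md "$TARGET_SCOPE/docs/framework.md"
rm -rf .code-mint-update
```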
Designate a Rule Steward or equivalent owner to review changes to:
- .agents/
- .agents/code-mint-status.json
- AGENTS.md
- SKILL.md
- onboarding docs and checklist templates
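If the repository is hosted on GitHub, a CODEOWNERS entry is one way to route those reviews automatically; the team handle below is a placeholder:

```bash
# Optional sketch: route reviews of onboarding assets to the Rule Steward via GitHub CODEOWNERS.
mkdir -p .github
cat >> .github/CODEOWNERS <<'EOF'
/.agents/                     @your-org/rule-stewards
AGENTS.md                     @your-org/rule-stewards
SKILL.md                      @your-org/rule-stewards
docs/onboarding-checklist.md  @your-org/rule-stewards
EOF
```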
Measure success by outcomes, not just activity:
- Can the agent baseline the current state with explicit evidence?
- Can it explain where work should happen for a representative task?
- Can it run the smallest relevant automated check and trust the result?
- Can it execute a smoke path with a concrete success signal?
- Can it turn a reported issue into a deterministic repro?
- Can it gather operational evidence into a ranked hypothesis?