fix(stats): track ImageGen / VideoGen costs in usage insights #13
Merged
Conversation
Before this, only chat LLM calls flowed through `recordUsage()`. The ImageGen and VideoGen tools wrote to the content library but never reported to the global stats file, so:

- `franklin insights` undercounted total 30-day spend by the exact amount spent on image + video generation.
- The "Top models" breakdown never surfaced `openai/gpt-image-1`, `bytedance/seedance-2.0`, `xai/grok-imagine-video`, etc. even when they were the dominant cost in a session.
- Monthly projections in insights were similarly low.

On successful generation each tool now fires a fire-and-forget `recordUsage(model, 0, 0, costUsd, 0)`. Input/output tokens stay at 0 because image/video models don't bill by token; the cost is pulled from the live gateway catalog via `findModel` + `estimateCostUsd`, the same path the AskUser cost preview uses. When the catalog lookup fails the tools fall back to hardcoded estimates so stats still get an entry (under-approximate rather than missing).

The stats write is intentionally fire-and-forget and error-swallowing: a `~/.blockrun/stats.json` write failure must not turn a paid generation into a user-visible error.

Scope is deliberately small — only the two tools, +33 lines, zero schema or behavior change to the existing tracker / insights engine. Any record already written by chat flows continues to work unchanged.
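The catalog-lookup-with-fallback path described above can be sketched as follows. `findModel` and `estimateCostUsd` are the helper names from this PR, but their signatures, the `CatalogModel` type, the per-image pricing field, and the fallback constant are all illustrative assumptions, not the real gateway API:

```typescript
// Illustrative sketch only: findModel / estimateCostUsd are the names used in
// this PR, but the real signatures live in the gateway catalog module.
// CatalogModel, perImageUsd and FALLBACK_IMAGE_COST_USD are assumptions.

interface CatalogModel {
  id: string;
  perImageUsd: number; // hypothetical flat per-image price
}

function findModel(catalog: CatalogModel[], id: string): CatalogModel | undefined {
  return catalog.find((m) => m.id === id);
}

function estimateCostUsd(model: CatalogModel, imageCount: number): number {
  return model.perImageUsd * imageCount;
}

// Fallback when the catalog lookup fails: record an under-approximation
// rather than skipping the stats entry entirely.
const FALLBACK_IMAGE_COST_USD = 0.01; // illustrative constant

function imageCostUsd(
  catalog: CatalogModel[],
  modelId: string,
  imageCount: number,
): number {
  const model = findModel(catalog, modelId);
  return model
    ? estimateCostUsd(model, imageCount)
    : FALLBACK_IMAGE_COST_USD * imageCount;
}
```

The key property is that every paid generation produces *some* cost figure for the stats file, even when the catalog is unreachable.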
KillerQueen-Z added a commit that referenced this pull request on Apr 28, 2026
Summary
Before this, only chat LLM calls flowed through `recordUsage()`. The `ImageGen` and `VideoGen` tools wrote to the content library but never reported to the global stats file, so:

- `franklin insights` undercounted total 30-day spend by the exact amount spent on image + video generation.
- The "Top models" breakdown never surfaced `openai/gpt-image-1`, `bytedance/seedance-2.0`, `xai/grok-imagine-video`, etc. even when they were the dominant cost in a session.
Changes

On successful generation each tool now fires a fire-and-forget `recordUsage(model, 0, 0, costUsd, 0)`:

- The cost is pulled from the live gateway catalog via `findModel` + `estimateCostUsd`, the same path the AskUser cost preview uses, so the recorded amount matches the amount the user was quoted.
- When the catalog lookup fails the tools fall back to hardcoded estimates ($0.05/s) so stats still get an entry — under-approximate is better than missing.
Design notes

- The stats write is intentionally fire-and-forget and error-swallowing: a `~/.blockrun/stats.json` write failure must not turn a paid generation into a user-visible error, and stats accuracy matters less than not losing the output.
- The `recordUsage` signature handles `inputTokens=0` + `outputTokens=0` + an explicit `costUsd` fine; nothing in the stats tracker or the insights engine needs to learn about media models specifically.
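A minimal sketch of that fire-and-forget write, assuming only the positional `recordUsage(model, 0, 0, costUsd, 0)` shape quoted above; the wrapper name and the stubbed plumbing are hypothetical:

```typescript
// Sketch of an error-swallowing stats write. The real recordUsage persists to
// ~/.blockrun/stats.json; here only the swallowing behavior matters.

type RecordUsageFn = (
  model: string,
  inputTokens: number,
  outputTokens: number,
  costUsd: number,
  extra: number, // fifth positional argument from the PR's call; meaning not shown here
) => Promise<void>;

// Fire-and-forget: kick off the write and swallow any failure, so a stats
// error can never turn a paid generation into a user-visible error.
function reportMediaUsage(
  recordUsage: RecordUsageFn,
  model: string,
  costUsd: number,
): void {
  void recordUsage(model, 0, 0, costUsd, 0).catch(() => {
    // Intentionally ignored: stats accuracy matters less than not losing output.
  });
}
```

Note the caller never awaits the promise; the `.catch` is attached immediately so a rejected write cannot become an unhandled rejection.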
Test plan

- `franklin insights` → note baseline `Total Cost` and `Top Models` list.
- `franklin insights` again:
  - `Total Cost` should increase by ~$0.0158 (0.015 + 5% gateway margin).
  - `Top Models` should contain a new row for `zai/cogview-4` with 1 request.

Verified locally on Base chain. The VS Code extension's Insights panel (which reads the same `stats.json`) also immediately reflected the new entries.
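As a quick sanity check on the test plan's expected delta, the ~$0.0158 figure is just the quoted base price plus the 5% gateway margin:

```typescript
// Expected Total Cost delta: base price plus 5% gateway margin.
const basePriceUsd = 0.015;  // zai/cogview-4 price from the test plan above
const gatewayMargin = 0.05;  // 5% margin
const expectedDeltaUsd = basePriceUsd * (1 + gatewayMargin); // 0.01575, i.e. ~$0.0158
```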
Related

`stats.json` is consumed by the VS Code extension's Insights panel (via `generateInsights`), so this fix lands media costs in both the CLI and the extension with a single change.
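For intuition on why no insights-engine change is needed: a top-models aggregation treats a zero-token media record exactly like a token-billed chat record. A sketch under an assumed record shape (the real `stats.json` schema is not shown in this PR):

```typescript
// Hypothetical stats.json record shape; field names are assumptions.
interface UsageRecord {
  model: string;
  inputTokens: number;
  outputTokens: number;
  costUsd: number;
}

interface ModelRow {
  model: string;
  requests: number;
  costUsd: number;
}

// Top-models aggregation: zero-token media records contribute cost and a
// request count exactly like token-billed chat records.
function topModels(records: UsageRecord[]): ModelRow[] {
  const byModel = new Map<string, ModelRow>();
  for (const r of records) {
    const row = byModel.get(r.model) ?? { model: r.model, requests: 0, costUsd: 0 };
    row.requests += 1;
    row.costUsd += r.costUsd;
    byModel.set(r.model, row);
  }
  return [...byModel.values()].sort((a, b) => b.costUsd - a.costUsd);
}
```

With this shape, an image model that dominated a session's spend surfaces at the top of the list with no media-specific branch anywhere in the aggregation.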