
Update coverage check script #362

Merged: jjk-g merged 1 commit into kubernetes-sigs:main from jjk-g:cov on Apr 2, 2026

Conversation

@jjk-g (Collaborator) commented Mar 12, 2026

Add a `--detailed` flag that shows per-file coverage numbers, and make error observability more robust.

Example detailed output

```
$ python3 scripts/check_coverage_regression.py --detailed
ℹ️  Using existing baseline: coverage_main.json (pass --force to regenerate)
ℹ️  Using existing current report: coverage.json (pass --force to regenerate)

--- Detailed Per-File Coverage Comparison ---
File                                                 Baseline    Current       Diff
-----------------------------------------------------------------------------------
inference_perf/analysis/analyze.py                      5.88%      5.88%     +0.00%
inference_perf/apis/base.py                            81.48%     81.48%     +0.00%
inference_perf/apis/chat.py                            36.07%     36.07%     +0.00%
inference_perf/apis/completion.py                      33.93%     33.93%     +0.00%
inference_perf/apis/user_session.py                    43.40%     43.40%     +0.00%
inference_perf/circuit_breaker/base.py                 68.42%     68.42%     +0.00%
inference_perf/circuit_breaker/config.py              100.00%    100.00%     +0.00%
inference_perf/circuit_breaker/simple_breaker.py       39.39%     39.39%     +0.00%
inference_perf/circuit_breaker/triggers/base.py        78.57%     78.57%     +0.00%
inference_perf/circuit_breaker/triggers/config.py     100.00%    100.00%     +0.00%
inference_perf/circuit_breaker/triggers/consecutive.py     47.06%     47.06%     +0.00%
inference_perf/circuit_breaker/triggers/rate_over_window.py     36.67%     36.67%     +0.00%
inference_perf/client/filestorage/base.py              76.92%     76.92%     +0.00%
inference_perf/client/filestorage/gcs.py               35.29%     35.29%     +0.00%
inference_perf/client/filestorage/local.py             52.17%     52.17%     +0.00%
inference_perf/client/filestorage/s3.py                33.33%     33.33%     +0.00%
inference_perf/client/metricsclient/base.py            94.03%     94.03%     +0.00%
inference_perf/client/metricsclient/mock_client.py     63.64%     63.64%     +0.00%
inference_perf/client/metricsclient/prometheus_client/base.py     25.68%     25.68%     +0.00%
inference_perf/client/metricsclient/prometheus_client/google_managed_prometheus_client.py     43.48%     43.48%     +0.00%
inference_perf/client/modelserver/base.py              88.72%     88.72%     +0.00%
inference_perf/client/modelserver/mock_client.py       41.18%     41.18%     +0.00%
inference_perf/client/modelserver/openai_client.py     43.95%     43.95%     +0.00%
inference_perf/client/modelserver/sglang_client.py     73.33%     73.33%     +0.00%
inference_perf/client/modelserver/tgi_client.py        73.33%     73.33%     +0.00%
inference_perf/client/modelserver/vllm_client.py       73.33%     73.33%     +0.00%
inference_perf/client/requestdatacollector/base.py     78.57%     78.57%     +0.00%
inference_perf/client/requestdatacollector/local.py     66.67%     66.67%     +0.00%
inference_perf/client/requestdatacollector/multiprocess.py     42.50%     42.50%     +0.00%
inference_perf/config.py                               93.18%     93.18%     +0.00%
inference_perf/datagen/base.py                         78.43%     78.43%     +0.00%
inference_perf/datagen/cnn_dailymail_datagen.py        24.62%     24.62%     +0.00%
inference_perf/datagen/hf_billsum_datagen.py           23.53%     23.53%     +0.00%
inference_perf/datagen/hf_sharegpt_datagen.py          26.58%     26.58%     +0.00%
inference_perf/datagen/infinity_instruct_datagen.py     19.23%     19.23%     +0.00%
inference_perf/datagen/mock_datagen.py                 44.00%     44.00%     +0.00%
inference_perf/datagen/random_datagen.py               69.57%     69.57%     +0.00%
inference_perf/datagen/shared_prefix_datagen.py        20.41%     20.41%     +0.00%
inference_perf/datagen/synthetic_datagen.py            34.15%     34.15%     +0.00%
inference_perf/loadgen/load_generator.py               19.31%     19.31%     +0.00%
inference_perf/loadgen/load_timer.py                   45.45%     45.45%     +0.00%
inference_perf/logger.py                              100.00%    100.00%     +0.00%
inference_perf/main.py                                 17.22%     17.22%     +0.00%
inference_perf/reportgen/base.py                       38.02%     38.02%     +0.00%
inference_perf/utils/custom_tokenizer.py               50.00%     50.00%     +0.00%
inference_perf/utils/distribution.py                   23.53%     23.53%     +0.00%
inference_perf/utils/report_file.py                    58.33%     58.33%     +0.00%
inference_perf/utils/request_queue.py                  36.67%     36.67%     +0.00%
inference_perf/utils/trace_reader.py                   71.25%     71.25%     +0.00%

--- Coverage Summary ---
Main Branch:    44.72%
Current Branch: 44.72%
✅ PASS: Coverage is maintained or improved.
```
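For readers curious how a comparison like the one above can be built: a minimal sketch, not the actual script, assuming the reports are coverage.py JSON output (a `"files"` mapping with a per-file `"summary"."percent_covered"` value). The function name `per_file_diff` and the inline sample data are illustrative inventions.

```python
# Hypothetical sketch of a per-file coverage comparison over two parsed
# coverage.py JSON reports (baseline vs. current branch).

def per_file_diff(baseline: dict, current: dict) -> list[tuple[str, float, float, float]]:
    """Return (path, baseline %, current %, diff) for every file in either report.

    A file missing from one report is treated as 0% covered there, so
    deleted or newly added files still show up in the comparison.
    """
    paths = sorted(set(baseline["files"]) | set(current["files"]))
    rows = []
    for path in paths:
        base_pct = baseline["files"].get(path, {}).get("summary", {}).get("percent_covered", 0.0)
        cur_pct = current["files"].get(path, {}).get("summary", {}).get("percent_covered", 0.0)
        rows.append((path, base_pct, cur_pct, cur_pct - base_pct))
    return rows

if __name__ == "__main__":
    # Inline sample data shaped like coverage.py JSON output.
    baseline = {"files": {"a.py": {"summary": {"percent_covered": 80.0}}}}
    current = {"files": {"a.py": {"summary": {"percent_covered": 85.0}}}}
    for path, b, c, d in per_file_diff(baseline, current):
        print(f"{path:<40}{b:>10.2f}%{c:>10.2f}%{d:>+10.2f}%")
```

In a real run the two dicts would come from `json.load()` on `coverage_main.json` and `coverage.json`; the diff column then makes a regression immediately visible per file rather than only in the totals.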

Add detailed flag that shows per-file coverage numbers.
More robust error observability.
@k8s-ci-robot added labels on Mar 12, 2026: approved (indicates a PR has been approved by an approver from all required OWNERS files), cncf-cla: yes (indicates the PR's author has signed the CNCF CLA), size/M (denotes a PR that changes 30-99 lines, ignoring generated files).
@jjk-g (Collaborator, Author) commented Mar 12, 2026

Sorted list

| File | Coverage | Status |
| --- | --- | --- |
| 🟠 inference_perf/analysis/analyze.py | 5.88% | Low |
| 🟠 inference_perf/main.py | 17.22% | Low |
| 🟠 inference_perf/datagen/infinity_instruct_datagen.py | 19.23% | Low |
| 🟠 inference_perf/loadgen/load_generator.py | 19.31% | Low |
| 🟠 inference_perf/datagen/shared_prefix_datagen.py | 20.41% | Low |
| 🟠 inference_perf/datagen/hf_billsum_datagen.py | 23.53% | Low |
| 🟠 inference_perf/utils/distribution.py | 23.53% | Low |
| 🟠 inference_perf/datagen/cnn_dailymail_datagen.py | 24.62% | Low |
| 🟠 inference_perf/client/metricsclient/prometheus_client/base.py | 25.68% | Low |
| 🟠 inference_perf/datagen/hf_sharegpt_datagen.py | 26.58% | Low |
| 🟠 inference_perf/client/filestorage/s3.py | 33.33% | Low |
| 🟠 inference_perf/apis/completion.py | 33.93% | Low |
| 🟠 inference_perf/datagen/synthetic_datagen.py | 34.15% | Low |
| 🟠 inference_perf/client/filestorage/gcs.py | 35.29% | Low |
| 🟠 inference_perf/apis/chat.py | 36.07% | Low |
| 🟠 inference_perf/circuit_breaker/triggers/rate_over_window.py | 36.67% | Low |
| 🟠 inference_perf/utils/request_queue.py | 36.67% | Low |
| 🟠 inference_perf/reportgen/base.py | 38.02% | Low |
| 🟠 inference_perf/circuit_breaker/simple_breaker.py | 39.39% | Low |
| 🟠 inference_perf/client/modelserver/mock_client.py | 41.18% | Low |
| 🟠 inference_perf/client/requestdatacollector/multiprocess.py | 42.50% | Low |
| 🟠 inference_perf/apis/user_session.py | 43.40% | Low |
| 🟠 inference_perf/client/metricsclient/prometheus_client/google_managed_prometheus_client.py | 43.48% | Low |
| 🟠 inference_perf/client/modelserver/openai_client.py | 43.95% | Low |
| 🟠 inference_perf/datagen/mock_datagen.py | 44.00% | Low |
| 🟠 inference_perf/loadgen/load_timer.py | 45.45% | Low |
| 🟠 inference_perf/circuit_breaker/triggers/consecutive.py | 47.06% | Low |
| 🟠 inference_perf/utils/custom_tokenizer.py | 50.00% | Low |
| 🟡 inference_perf/client/filestorage/local.py | 52.17% | Moderate |
| 🟡 inference_perf/utils/report_file.py | 58.33% | Moderate |
| 🟡 inference_perf/client/metricsclient/mock_client.py | 63.64% | Moderate |
| 🟡 inference_perf/client/requestdatacollector/local.py | 66.67% | Moderate |
| 🟡 inference_perf/circuit_breaker/base.py | 68.42% | Moderate |
| 🟡 inference_perf/datagen/random_datagen.py | 69.57% | Moderate |
| 🟡 inference_perf/utils/trace_reader.py | 71.25% | Moderate |
| 🟡 inference_perf/client/modelserver/sglang_client.py | 73.33% | Moderate |
| 🟡 inference_perf/client/modelserver/tgi_client.py | 73.33% | Moderate |
| 🟡 inference_perf/client/modelserver/vllm_client.py | 73.33% | Moderate |
| 🟡 inference_perf/client/filestorage/base.py | 76.92% | Moderate |
| 🟡 inference_perf/datagen/base.py | 78.43% | Moderate |
| 🟡 inference_perf/circuit_breaker/triggers/base.py | 78.57% | Moderate |
| 🟡 inference_perf/client/requestdatacollector/base.py | 78.57% | Moderate |
| 🟢 inference_perf/apis/base.py | 81.48% | High |
| 🟢 inference_perf/client/modelserver/base.py | 88.72% | High |
| 🟢 inference_perf/config.py | 93.18% | High |
| 🟢 inference_perf/client/metricsclient/base.py | 94.03% | High |
| 🟢 inference_perf/circuit_breaker/config.py | 100.00% | High |
| 🟢 inference_perf/circuit_breaker/triggers/config.py | 100.00% | High |
| 🟢 inference_perf/logger.py | 100.00% | High |
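A sorted view like this is easy to derive from the per-file percentages. The sketch below is hypothetical, not the script's code; the 50% and 80% bucket thresholds are guesses that happen to be consistent with the boundary rows in the table (50.00% is Low, 52.17% is Moderate, 78.57% is Moderate, 81.48% is High).

```python
# Hypothetical sketch: sort files by ascending coverage and attach a
# Low/Moderate/High status bucket. Thresholds are assumed, not from the script.

def bucket(pct: float) -> str:
    """Classify a coverage percentage into a status bucket."""
    if pct <= 50.0:
        return "Low"
    if pct >= 80.0:
        return "High"
    return "Moderate"

def sorted_report(coverage: dict[str, float]) -> list[tuple[str, float, str]]:
    """Return (path, percent, status) rows sorted by ascending coverage."""
    return [(path, pct, bucket(pct))
            for path, pct in sorted(coverage.items(), key=lambda kv: kv[1])]

if __name__ == "__main__":
    sample = {"inference_perf/logger.py": 100.0, "inference_perf/main.py": 17.22}
    for path, pct, status in sorted_report(sample):
        print(f"{status:<9} {path}  {pct:.2f}%")
```

Sorting ascending puts the least-tested files first, which is what makes this view useful for deciding where to add tests next.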

@achandrasekar (Contributor) commented:
Looks good overall. Is this script already running as a part of the code coverage pre-commit action? Or will it be added in a separate change?

@Bslabe123 (Contributor) commented:
/lgtm

@k8s-ci-robot added the lgtm label ("Looks good to me", indicates that a PR is ready to be merged) on Mar 12, 2026.
@achandrasekar (Contributor) commented:
/approve

@k8s-ci-robot commented:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: achandrasekar, jjk-g

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:
  • OWNERS [achandrasekar,jjk-g]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@achandrasekar (Contributor) commented:
Do you want to merge this? I see some overlap with the other PR #367

@jjk-g jjk-g merged commit 59860cc into kubernetes-sigs:main Apr 2, 2026
5 of 6 checks passed
