vertexcover-io/research

Research projects carried out by AI tools

Each directory in this repo is a separate research project carried out by an LLM tool - usually Claude Code. Every single line of text and code was written by an LLM.

This setup is inspired by Simon Willison's simonw/research repo; see his post "Code research projects with async coding agents like Claude Code and Codex" for details on the workflow.

I try to include prompts and links to transcripts in the PRs that added each report, or in the commits.

Times shown are in UTC.

2 research projects

No description available — auto-summary unavailable.

A survey of approaches for calling a Python library from a TypeScript project, using scenedetect as a representative stress test (CPU-heavy, OpenCV-dependent, large binary inputs). Nine integration patterns are compared — subprocess, FastAPI sidecar, gRPC, job queues, native embedding, Pyodide, serverless, JS-equivalent replacement, and porting — with detailed limitations covering cold-start cost, the GIL, payload size, proxy timeouts, and deployment complexity. The report ends with a decision matrix mapping situations to recommended approaches.

Key takeaways:

  • Long-running subprocess workers eliminate Python import overhead for repeated calls; one-shot subprocess is fine for scripts.
  • A FastAPI sidecar is the typical production answer, but watch out for the GIL blocking async endpoints and proxy timeouts on long jobs.
  • Pyodide is not viable for libraries with C extensions like OpenCV.
  • Always check whether a JS equivalent (e.g. ffmpeg scene filter) gets you 80% of the way before standing up a Python service.

Updating this README

This README uses cogapp to automatically generate project descriptions.
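Cogapp works by executing Python embedded between comment markers and splicing the output back into the file; in a Markdown README the markers typically live inside HTML comments so they stay invisible when rendered. An illustrative fragment, assuming a hypothetical `projects` helper module (the real generator may differ):

```markdown
<!-- [[[cog
import projects  # hypothetical helper module
print(projects.generate_project_list())
]]] -->
...generated project list appears here...
<!-- [[[end]]] -->
```

Running `cog -r -P README.md` rewrites the region between `]]]` and `[[[end]]]` in place (`-r` edits the file, `-P` lets the embedded code use plain `print()`).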

Automatic updates

A GitHub Action automatically runs cog -r -P README.md on every push to main and commits any changes to the README or new _summary.md files.

Manual updates

To update locally:

# Install dependencies
pip install -r requirements.txt

# Run cogapp to regenerate the project list
cog -r -P README.md

The script automatically:

  • Discovers all subdirectories in this folder
  • Gets the first commit date for each folder and sorts by most recent first
  • For each folder, checks if a _summary.md file exists
  • If the summary exists, it uses the cached version
  • If not, it generates a new summary using `llm -m github/gpt-4.1` with a prompt that creates engaging descriptions with bullets and links
  • Creates markdown links to each project folder on GitHub
  • New summaries are saved to _summary.md to avoid regenerating them on every run

To regenerate a specific project's description, delete its _summary.md file and run cog -r -P README.md again.
