Please fill this in before submitting. This helps our reviewers identify your submission.
| Field | Your answer |
|---|---|
| Name | |
| GitHub | |
| Date submitted | |
Build a Task Tracker with AI Features — a full-stack application for managing projects and tasks, enhanced with LLM-powered capabilities. The data model, test suite, mock LLM service, and infrastructure are all provided. Your job is to implement the API endpoints, AI integration, business logic, and a functional UI.
Time expectation: Please submit within 24 hours. Focus on clean, working code over polish.
- Database schema (Drizzle ORM) with migrations and seed data
- Docker Compose with PostgreSQL + a mock LLM service — no external APIs or keys needed
- Mock LLM service — an OpenAI-compatible chat completions API that returns deterministic responses (see `mock-llm/server.ts` for behavior)
- Pre-written test suite (54 tests) — your implementation must pass all of them
- API route scaffolds with detailed JSDoc describing expected behavior and Drizzle hints
- LLM client scaffold (`lib/llm.ts`) — implement the wrapper around the mock LLM
- TypeScript types including valid status transitions
- Page scaffolds for `app/page.tsx` (project list) and `app/projects/[id]/page.tsx` (project detail) with TODO comments
- Tailwind CSS + shadcn/ui configured for styling
```
                            +---------------+
                            |   Mock LLM    |
                            |    Service    |
                            |    :11434     |
                            +-------^-------+
                                    |
                               AI Endpoints
                                    |
+---------------+           +-------+-------+
|  PostgreSQL   |<----------|    Next.js    |
|   (Drizzle)   |           |      App      |
|     :5433     |           |     :3002     |
+---------------+           +---------------+
```
Implement the route handlers in `app/api/`:
Projects
- `GET /api/projects` — List projects with optional status filter, include task counts
- `POST /api/projects` — Create project (unique name required)
- `GET /api/projects/:id` — Get project with task counts by status
- `PATCH /api/projects/:id` — Update project fields
- `DELETE /api/projects/:id` — Delete project (blocked if tasks are `in_progress` or `in_review`)
Tasks
- `GET /api/tasks` — List tasks with filters (status, priority, assignee, projectId) and pagination
- `POST /api/tasks` — Create task (must reference valid project)
- `GET /api/tasks/:id` — Get task with parent project info
- `PATCH /api/tasks/:id` — Update task (status changes must follow valid transitions)
- `DELETE /api/tasks/:id` — Delete task
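The pagination on `GET /api/tasks` can be handled with a small query-param helper. This is a hypothetical sketch — the parameter names (`page`, `limit`) and the defaults are assumptions, so check the JSDoc in the route scaffold for the values the tests actually expect:

```typescript
// Hypothetical pagination helper for GET /api/tasks: clamps page/limit
// query params to sane values and derives the SQL offset.
// Defaults (page 1, limit 20, max 100) are illustrative assumptions.
function parsePagination(searchParams: URLSearchParams) {
  const page = Math.max(1, Number(searchParams.get("page")) || 1);
  const limit = Math.min(100, Math.max(1, Number(searchParams.get("limit")) || 20));
  return { page, limit, offset: (page - 1) * limit };
}
```

The resulting `limit` and `offset` map directly onto Drizzle's `.limit()` and `.offset()` query builders.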
Business Logic:
- Task status transitions are enforced. See `lib/types.ts` for the transition map.
- Cannot delete a project with tasks in `in_progress` or `in_review` status
- Project names must be unique
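The transition rule can be enforced with a simple lookup guard. The sketch below assumes a plausible shape for the map — the actual statuses and allowed transitions live in `lib/types.ts` and are authoritative:

```typescript
// Hypothetical sketch of a status transition map and guard. The real map
// in lib/types.ts defines the actual statuses and allowed moves.
type TaskStatus = "todo" | "in_progress" | "in_review" | "done";

const VALID_TRANSITIONS: Record<TaskStatus, TaskStatus[]> = {
  todo: ["in_progress"],
  in_progress: ["in_review", "todo"],
  in_review: ["done", "in_progress"],
  done: [],
};

// Returns true when moving from `from` to `to` is allowed.
// A no-op update (same status) is treated as valid here.
function canTransition(from: TaskStatus, to: TaskStatus): boolean {
  return from === to || VALID_TRANSITIONS[from].includes(to);
}
```

A `PATCH /api/tasks/:id` handler would call this guard before writing and reject invalid moves with an error response.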
Key files:
- `lib/schema.ts` — Drizzle schema (tables, enums, relations, inferred types)
- `lib/db.ts` — Database client (drizzle + postgres.js)
- `lib/types.ts` — Status transition map and API types
Implement the LLM client and AI endpoints:
LLM Client (`lib/llm.ts`)
- Implement `chatCompletion()` to call the mock LLM's OpenAI-compatible API
- Handle errors: service unreachable, invalid responses, timeouts
- The mock LLM runs at `$LLM_BASE_URL/v1/chat/completions`
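One possible shape for the wrapper, assuming the standard OpenAI-compatible response body (`{ choices: [{ message: { content } }] }`). The error messages and the 10-second timeout are illustrative choices, not requirements from the scaffold:

```typescript
// Minimal sketch of a lib/llm.ts wrapper. Response shape is assumed to be
// OpenAI-compatible; timeout value and error wording are illustrative.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Pure helper: pull the assistant text out of a parsed response body,
// throwing on anything that doesn't match the expected shape.
function extractContent(body: unknown): string {
  const content = (body as any)?.choices?.[0]?.message?.content;
  if (typeof content !== "string") throw new Error("Invalid LLM response shape");
  return content;
}

async function chatCompletion(messages: ChatMessage[]): Promise<string> {
  const baseUrl = process.env.LLM_BASE_URL ?? "http://localhost:11434";
  let res: Response;
  try {
    res = await fetch(`${baseUrl}/v1/chat/completions`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ messages }),
      signal: AbortSignal.timeout(10_000), // aborts slow requests
    });
  } catch (err) {
    throw new Error(`LLM service unreachable: ${(err as Error).message}`);
  }
  if (!res.ok) throw new Error(`LLM service returned ${res.status}`);
  return extractContent(await res.json());
}
```

Separating `extractContent` from the network call keeps the invalid-response branch unit-testable without a running service.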
AI Endpoints
- `POST /api/ai/categorize` — Auto-categorize a task (bug, feature, improvement, documentation) and update its labels
- `POST /api/ai/summarize` — Generate a project status summary from its tasks
- `POST /api/ai/suggest-priority` — Suggest priority level for a task description
The mock LLM returns deterministic responses based on keywords in the system prompt and user content. Read `mock-llm/server.ts` to understand the response patterns — your system prompts must include specific keywords to trigger the correct response type.
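As a sketch of what that implies for the categorize endpoint: the message builder below is hypothetical, and the assumption that "categorize" is the trigger word comes only from the curl example later in this README — verify the exact keywords against `mock-llm/server.ts` before relying on them:

```typescript
// Hypothetical message builder for POST /api/ai/categorize. The word
// "categorize" in the system prompt is an assumed trigger keyword;
// confirm the real patterns in mock-llm/server.ts.
type ChatMessage = { role: "system" | "user"; content: string };

function buildCategorizeMessages(title: string, description: string): ChatMessage[] {
  return [
    {
      role: "system",
      content:
        "Categorize the following task as one of: bug, feature, improvement, documentation.",
    },
    { role: "user", content: `${title}\n\n${description}` },
  ];
}
```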
Build a functional UI using Next.js App Router patterns:
App Router Requirements:
- File-system routing: Use `app/page.tsx` for the projects list and `app/projects/[id]/page.tsx` for project detail. Scaffolds for both pages are provided.
- Server Components: The default — use them for data fetching and layout. Only add `'use client'` to components that need interactivity (forms, dialogs, click handlers).
- `Link` navigation: Use `next/link` for client-side navigation between pages (e.g., project cards link to `/projects/[id]`).
- Async params: In Next.js 16, dynamic route params are async: `const { id } = await params`.
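The async-params pattern can be isolated in a small function. Everything here except the `await params` step is illustrative — the real page component also fetches data and renders JSX:

```typescript
// Sketch of the Next.js 16 async-params pattern used by
// app/projects/[id]/page.tsx. Only the await-params step is the point;
// the function name and props type are illustrative.
type ProjectPageProps = { params: Promise<{ id: string }> };

async function resolveProjectId({ params }: ProjectPageProps): Promise<string> {
  const { id } = await params; // params is a Promise; await before reading fields
  return id;
}
```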
UI features to include:
- Everything from the core task tracker (projects, tasks, CRUD, filters)
- A way to trigger AI categorization on tasks
- Display AI-suggested priorities when creating tasks
- Show project summaries powered by the LLM
Complete the questions in `SOLUTION_DESIGN.md`. These focus on production AI architecture, security, and system design at scale.
```bash
# Install dependencies
bun install

# Copy environment file
cp .env.example .env

# Start PostgreSQL and Mock LLM
docker compose up -d

# Wait for services to be healthy, then run migrations and seed
bun run db:migrate
bun run db:seed

# Start the dev server
bun run dev
```

The app is available at http://localhost:3002.
```bash
# Health check
curl http://localhost:11434/health

# Test a categorization request
curl -X POST http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "system", "content": "Categorize the following task."},
      {"role": "user", "content": "Fix the login bug on mobile"}
    ]
  }'
```

```bash
# Run the full test suite (requires db + mock-llm running)
bun run test

# Run linting and type checks
bun run lint
bun run typecheck

# Run everything (typecheck + lint + tests)
bun run check
```

```bash
# Reset database (drop and re-migrate)
bun run db:reset

# Then re-seed
bun run db:seed

# Generate a new migration after schema changes
bun run db:generate
```

- Click "Use this template" on GitHub to create a private copy of this repo
- Fill in the Candidate Info table at the top of this README
- Implement all API routes and the LLM client (replace the `TODO` stubs)
- Build the dashboard UI with AI features
- Answer the questions in `SOLUTION_DESIGN.md`
- Ensure all 54 tests pass, typecheck is clean, and lint passes
- Add the following GitHub users as collaborators on your repo (how to add collaborators):
  - `naodya` (Naod — Engineering)
  - `juliusoh` (Julius — Engineering)
- Send the repo link to your BLEN recruiting contact
| Area | Weight | What We Look For |
|---|---|---|
| Tests passing | 30% | All 54 tests green |
| Code quality | 25% | Clean TypeScript, separation of concerns, lint/typecheck clean |
| AI integration quality | 20% | Error handling, prompt design, LLM client robustness |
| Solution design | 15% | Production thinking, security awareness, trade-offs |
| UI implementation | 10% | Functional, includes AI features, well-structured |
If anything is unclear, reach out. We'd rather you ask than guess.
Good luck!