
feat(askAi): no-issue: add Ask AI chatbot#9505

Open
julianam-w wants to merge 46 commits into main from feat/ask-ai/add-ask-ai-chatbot

Conversation


julianam-w (Contributor) commented Apr 4, 2026

Changes

feat(askAi): no-issue: add Ask AI chatbot feature
Adds a RAG-powered AI chatbot to Tamanu that allows clinical staff to ask
questions about how Tamanu works. Uses Voyage AI for embeddings, a PostgreSQL
RAG database for context retrieval, and Claude via BAML for answer generation.

  • Database migrations and models for AskAiConversation and AskAiMessage
  • AskAiService with hybrid vector + full-text RAG search (top 10 results)
  • API routes on both facility and central servers under /ask-ai/conversations
  • BAML schema and generated client for structured LLM responses
  • Floating chat panel in the web sidebar footer (toggle on/off)
  • Markdown rendering for AI responses via react-markdown
  • Sources omitted for tamanu namespace (codebase paths not useful to end users)
  • Prompt instructs AI to note facility vs central server availability
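
Roughly, those pieces compose like this (a sketch with stand-in names, not the actual AskAiService API):

```typescript
type RagChunk = { filePath: string; text: string; score: number };

// Stand-ins for the real integrations described above.
declare function embedWithVoyage(text: string): Promise<number[]>; // Voyage AI embedding
declare function hybridRagSearch(q: string, e: number[]): Promise<RagChunk[]>; // vector + FTS, top 10
declare const b: { AskTamanu(q: string, ctx: RagChunk[]): Promise<{ answer: string }> }; // BAML client

async function answerQuestion(question: string): Promise<string> {
  const embedding = await embedWithVoyage(question);
  const chunks = await hybridRagSearch(question, embedding);
  const { answer } = await b.AskTamanu(question, chunks); // structured Claude response
  return answer;
}
```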

Auto-Deploy

  • Deploy

Options

  • Synthetic test

Tests

  • Run E2E Tests

Review Hero

  • Run Review Hero
  • Auto-fix review suggestions: wait for Review Hero to finish, resolve any comments you disagree with or want to fix manually, then check this to auto-fix the rest.
  • Auto-fix CI failures: check this to auto-fix lint errors, test failures, and other CI issues.
  • Auto-merge upstream: check this to merge the base branch into this PR, with AI conflict resolution if needed.

Remember to...

  • ...write or update tests
  • ...add UI screenshots and testing notes to the Linear issue
  • ...add any manual upgrade steps to the Linear issue
  • ...update the config reference, settings reference, or any relevant runbook(s)
  • ...call out additions or changes to config files for the deployment team to take note of

julianam-w requested a review from a team as a code owner April 4, 2026 21:12
Comment thread packages/shared/src/baml_src/baml_client/type_builder.ts Fixed
Comment thread packages/shared/src/baml_src/baml_client/watchers.ts Fixed
julianam-w changed the title from "Feat/ask ai/add ask ai chatbot" to "feat(ask ai): no-issue: add Ask AI chatbot" on Apr 4, 2026
Comment thread packages/shared/src/services/AskAiService.ts
Comment thread packages/shared/src/services/AskAiService.ts Outdated
Comment thread packages/central-server/app/askAi.js Outdated
Comment thread packages/database/src/migrations/1772000000001-createAskAiTables.ts
Comment thread packages/shared/src/services/AskAiService.ts
Comment thread packages/facility-server/app/routes/apiv1/askAi.js Outdated
Comment thread packages/central-server/app/askAi.js Outdated
Comment thread packages/shared/src/services/AskAiService.ts Outdated
Comment thread packages/shared/src/services/AskAiService.ts Outdated
Comment thread packages/shared/src/services/AskAiService.ts Outdated

review-hero Bot commented Apr 4, 2026

🦸 Review Hero Summary
15 agents reviewed this PR | 8 critical | 8 suggestions | 0 nitpicks | Filtering: consensus 3 voters, 7 below threshold

Below consensus threshold (7 unique issues not confirmed by majority)
  • packages/central-server/app/askAi.js:66 (Performance, suggestion): GET /conversations/:id fetches all messages in a conversation with no limit. Long-running conversations will grow unbounded. The chat() function already caps history to 20 messages for the LLM, b...
  • packages/central-server/app/askAi.js:88 (Security, suggestion): No input validation on req.body.content before passing it to the AI service. There is no length limit, so a user could send an extremely large message body that gets stored in the database, sent ...
  • packages/database/src/models/AskAiConversation.ts:1 (BES Requirements, suggestion): The migration defines a deleted_at column on conversations, but the model does not set paranoid: true. Either add paranoid: true to the model options (so Sequelize uses soft deletes with the ...
  • packages/shared/src/services/AskAiService.ts:92 (BES Requirements, suggestion): The RAG query references ${namespace}_code and ${namespace}_docs (unqualified table names), but the migration creates the rag schema and the ops doc says the tables are rag.tamanu_code and ...
  • packages/shared/src/services/AskAiService.ts:92 (Performance, suggestion): The namespace variable is interpolated directly into table names (${namespace}_code, ${namespace}_docs) without any validation. While this is a config value (not user input), it means the que...
  • packages/shared/src/services/AskAiService.ts:213 (Bugs & Correctness, suggestion): response.clarifyingQuestion || response.answer will treat an empty string '' as falsy. Since the BAML schema defines clarifyingQuestion as a string that is "Empty string if not needed", a nor...
  • packages/web/app/components/AskAi/AskAiPanel.jsx:1 (Design & Architecture, suggestion): This 400-line component handles state management (conversation lifecycle, message list, loading), API calls, drag/resize behaviour, markdown rendering, and all the styled components in a single fil...
Local fix prompt (copy to your coding agent)

Fix these issues identified on the pull request. One commit per issue fixed.


packages/shared/src/services/AskAiService.ts:185: process.env.ANTHROPIC_API_KEY = anthropicApiKey mutates the global process environment on every request. In a concurrent server, simultaneous requests could race on this value — though currently all requests use the same key, this is a fragile pattern. Instead, pass the API key via BAML's env option: b.AskTamanu(..., { env: { ANTHROPIC_API_KEY: anthropicApiKey } }).
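
A minimal sketch of the suggested fix, assuming the generated BAML client accepts an env option in the shape the reviewer describes (not verified against the generated client's actual signature):

```typescript
// Pass the key per-call instead of mutating process.env.
const response = await b.AskTamanu(question, context, {
  env: { ANTHROPIC_API_KEY: anthropicApiKey },
});
```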


packages/shared/src/services/AskAiService.ts:95: SQL injection: namespace is interpolated directly into the SQL query (FROM ${namespace}_code, FROM ${namespace}_docs). Although the namespace currently comes from server config, this is a system boundary where parameterised queries are required per project rules. Validate namespace against an allowlist (e.g. /^[a-z_]+$/) or quote it as an identifier.
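
A sketch of the allowlist check, using the pattern the reviewer suggests:

```typescript
// Reject anything outside the expected identifier shape before it reaches SQL.
function assertSafeNamespace(namespace: string): string {
  if (!/^[a-z_]+$/.test(namespace)) {
    throw new Error(`Invalid RAG namespace: ${namespace}`);
  }
  return namespace;
}

const ns = assertSafeNamespace(namespace);
// ns can now be interpolated into identifiers like `${ns}_code` / `${ns}_docs`.
```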


packages/central-server/app/askAi.js:27: All Ask AI endpoints use req.flagPermissionChecked() without any actual permission check via req.ability.can(). Per coding rules, all API endpoints must have permission checks, with no TODO or placeholder permission checks. Any authenticated user can access any other user's conversations if they guess the UUID (the userId filter mitigates listing, but consider adding a proper permission like req.checkPermission('read', 'AskAiConversation')).


packages/database/src/migrations/1772000000001-createAskAiTables.ts:50: The conversation_id foreign key on messages has no onDelete clause, so it defaults to RESTRICT. The DELETE route in askAi.js calls conversation.destroy() which will fail with a FK constraint error if the conversation has any messages. Add onDelete: 'CASCADE' to the FK reference so messages are cleaned up when a conversation is deleted.
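
A sketch of the corrected column definition inside the createAskAiTables migration (table and column names follow the review comment; the surrounding columns are elided):

```typescript
import { DataTypes, QueryInterface } from 'sequelize';

export async function up(query: QueryInterface): Promise<void> {
  await query.createTable('ask_ai_messages', {
    id: { type: DataTypes.UUID, primaryKey: true, defaultValue: DataTypes.UUIDV4 },
    conversation_id: {
      type: DataTypes.UUID,
      allowNull: false,
      references: { model: 'ask_ai_conversations', key: 'id' },
      onDelete: 'CASCADE', // messages are removed along with their conversation
    },
    content: { type: DataTypes.TEXT, allowNull: false },
    // ... remaining columns ...
  });
}
```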


packages/shared/src/services/AskAiService.ts:209: The user message and assistant response are saved as two separate create() calls outside a transaction. If the second create fails, the conversation is left in an inconsistent state (user message saved, assistant response lost). Wrap both creates in a sequelize.transaction() block per project conventions.
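
A sketch of the transactional write, assuming sequelize and the AskAiMessage model from the PR description are in scope at the call site:

```typescript
// Both writes commit or neither does, so a failed LLM save can't strand
// a lone user message.
await sequelize.transaction(async transaction => {
  await models.AskAiMessage.create(
    { conversationId, role: 'user', content: userContent },
    { transaction },
  );
  await models.AskAiMessage.create(
    { conversationId, role: 'assistant', content: assistantContent },
    { transaction },
  );
});
```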


packages/web/app/components/AskAi/AskAiPanel.jsx:294: The placeholder text 'Ask a question about how Tamanu works.' and all other user-facing strings in this component are hardcoded English. Per project rules, user-facing strings must use TranslatedText, not hardcoded English. This applies to 'Thinking…', 'Ask a question…', 'Something went wrong. Please try again.', 'Note: This answer may need verification', source labels, etc.
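
For example (the stringId value is hypothetical):

```tsx
<TranslatedText
  stringId="askAi.panel.placeholder"
  fallback="Ask a question about how Tamanu works."
/>
```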


packages/facility-server/app/routes/apiv1/askAi.js:92: req.settings[req.facilityId]?.getFrontEndSettings() — if the facility server serves multiple facilities and the request's req.facilityId is not set (e.g. the ask-ai routes don't seem to require a facility selection in the middleware chain), this will silently return undefined, and JSON.stringify(undefined) returns undefined (not a string), which will be passed to the LLM as the literal string 'undefined'. Verify that req.facilityId is always populated for these routes, or add a fallback.


packages/shared/src/services/AskAiService.ts:117: The RRF (Reciprocal Rank Fusion) query is incorrect. ROW_NUMBER() is computed over the full result set of the FULL OUTER JOIN, not over each individual source CTE. When a row exists only in vector_search (not in fts_search), f.score is NULL, yet ROW_NUMBER() OVER (ORDER BY f.score DESC) still assigns it a rank among all rows — producing meaningless fusion scores. The ROW_NUMBER should be computed inside the CTEs before joining, e.g. add ROW_NUMBER() OVER (ORDER BY score DESC) AS rank to each CTE and then compute 1.0/(60+v.rank) + 1.0/(60+f.rank) in the join.
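
A sketch of the corrected query shape, ranking inside each CTE before the join as the reviewer suggests (table and column names are assumed from the review comments; only the docs table is shown, the code table gets the same treatment):

```typescript
const rrfSql = `
  WITH vector_search AS (
    SELECT file_path, text,
           ROW_NUMBER() OVER (ORDER BY embedding <=> $1::vector) AS rank
    FROM tamanu_docs
    ORDER BY embedding <=> $1::vector
    LIMIT 50
  ),
  fts_search AS (
    SELECT file_path, text,
           ROW_NUMBER() OVER (
             ORDER BY ts_rank(to_tsvector('english', text),
                              plainto_tsquery('english', $2)) DESC
           ) AS rank
    FROM tamanu_docs
    WHERE to_tsvector('english', text) @@ plainto_tsquery('english', $2)
    ORDER BY ts_rank(to_tsvector('english', text),
                     plainto_tsquery('english', $2)) DESC
    LIMIT 50
  )
  SELECT file_path, text,
         COALESCE(1.0 / (60 + v.rank), 0)
           + COALESCE(1.0 / (60 + f.rank), 0) AS rrf_score
  FROM vector_search v
  FULL OUTER JOIN fts_search f USING (file_path, text)
  ORDER BY rrf_score DESC
  LIMIT 10;
`;
```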


packages/shared/src/baml_src/baml_client/async_client.ts:1: The entire baml_client/ directory (~1,200 lines of generated code) is committed to the repo despite being auto-generated and already in .gitignore. The .gitignore entry and the baml:generate build step suggest these files should NOT be checked in — they'll cause noisy diffs on every BAML version bump. Remove the committed files and rely solely on the build-time generation.


packages/shared/src/services/AskAiService.ts:57: The RAG Sequelize connection cache (ragDbCache) is a module-level Map with no eviction, no connection pool limit, and no shutdown hook. If ragDatabaseUrl ever changes (config reload, rotation), the stale connection persists forever. Consider either creating the connection once at startup and passing it in, or at minimum providing a close/dispose function and capping the cache to one entry.
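
A sketch of a single-entry cache with an explicit dispose path:

```typescript
import { Sequelize } from 'sequelize';

let ragDb: { url: string; connection: Sequelize } | null = null;

export async function getRagDb(url: string): Promise<Sequelize> {
  if (ragDb && ragDb.url !== url) {
    await ragDb.connection.close(); // URL changed: drop the stale pool
    ragDb = null;
  }
  ragDb ??= { url, connection: new Sequelize(url, { logging: false }) };
  return ragDb.connection;
}

// Call from the server's shutdown hook.
export async function closeRagDb(): Promise<void> {
  await ragDb?.connection.close();
  ragDb = null;
}
```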


packages/web/app/App.jsx:99: The Ask AI FAB and panel are unconditionally rendered for every logged-in user regardless of whether the feature is enabled on the server or whether the user has permission. This adds UI clutter and a dependency (react-markdown, react-draggable, re-resizable) to every session. Gate the FAB behind a feature flag or server config check (e.g. a lightweight /ask-ai/status endpoint or a settings value), and consider lazy-loading the panel component so the bundle cost is only paid when the feature is active.
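
A sketch of the gating plus lazy-load; the useAskAiStatusQuery hook name appears in a later commit on this PR, but its response shape here ({ enabled }) is an assumption:

```tsx
import React, { Suspense, lazy } from 'react';

// Import elided; declared here so the sketch stands alone.
declare const useAskAiStatusQuery: () => { data?: { enabled: boolean } };

const AskAiPanel = lazy(() => import('./components/AskAi/AskAiPanel'));

export const AskAiMount = () => {
  const { data } = useAskAiStatusQuery();
  if (!data?.enabled) return null; // no FAB when the backend has the feature off
  return (
    <Suspense fallback={null}>
      <AskAiPanel />
    </Suspense>
  );
};
```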


packages/facility-server/app/routes/apiv1/askAi.js:1: The facility-server and central-server route files are near-identical (~127 lines each). This is a significant DRY violation — the only difference is how appSettings is accessed (req.settings.getFrontEndSettings() vs req.settings[req.facilityId]?.getFrontEndSettings()). Extract the shared route handlers into @tamanu/shared (or a shared helper) and have each server supply only its settings accessor, rather than maintaining two copies that will inevitably drift.
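
A sketch of the factory shape this describes (a later commit on this PR extracts a shared askAiRouter along these lines; the handler body here is elided):

```typescript
import express, { Request, Router } from 'express';

type GetAppSettings = (req: Request) => Promise<unknown>;

export function createAskAiRoutes(getAppSettings: GetAppSettings): Router {
  const router = express.Router();
  router.post('/conversations/:id/messages', async (req, res) => {
    const appSettings = await getAppSettings(req); // the only per-server difference
    // ... shared chat handling ...
    res.json({ ok: true });
  });
  return router;
}

// central:  createAskAiRoutes(req => req.settings.getFrontEndSettings())
// facility: createAskAiRoutes(req => req.settings[req.facilityId]?.getFrontEndSettings())
```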


packages/central-server/app/askAi.js:46: The GET /conversations endpoint loads all conversations for a user with no pagination (findAll with no limit). Over time a power user could accumulate thousands of conversations, making this query increasingly slow and the response payload large. Add limit/offset or cursor-based pagination.
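
A pagination sketch, assuming the route handler has req, models, and userId in scope (parameter names and caps are arbitrary):

```typescript
const page = Number(req.query.page ?? 0);
const rowsPerPage = Math.min(Number(req.query.rowsPerPage ?? 50), 100);

const conversations = await models.AskAiConversation.findAll({
  where: { userId },
  order: [['updatedAt', 'DESC']],
  limit: rowsPerPage,
  offset: page * rowsPerPage,
});
```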


packages/shared/src/services/AskAiService.ts:88: The 1024-dimensional embedding vector is interpolated directly into the SQL string (embeddingLiteral) and appears 4 times in the query text. Each request sends a ~12 KB SQL string to Postgres that must be re-parsed every time. Use a parameterised binding ($1::vector) instead — the query plan can be cached, and you avoid building/transmitting the giant literal repeatedly.
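
A sketch using a bind parameter instead of the inlined literal, assuming ragDb and embedding are in scope (pgvector accepts the '[...]' text form cast to ::vector):

```typescript
import { QueryTypes } from 'sequelize';

// $1 is sent once as a bind parameter rather than being spliced into the
// SQL text four times.
const rows = await ragDb.query(
  `SELECT file_path, text FROM tamanu_docs
   ORDER BY embedding <=> $1::vector
   LIMIT 10`,
  { bind: [`[${embedding.join(',')}]`], type: QueryTypes.SELECT },
);
```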


packages/shared/src/services/AskAiService.ts:107: The full-text search CTEs call to_tsvector('english', text) on every row at query time — this is a sequential scan with per-row text parsing. On a table with several thousand chunks this will be slow (~seconds). The RAG sidecar should create a GIN index on to_tsvector('english', text) for both tables; without it, this query will degrade linearly with table size and dominate request latency.
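
A sketch of the index DDL this calls for; the index expression must match the query's to_tsvector call exactly for the planner to use it:

```typescript
// Run once against the RAG database (e.g. from the sidecar's setup step).
await ragDb.query(
  `CREATE INDEX IF NOT EXISTS tamanu_docs_text_fts
     ON tamanu_docs USING gin (to_tsvector('english', text))`,
);
await ragDb.query(
  `CREATE INDEX IF NOT EXISTS tamanu_code_text_fts
     ON tamanu_code USING gin (to_tsvector('english', text))`,
);
```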


packages/shared/src/services/AskAiService.ts:4: The SENSITIVE_KEY_PATTERN regex (/password|apikey|secret|token|databaseurl|connectionstring/i) is used to redact config before sending it to Anthropic's external API. This allowlist approach is fragile — it misses common sensitive key names like key (standalone), credential, cert, private, auth, signing, and any deployment-specific names. The full (redacted) server config is sent to a third-party API on every message. Consider inverting the approach: only include an explicit allowlist of config keys that are safe and useful for the AI, rather than trying to redact all sensitive ones from the full config.
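
A sketch of the inverted approach; the key list is purely illustrative, and choosing it is the real work:

```typescript
// Only explicitly named keys ever leave the server; everything else is
// dropped rather than pattern-matched for redaction.
const AI_SAFE_CONFIG_KEYS = ['language', 'countryTimeZone', 'features'];

function pickSafeConfig(config: Record<string, unknown>): Record<string, unknown> {
  return Object.fromEntries(
    AI_SAFE_CONFIG_KEYS.filter(key => key in config).map(key => [key, config[key]]),
  );
}
```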

Comment thread packages/central-server/app/askAi.js Outdated
Comment thread packages/shared/src/services/AskAiService.ts
Comment thread packages/web/app/components/AskAi/AskAiPanel.jsx
Comment thread packages/shared/src/services/AskAiService.ts Outdated
Comment thread packages/shared/src/services/AskAiService.ts
Comment thread packages/facility-server/app/routes/apiv1/askAi.js Outdated
Comment thread packages/web/app/App.jsx Outdated
Comment thread packages/shared/src/baml_src/baml_client/async_client.ts Outdated
Comment thread packages/facility-server/app/routes/apiv1/askAi.js Outdated
Comment thread packages/shared/src/services/AskAiService.ts Outdated

review-hero Bot commented Apr 4, 2026

🦸 Review Hero Summary
12 agents reviewed this PR | 6 critical | 5 suggestions | 0 nitpicks | Filtering: consensus 3 voters, 7 below threshold

Below consensus threshold (7 unique issues not confirmed by majority)
  • packages/database/src/migrations/1772000000001-createAskAiTables.ts:52 (BES Requirements, suggestion): The conversation_id foreign key on messages has no onDelete clause. When a conversation is destroyed (the DELETE endpoint calls conversation.destroy()), orphan message rows will remain in `...
  • packages/shared/src/services/AskAiService.ts:88 (Bugs & Correctness, critical): SQL injection via embedding literal. The embedding vector from the Voyage AI API response is interpolated directly into the SQL string without sanitisation (${embeddingLiteral}). If the external ...
  • packages/shared/src/services/AskAiService.ts:172 (Bugs & Correctness, suggestion): The conversation history query fetches messages ordered ASC with LIMIT 20, which returns the oldest 20 messages. For a long conversation, this drops the most recent context. It should fetch the...
  • packages/shared/src/services/AskAiService.ts:188 (BES Requirements, suggestion): const assistantContent = response.clarifyingQuestion || response.answer; uses ||. Per project preferences: 'Prefer ?? over ||'. An empty string '' is a valid value for these fields (BAML ...
  • packages/shared/src/services/AskAiService.ts:210 (Bugs & Correctness, suggestion): The user message and assistant response are saved in two separate create calls without a transaction. If the second create (assistant message) fails, the conversation will have the user message...
  • packages/web/app/components/AskAi/AskAiPanel.jsx:1 (Design & Architecture, suggestion): This 400-line file mixes layout/styling (13 styled components), state management (6 useState hooks, conversation lifecycle), API orchestration, and rendering into a single component. Consider spl...
  • packages/web/app/components/AskAi/AskAiPanel.jsx:92 (Bugs & Correctness, suggestion): No input validation on message content before sending to the API. An empty content field (after trimming) is prevented, but there's no length limit. A user could paste an extremely large message ...
Local fix prompt (copy to your coding agent)

Fix these issues identified on the pull request. One commit per issue fixed.


packages/central-server/app/askAi.js:26: All API endpoints must have permission checks per coding rules (req.ability.can(action, subject) + req.flagPermissionChecked()). Every route here calls req.flagPermissionChecked() without actually checking any permission — this is effectively a placeholder permission check, which the rules explicitly forbid: 'No TODO or placeholder permission checks — raise for discussion instead.' At minimum, check that the user has a relevant ability before flagging.


packages/shared/src/services/AskAiService.ts:195: process.env.ANTHROPIC_API_KEY = anthropicApiKey mutates a global shared object. In a concurrent Node.js server handling multiple requests, two simultaneous Ask AI requests could race on this assignment, causing one request to use the other's API key (or a stale value). Pass the key via BAML's env option instead (e.g. b.AskTamanu(..., { env: { ANTHROPIC_API_KEY: anthropicApiKey } })) to avoid the shared mutable state.


packages/web/app/components/AskAi/AskAiPanel.jsx:282: Multiple user-facing strings are hardcoded English instead of using TranslatedText as required by project conventions: "Ask a question about how Tamanu works.", "I don't have enough information…", "Something went wrong. Please try again.", "Note: This answer may need verification", "Thinking…", "Ask a question…", "Chat", "Sources:". These should all use <TranslatedText stringId="askAi.…" fallback="…" />.


packages/shared/src/services/AskAiService.ts:107: The namespace parameter is interpolated directly into SQL (FROM ${namespace}_code, FROM ${namespace}_docs) without parameterisation. Per coding rules: 'Parameterised queries only — never interpolate user input into SQL.' Even though this currently comes from config, it's an injection vector if the config value is ever derived from user input or external sources. Use a whitelist check or escape the identifier (e.g. Sequelize.Utils.quoteIdentifier).


packages/shared/src/services/AskAiService.ts:56: ragDbCache is a module-level Map that caches Sequelize connections but never evicts or closes them. If ragDatabaseUrl changes (e.g. config reload), stale connections remain open. More importantly, there is no connection health check — if the database restarts, the cached Sequelize instance may hold dead connections. Consider using Sequelize's pool configuration with validation, or at minimum adding a way to close/evict entries.


packages/shared/src/services/AskAiService.ts:117: The RRF (Reciprocal Rank Fusion) query is incorrect. ROW_NUMBER() is computed over the entire FULL OUTER JOIN result set, not over each source independently. When a row exists only in vector_search (f.* is NULL), ROW_NUMBER() OVER (ORDER BY f.score DESC) assigns an arbitrary rank based on NULLs, not the row's actual FTS rank. The RRF scores should be computed before the join — rank within each CTE separately (e.g. add ROW_NUMBER() OVER (ORDER BY score DESC) AS rank inside vector_search and fts_search), then join on file_path+text and compute 1.0/(60+v.rank) + 1.0/(60+f.rank) in the outer query.


packages/facility-server/app/routes/apiv1/askAi.js:92: req.settings[req.facilityId]?.getFrontEndSettings() — if req.facilityId is undefined or the facility key doesn't exist in req.settings, this silently resolves to undefined, and JSON.stringify(undefined, null, 2) returns undefined (not a string), which gets passed to the LLM as the literal string "undefined". This will confuse the AI. Add a fallback: JSON.stringify(await req.settings[req.facilityId]?.getFrontEndSettings() ?? {}, null, 2).


packages/web/app/App.jsx:99: The Ask AI FAB and panel are unconditionally rendered for every logged-in user regardless of whether the feature is enabled on the server. This means users see a chat button that returns 503s when askAi.enabled is false. Gate the FAB behind a feature flag or settings check (e.g. a lightweight /ask-ai/status endpoint or an existing settings mechanism) so the UI only appears when the backend supports it.


packages/shared/src/baml_src/baml_client/async_client.ts:1: The entire baml_client/ directory (~1,200 lines of generated code) is committed to the repo despite being in .gitignore and regenerated by baml:generate during build. Generated files should not be checked in — they bloat diffs, risk staleness, and create merge conflicts. Remove them from the commit and rely solely on the build step.


packages/facility-server/app/routes/apiv1/askAi.js:1: The facility server and central server route files are nearly identical (~127 lines each) — a textbook DRY violation. The only difference is how appSettings is accessed (req.settings.getFrontEndSettings() vs req.settings[req.facilityId]?.getFrontEndSettings()). Extract the shared route handlers into a factory in @tamanu/shared that accepts a getAppSettings(req) callback, and mount the result in both servers. This eliminates a whole duplicated file that will inevitably drift.


packages/shared/src/services/AskAiService.ts:4: The sanitiseConfigForAi blocklist (/password|apikey|secret|token|databaseurl|connectionstring/i) is sent to an external LLM (Anthropic). Blocklist-based redaction is fragile — keys like credentials, privateKey, passphrase, auth, certificate, signingKey, or custom config keys containing secrets will pass through unredacted. Consider an allowlist approach instead: only include config keys known to be safe for sharing with the LLM, rather than trying to enumerate all sensitive patterns.


github-actions Bot commented Apr 5, 2026

🍹 destroy on tamanu-on-k8s/bes/tamanu-on-k8s/feat-ask-ai-add-ask-ai-chatbot

Pulumi report
   Destroying (feat-ask-ai-add-ask-ai-chatbot)

View Live: https://app.pulumi.com/bes/tamanu-on-k8s/feat-ask-ai-add-ask-ai-chatbot/updates/12

[per-resource deletion output omitted: migrator Jobs, HTTPRoutes, Services, Secrets, ConfigMaps, Deployments, Gateways, and CNPG database clusters across central, facility-1, facility-2, and patient-portal were all deleted]
Resources:
   - 72 deleted

Duration: 41s

The resources in the stack have been deleted, but the history and configuration associated with the stack are still maintained. 
If you want to remove the stack completely, run `pulumi stack rm feat-ask-ai-add-ask-ai-chatbot`.
   

julianam-w changed the title from "feat(ask ai): no-issue: add Ask AI chatbot" to "feat(askAi): no-issue: add Ask AI chatbot" on Apr 5, 2026
Comment thread packages/shared/src/services/AskAiService.ts Fixed
julianam-w requested a review from a team as a code owner April 5, 2026 10:28
julianam-w force-pushed the feat/ask-ai/add-ask-ai-chatbot branch 2 times, most recently from 26d07d8 to 68d52f9, on April 5, 2026 13:40
Comment thread packages/shared/src/services/AskAiService.ts
Comment thread packages/shared/src/services/AskAiService.ts Outdated
Comment thread packages/shared/src/services/askAiRouter.js Outdated
Comment thread packages/shared/src/services/askAiRouter.js Outdated
Comment thread packages/shared/src/services/AskAiService.ts Outdated
Comment thread packages/shared/src/services/AskAiService.ts

review-hero Bot commented Apr 6, 2026

🦸 Review Hero Summary
12 agents reviewed this PR | 2 critical | 4 suggestions | 0 nitpicks | Filtering: consensus 3 voters, 10 below threshold

Below consensus threshold (10 unique issues not confirmed by majority)
  • packages/shared/src/services/askAiRouter.js:19 (BES Requirements, critical): req.flagPermissionChecked() is called on the pre-auth /status route, but on the central server this route is mounted before authModule and without ensurePermissionCheck middleware (central ...
  • packages/shared/src/services/askAiRouter.js:46 (BES Requirements, suggestion): All authenticated Ask AI endpoints call req.flagPermissionChecked() but none perform an actual permission check with req.ability.can(action, subject). Per coding rules: 'All API endpoints must ...
  • packages/shared/src/services/askAiRouter.js:101 (Performance, suggestion): config.util.toObject() is called on every POST /messages request, serialising the entire config tree to JSON. This is not free — config objects can be large. Since the config doesn't change at ru...
  • packages/shared/src/services/askAiRouter.js:110 (Security, suggestion): The content field from req.body is passed to chat() without any validation — no type check, no length limit, no null check. A caller could send an extremely large string (megabytes) that gets...
  • packages/shared/src/services/AskAiService.ts:57 (Security, suggestion): sanitiseConfigForAi uses an allowlist which is good, but includes sync.host and metaServer.hosts — these expose internal network topology (hostnames/IPs of central servers and meta servers) t...
  • packages/shared/src/services/AskAiService.ts:148 (BES Requirements, suggestion): The SQL queries reference tamanu_code and tamanu_docs without the rag. schema prefix. Per the architecture docs ('tables rag.tamanu_code and rag.tamanu_docs') and the migration that creat...
  • packages/shared/src/services/AskAiService.ts:155 (BES Requirements, critical): The embedding vector is interpolated directly into the SQL string (${embeddingLiteral}) rather than using parameterised replacements. While the values are floats from the Voyage API, this violate...
  • packages/shared/src/services/AskAiService.ts:230 (BES Requirements, suggestion): User messages are sent verbatim to external APIs (Voyage AI for embedding, Anthropic for LLM completion). In a healthcare system, users may paste or type patient-identifiable information into the c...
  • packages/shared/src/services/AskAiService.ts:237 (Performance, suggestion): The conversation history query and the RAG search are independent of each other but run sequentially. Running them concurrently with Promise.all would reduce latency by overlapping the DB query w...
  • packages/web/app/components/AskAi/AskAiPanel.jsx:360 (Security, suggestion): User messages are rendered through <Markdown>{msg.content}</Markdown>. While react-markdown v9 does not render raw HTML by default (safe), if rehype-raw is ever added as a plugin, this become...
Local fix prompt (copy to your coding agent)

Fix these issues identified on the pull request. One commit per issue fixed.


packages/shared/src/services/AskAiService.ts:155: The RRF (Reciprocal Rank Fusion) SQL is incorrect and expensive. The FULL OUTER JOIN between vector_search and fts_search joins on file_path AND text, but chunks from different searches with different text won't match, producing a cross-product-like blowup. The ROW_NUMBER() windows in the SELECT of the rrf CTE operate over the entire joined result, not per-source ranking, so the RRF scores are meaningless. This will also be slow — each CTE scans without indexes (full-text search uses to_tsvector inline rather than a stored GIN index). Consider using RANK() partitioned within each source CTE before joining, or simplify to a UNION ALL + dedup approach.


packages/shared/src/services/AskAiService.ts:154: The embedding vector (1024 floats) is interpolated into the SQL string via string concatenation (embeddingLiteral), producing a ~10KB literal repeated 4 times in the query text (~40KB per request). This is parsed from scratch by Postgres on every call and cannot benefit from prepared-statement caching. Consider using a parameterised query ($1::vector) so Postgres can reuse the plan.


packages/shared/src/services/askAiRouter.js:65: GET /conversations loads all conversations for a user with no pagination or limit. Over time a user could accumulate hundreds of conversations, making this query increasingly expensive and the response payload large. Add a limit (and offset or cursor) to this findAll call, e.g. limit: 50, and accept pagination params from the query string.


packages/shared/src/services/askAiRouter.js:87: GET /conversations/:id loads all messages for a conversation without a limit. Long conversations could grow unbounded. The chat() function already caps history at 20 messages for the LLM, but this endpoint returns everything. Add a limit or pagination to avoid large payloads for old/long conversations.


packages/shared/src/services/AskAiService.ts:173: The full-text search CTEs call to_tsvector('english', text) inline in both the WHERE and SELECT clauses without a GIN index. On a table with thousands of RAG chunks this means a sequential scan computing tsvectors for every row on every query. If github-repo-rag doesn't create a GIN index on these tables, consider adding one in the migration, or document that the sidecar must create it.


packages/shared/src/services/AskAiService.ts:99: The ragDbCache Map grows without bound — one Sequelize connection pool per unique ragDatabaseUrl seen, never closed or evicted. In practice the URL is likely stable, but if it ever varies (e.g. per-request config, connection string rotation) this leaks connection pools. Consider adding a size guard or a close() cleanup path.

julianam-w and others added 15 commits April 11, 2026 09:45
Documents RAG database architecture, deployment sequence, manual
re-indexing, rollback procedure, and verification steps for the
Ask AI feature.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…tion or class'

Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
…tion or class'

Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
The baml_client/ directory is generated by `baml generate` during build
and is already listed in .gitignore. Removing it from the index to avoid
diff noise, staleness, and merge conflicts.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
… and fix timezone fallback

- Extract Ask AI routes into a shared askAiRouter so central and facility
  servers use the same handler logic
- Add RAG hybrid search (vector + FTS) and conversation history to AskAiService
- Add useAskAiStatusQuery hook and wire Ask AI panel to status check in App
- Fix centralServerLogin to fall back to local config primaryTimeZone when
  central server returns null, preventing DateTimeProvider context error

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
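
A minimal sketch of the shared mounting this commit describes. The import path, export name, and `/ask-ai` prefix are assumptions based on the package layout, not confirmed by the diff:

```ts
import express from 'express';
// Hypothetical path: the shared router lives somewhere under packages/shared.
import { askAiRouter } from '@tamanu/shared/services/askAiRouter';

// Called by both central-server and facility-server route setup, so the
// conversation/message handler logic exists in exactly one place.
export function addAskAiRoutes(app: express.Router) {
  app.use('/ask-ai', askAiRouter);
}
```
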
…tion or class'

Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Adds dbt model files for the new ask_ai.conversations and
ask_ai.messages tables, and removes stale available_facilities
columns from public schema model files.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
julianam-w and others added 19 commits April 11, 2026 09:46
…es dbt model

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…in procedures dbt model

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Match what npm ci + npm i produces in CI: convert dev to devOptional
for babel syntax plugins and v8 packages, and add peer flag to
OS-specific esbuild and rollup optional packages.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…branch

Local npm (Node.js 23) converted devOptional->dev for jest/babel/istanbul
packages. Realign with origin/main's flags, which match what CI's npm ci
+ npm i produces (Node.js 20).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…nical format

devOptional and other flags must appear after integrity in each entry,
matching exactly the key ordering produced by npm ci + npm i on Node.js 20.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…10.8.0

Regenerated with npm ci + npm i using the same Node.js version as CI
(v20.19.4, npm v10.8.0) to ensure flags and key ordering match exactly.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…untu.sh

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…ining

The previous query had three bugs:
- FULL OUTER JOIN on (file_path, text) caused a near-Cartesian blowup
  because chunks from vector and FTS searches rarely share identical text
- ROW_NUMBER() windows were computed over the entire joined result, not
  per-source, making RRF scores meaningless
- FTS used inline to_tsvector (no GIN index)

Fix: rank within each source CTE first, then UNION ALL + GROUP BY to
dedup and sum RRF scores. Adds a comment flagging the inline tsvector
as a future optimisation target.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
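
A minimal sketch of the reranked query shape this commit describes: rank within each source CTE, then UNION ALL and GROUP BY to dedup and sum the RRF scores. The rag.tamanu_docs table and the file_path/text/embedding columns come from the review comments; the per-source LIMITs, the RRF constant (60), and the final top 10 are assumptions:

```ts
// Sketch only, not the PR's exact query.
const HYBRID_RRF_SQL = `
  WITH vec AS (
    SELECT file_path, text,
           ROW_NUMBER() OVER (ORDER BY embedding <=> $embedding::vector) AS rank
    FROM rag.tamanu_docs
    ORDER BY embedding <=> $embedding::vector
    LIMIT 50
  ),
  fts AS (
    SELECT file_path, text,
           ROW_NUMBER() OVER (
             ORDER BY ts_rank(to_tsvector('english', text),
                              plainto_tsquery('english', $query)) DESC
           ) AS rank
    FROM rag.tamanu_docs
    WHERE to_tsvector('english', text) @@ plainto_tsquery('english', $query)
    LIMIT 50
  )
  SELECT file_path, text, SUM(1.0 / (60 + rank)) AS score
  FROM (SELECT * FROM vec UNION ALL SELECT * FROM fts) AS hits
  GROUP BY file_path, text
  ORDER BY score DESC
  LIMIT 10;
`;
```

Summing 1/(k + rank) across sources is standard reciprocal rank fusion: each rank is computed only within its own source, and a chunk found by both searches contributes two terms and rises to the top.
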
Interpolating 1024 floats into the SQL string produces ~40 KB of literal
text per request (the vector appears 4 times) that Postgres must re-parse
on every call. Using $embedding::vector and $query as bind parameters
sends the values out-of-band, allowing Postgres to cache the query plan.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
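
A sketch of the bind-parameter call under the same assumptions, using Sequelize's named $-parameters; the vector-literal formatting is the '[...]' input form pgvector accepts for a ::vector cast:

```ts
import { QueryTypes, Sequelize } from 'sequelize';

// pgvector parses '[0.1,0.2,...]' when the bound string is cast to ::vector.
const toVectorLiteral = (embedding: number[]) => `[${embedding.join(',')}]`;

async function hybridSearch(ragDb: Sequelize, embedding: number[], query: string) {
  // $embedding and $query travel out-of-band as bind parameters, so the
  // ~10 KB vector never appears in the SQL text and the plan can be reused.
  return ragDb.query(HYBRID_RRF_SQL, {
    type: QueryTypes.SELECT,
    bind: { embedding: toVectorLiteral(embedding), query },
  });
}
```
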
Adds limit/offset pagination to avoid loading all conversations for a
user in a single query. Defaults to limit=50, max 100. count is now
the total number of matching rows from findAndCountAll, so the client
can determine whether more pages exist.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
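
A sketch of the paginated endpoint as described; askAiRoutes, models, and req.user are assumed from the surrounding router module rather than shown in this excerpt:

```ts
const DEFAULT_LIMIT = 50;
const MAX_LIMIT = 100;

askAiRoutes.get('/conversations', async (req, res) => {
  // Clamp client-supplied values: default 50, never more than 100.
  const limit = Math.min(Number(req.query.limit) || DEFAULT_LIMIT, MAX_LIMIT);
  const offset = Number(req.query.offset) || 0;

  // count is the total number of matching rows, not the page size, so the
  // client can check offset + data.length < count to see if more pages exist.
  const { rows, count } = await models.AskAiConversation.findAndCountAll({
    where: { userId: req.user.id },
    order: [['updatedAt', 'DESC']],
    limit,
    offset,
  });

  res.send({ data: rows, count, limit, offset });
});
```
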
…/:id

Long conversations load unbounded message history. Add limit/offset
pagination matching the conversations endpoint — defaults to 50,
max 100. Returns messageCount (total) alongside the page so the client
can detect whether more messages exist.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
github-repo-rag already creates {table}_fts_idx (GIN on
to_tsvector('english', text)) and uses CREATE TABLE IF NOT EXISTS with
upsert, so indexes survive re-indexing. No migration needed.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
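
For reference, a hypothetical rendering of the index shape described; the sidecar owns the actual DDL:

```ts
// Illustrative only: the {table}_fts_idx shape github-repo-rag is said to create.
const FTS_INDEX_SQL = `
  CREATE INDEX IF NOT EXISTS tamanu_docs_fts_idx
  ON rag.tamanu_docs
  USING GIN (to_tsvector('english', text));
`;
```

Postgres matches expression indexes textually, so the inline to_tsvector('english', text) in the search CTEs can use this index as long as the expression is written identically.
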
The cache grew without bound — one Sequelize pool per unique URL, never
closed. Add a MAX_RAG_DB_CACHE_SIZE guard: when full, close and evict the
oldest entry (Map insertion order) before adding a new one.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…tor value

Guard the Map iterator result before destructuring to satisfy TypeScript's
type check — entries().next().value is [string, Sequelize] | undefined.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
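
A sketch of the bounded cache these two commits describe, including the iterator guard; the cache size and connection options are illustrative:

```ts
import { Sequelize } from 'sequelize';

const MAX_RAG_DB_CACHE_SIZE = 4; // assumed bound, not necessarily the PR's value
const ragDbCache = new Map<string, Sequelize>();

async function getRagDb(url: string): Promise<Sequelize> {
  const cached = ragDbCache.get(url);
  if (cached) return cached;

  if (ragDbCache.size >= MAX_RAG_DB_CACHE_SIZE) {
    // Map iterates in insertion order, so the first entry is the oldest.
    // entries().next().value is [string, Sequelize] | undefined, hence the
    // guard before destructuring.
    const oldest = ragDbCache.entries().next().value;
    if (oldest) {
      const [oldUrl, oldDb] = oldest;
      await oldDb.close(); // release the connection pool before evicting
      ragDbCache.delete(oldUrl);
    }
  }

  const db = new Sequelize(url, { logging: false });
  ragDbCache.set(url, db);
  return db;
}
```
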
@julianam-w julianam-w force-pushed the feat/ask-ai/add-ask-ai-chatbot branch from 06f47a8 to a0a2014 Compare April 11, 2026 00:23
…rial and add sequelize dep to shared

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
julianam-w and others added 8 commits April 11, 2026 18:30
….8.5 entries

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…json

npm ci requires explicit lock file entries for react@16.8.5 and
scheduler@0.13.6, used by central-server and facility-server; these
entries were lost during previous lock file regenerations on this branch.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…ion in package-lock.json

central-server and facility-server directly depend on react@16.8.5, which
requires nested node_modules entries in the lock file for npm ci. These
entries were lost during previous lock file regenerations on this branch.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…hared in package-lock.json

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…ring test context setup

BamlRuntime.fromFiles runs at module load time via globals.ts, which
caused createTestContext() to hang (exceeding 30s) in central-server
tests when buildRoutes imports askAiRoutes. Moving the import inside
chat() means the BAML runtime is only initialised on the first LLM
call, not during server/test startup.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
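
A sketch of the deferred import; the module path and the generated function name are assumptions about the BAML client:

```ts
async function chat(question: string, context: string[]): Promise<string> {
  // Imported here rather than at module top level: loading baml_client runs
  // BamlRuntime.fromFiles via its generated globals, which is what stalled
  // createTestContext(). Node caches the module, so only the first call pays.
  const { b } = await import('../baml_src/baml_client');

  // Hypothetical generated entrypoint name, for illustration only.
  return b.AnswerQuestion(question, context);
}
```
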
…ask-ai

The ask-ai LLM was receiving a sanitised copy of the merged server config
(including local.json5 overrides) on every chat request. Removing this
eliminates any risk of local configuration values being exposed via the
assistant, even through the existing allowlist.

The assistant retains access to front-end app settings (getFrontEndSettings),
which are the user-facing feature flags it needs to answer configuration
questions accurately.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
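
A hedged sketch of the resulting prompt assembly: getFrontEndSettings is named in the commit, but its signature and the helper wrapping it are assumptions:

```ts
// No server config reaches the prompt any more, allowlisted or otherwise;
// only user-facing front-end settings do (signature assumed here).
async function buildPromptContext(models: object, ragChunks: string[]) {
  const settings = await getFrontEndSettings(models);
  return { settings, ragChunks };
}
```
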
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>