[Bug]: Race condition in TaskCalendarSyncService debounce fetching stale metadata cache #1763

@martin-forge

Description

Describe the bug

There is a race condition when a task is updated rapidly (e.g., via the MCP API, a programmatic edit, or a quick UI interaction). The update persists correctly to the file's frontmatter, but the subsequent Google Calendar sync misses the change when it fires before Obsidian's metadata cache has re-indexed the file, overwriting the calendar event with stale metadata or failing to update the remote event at all.

Steps to reproduce

  1. Programmatically update a task's frontmatter (e.g., changing scheduled from April 4th to April 6th) via TaskUpdateService.updateTask or an AI agent utilizing MCP.
  2. The property saves to the file successfully.
  3. TaskCalendarSyncService.updateTaskInCalendar(task) fires.
  4. The service starts its 500 ms SYNC_DEBOUNCE_MS delay.
  5. When the debounce expires, the service discards the explicit task payload it was given and instead queries the Obsidian metadata cache: const freshTask = await this.plugin.cacheManager.getTaskInfo(taskPath);
  6. Because Obsidian's metadataCache re-indexes asynchronously, it often takes longer than 500 ms to index the new file contents, so the query returns the stale "April 4th" task values.
  7. The plugin pushes the stale April 4th dates back into Google Calendar, and the correct "April 6th" update is lost.
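The timing in the steps above can be sketched as a self-contained simulation (all names here are hypothetical stand-ins, not the plugin's real classes): a fake cache "indexes" a write only after an artificial 800 ms lag, while the sync re-fetches after the 500 ms debounce and sees the old value.

```typescript
// Hypothetical simulation of the race; the lag and debounce values mirror
// the scenario described in the issue.
type TaskInfo = { path: string; scheduled: string };

const SYNC_DEBOUNCE_MS = 500;
const delay = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

class FakeMetadataCache {
  private indexed = new Map<string, TaskInfo>();

  getTaskInfo(path: string): TaskInfo | undefined {
    return this.indexed.get(path);
  }

  // Simulate asynchronous re-indexing: the new value only becomes
  // visible after `lagMs`.
  writeWithLag(task: TaskInfo, lagMs: number): void {
    void delay(lagMs).then(() => this.indexed.set(task.path, { ...task }));
  }
}

async function raceDemo(): Promise<string> {
  const cache = new FakeMetadataCache();
  cache.writeWithLag({ path: "t.md", scheduled: "2024-04-04" }, 0);
  await delay(10); // initial state is indexed

  // The update persists immediately, but indexing lags 800 ms...
  cache.writeWithLag({ path: "t.md", scheduled: "2024-04-06" }, 800);

  // ...while the debounced sync re-fetches after only 500 ms.
  await delay(SYNC_DEBOUNCE_MS);
  const fresh = cache.getTaskInfo("t.md");
  return fresh?.scheduled ?? "missing"; // still the stale April 4th value
}
```

Running `raceDemo()` returns the stale `"2024-04-04"`, which is exactly the state the sync then pushes to Google Calendar.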

Expected behaviour

The debouncer in TaskCalendarSyncService should cache the explicit task state passed to it by the update service for the duration of the debounce delay, so that it can execute against that authoritative snapshot even if the metadata cache has not yet indexed the file.

Proposed fix

I have verified locally that resolving the debounce with the passed payload eliminates the race condition. I added a this.pendingTasks = new Map<string, TaskInfo>() field to the service class. Inside updateTaskInCalendar, the explicit task payload is cached in this Map immediately; when the internal setTimeout fires, the entry is retrieved and deleted to provide the authoritative state to executeTaskUpdate. The service falls back to the cacheManager lookup only if the pending-task lookup returns null.
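A minimal sketch of that pending-task approach, assuming a simplified TaskInfo shape and injected stand-ins for the cache lookup and executeTaskUpdate (the real service naturally carries more state than this):

```typescript
type TaskInfo = { path: string; scheduled: string };

// Hypothetical shape of the fix: the explicit payload is cached under the
// task path for the lifetime of the debounce, so the timer callback never
// depends on the metadata cache having finished indexing.
class TaskCalendarSyncService {
  private static readonly SYNC_DEBOUNCE_MS = 500;
  private pendingTasks = new Map<string, TaskInfo>();
  private timers = new Map<string, ReturnType<typeof setTimeout>>();

  constructor(
    // Stand-in for this.plugin.cacheManager.getTaskInfo
    private getCachedTask: (path: string) => Promise<TaskInfo | undefined>,
    private executeTaskUpdate: (task: TaskInfo) => Promise<void>
  ) {}

  updateTaskInCalendar(task: TaskInfo): void {
    // Cache the authoritative snapshot before starting the debounce.
    this.pendingTasks.set(task.path, task);

    const prior = this.timers.get(task.path);
    if (prior !== undefined) clearTimeout(prior);

    this.timers.set(
      task.path,
      setTimeout(() => {
        void this.flush(task.path);
      }, TaskCalendarSyncService.SYNC_DEBOUNCE_MS)
    );
  }

  private async flush(path: string): Promise<void> {
    this.timers.delete(path);
    // Prefer the explicit snapshot; fall back to the cache only when no
    // pending payload exists (e.g. a sync triggered without one).
    const pending = this.pendingTasks.get(path);
    this.pendingTasks.delete(path);
    const task = pending ?? (await this.getCachedTask(path));
    if (task !== undefined) await this.executeTaskUpdate(task);
  }
}
```

Keying both Maps by task path also preserves the debouncer's coalescing behaviour: rapid successive updates to the same task replace the pending snapshot and reset the timer, so only the latest state is synced.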

I've written a Jest test that simulates the async indexing lag; it reproduces the upstream failure and validates the pending-queue fix.

I will follow up with a pull request containing this fix.
