A small but handy patch release adding a global search toggle for Miyo so you can search across everything you've indexed, not just your current vault!
- 🔍 Search everything in Miyo, not just your vault — A new "Search everything in Miyo" toggle (on by default) lets Miyo search across all indexed content instead of scoping results to your current vault folder. When you turn it off, searches are scoped to your vault and you'll see the folder identifier displayed so you know exactly what's being searched. (@wenzhengjiang)
More details in the changelog:
- #2353 feat(miyo): add toggle to search all indexed content @wenzhengjiang
- If models are missing, navigate to Copilot settings -> Models tab and click "Refresh Built-in Models".
- Please report any issue you see in the member channel!
A focused patch release with Miyo server compatibility fixes and a small debug table cleanup!
- 🔧 Miyo now uses your vault name — not the full path — Copilot sends your Obsidian vault folder name (instead of the absolute local path) when talking to Miyo, making it work correctly whether your Miyo server is local or remote. The outdated "Remote Vault Folder" setting has been removed since it's no longer needed. (@wenzhengjiang)
- 📁 Miyo API alignment: folder names and relative paths — Internal Miyo requests now use `folder_name` (matching the updated server API) and send vault-relative file paths for document indexing, keeping Copilot in sync with the Miyo server protocol. (@wenzhengjiang)
- 🛠️ Cleaner search debug table — The redundant row-index column has been removed from the search results debug log, and columns are now ordered more usefully: path, index type, modified time, score, explanation. (@logancyang)
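To see why sending the folder name fixes remote setups: an absolute local path like `/home/a/vaults/Notes` is meaningless to a Miyo server on another machine, while the final folder name identifies the vault on both. A minimal sketch of that derivation (the function name is illustrative, not the plugin's actual code):

```python
# Illustrative sketch, not the plugin's implementation: derive the vault
# name from a local path so the identifier stays valid whether the Miyo
# server runs on the same machine or a remote one with no such path.
import os

def vault_name(vault_path: str) -> str:
    """Return just the final folder name, e.g. '/home/a/vaults/Notes' -> 'Notes'."""
    return os.path.basename(os.path.normpath(vault_path))
```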
More details in the changelog:
- #2342 Use vault name for Miyo requests @wenzhengjiang
- #2348 refactor(miyo): rename folderPath to folderName and fix parse-doc path @wenzhengjiang
- #2349 refactor(miyo): rename folder_path to folder_name @wenzhengjiang
- #2344 fix(debug): remove idx column from search results debug table @logancyang
- If models are missing, navigate to Copilot settings -> Models tab and click "Refresh Built-in Models".
- Please report any issue you see in the member channel!
A solid patch release focused on Miyo remote server improvements and a handy OpenRouter prompt caching toggle!
- 🧠 Miyo remote server support, polished — Several improvements land together for users running Miyo on a remote machine. You can now set a Remote Vault Folder path so Copilot sends the correct path to your remote server (instead of your local path). The settings UI is cleaner too: "Vault Name" is now "Remote Vault Path (Optional)" and "Custom Miyo Server URL" is now "Remote Miyo Server URL (Optional)", both defaulting to blank so local users see no change. An indicator under the Enable Miyo toggle shows you the effective vault path and whether it resolves as local or remote. (@wenzhengjiang)
- 📱 Miyo stays off on mobile without a remote server — On mobile, where local service discovery is unavailable, Miyo now quietly disables itself unless you've configured a Remote Miyo Server URL. No more silent failures! (@wenzhengjiang)
- 🔁 Miyo folder API refactor — Under the hood, Miyo integration now uses the new folder-based API, using your vault path as the folder root and translating paths back to vault-relative paths. Index refreshes now notify you that the folder index is refreshing in Miyo. (@wenzhengjiang)
- ⚙️ OpenRouter: per-model prompt caching toggle — If you use an OpenRouter endpoint that doesn't support `cache_control` headers (like Zero Data Retention endpoints), you can now turn off prompt caching per model in the model edit dialog. Prompt caching stays on by default for everyone else. (@logancyang)
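For context, `cache_control` markers are Anthropic-style cache breakpoints attached to individual message content parts, which OpenRouter forwards to supporting endpoints. A minimal sketch of what the per-model toggle amounts to; the function name and payload layout are illustrative, not Copilot's actual code:

```python
# Illustrative sketch (not Copilot's code): an OpenAI-format chat payload
# with an Anthropic-style prompt-cache marker on the system prompt, and a
# per-model toggle that omits the marker for endpoints that reject it.
def build_messages(system_text: str, user_text: str, caching_enabled: bool):
    """Build chat messages, optionally marking the system prompt as cacheable."""
    system_content = [{"type": "text", "text": system_text}]
    if caching_enabled:
        # cache_control lets supporting endpoints reuse this prefix across
        # requests; Zero Data Retention endpoints reject it, hence the toggle.
        system_content[0]["cache_control"] = {"type": "ephemeral"}
    return [
        {"role": "system", "content": system_content},
        {"role": "user", "content": user_text},
    ]
```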
More details in the changelog:
- #2334 feat(miyo): add remote vault folder path override for remote servers @wenzhengjiang
- #2326 feat(miyo): rename vault name to remote vault path and align with server API @wenzhengjiang
- #2318 feat(openrouter): add per-model toggle for prompt caching @logancyang
- #2331 Refactor Miyo integration for folder API @wenzhengjiang
- #2328 feat(miyo): disable miyo on mobile without a remote server URL @wenzhengjiang
- If models are missing, navigate to Copilot settings -> Models tab and click "Refresh Built-in Models".
- Please report any issue you see in the member channel!
A packed patch release with Composer V2 editing, Azure provider unification, drag-to-insert wikilinks, Obsidian Bases support, LM Studio Responses API, and a wave of agent, search, and UI improvements!
- ✏️ Composer V2: smarter file editing — The new `editFile` tool replaces `replaceInFile` as the primary targeted-edit tool, bringing more reliable and precise in-file edits when Copilot modifies your notes. (@wenzhengjiang)
- 🔗 Drag relevant notes into your editor — You can now drag notes and sources from the Copilot chat panel directly into any editor to insert wikilinks instantly. Great for building connections while researching! (@logancyang)
- ☁️ Azure OpenAI and Azure Foundry unified — Both Azure providers are now merged into a single, cleaner Azure provider. No more confusion about which Azure to use! (@logancyang)
- 🗂️ Obsidian Bases support — Copilot's agent now has a `base:create` command and can read `.base` active notes, with a new read-only `obsidianBases` CLI tool for querying your Bases. (@logancyang)
- ⚡ LM Studio: Responses API with KV cache reuse — LM Studio models now use the Responses API for stateful KV cache reuse, giving you faster, more efficient conversations with local models. (@logancyang)
- 💡 Gemini Embedding 2 preview support — Gemini Embedding 2 preview is now available as an embedding model option. (@logancyang)
- 🤖 GitHub Copilot Chat supports tool calling — The GitHub Copilot Chat model now supports native tool calling, unlocking agent mode with it! (@Emt-lin)
- 🗺️ Automatic file renaming to match topic titles — When Copilot generates a topic title for a chat, the file is now automatically renamed to match. (@somethingSTRANGE)
- 💾 OpenRouter prompt caching — OpenRouter models now support `cache_control` for prompt caching, saving tokens on repeated context. (@logancyang)
- 🧠 Miyo improvements — Customizable vault name setting, remote backend mobile re-indexing support, and license auth header for Miyo requests. (@wenzhengjiang)
- 📅 CLI tool upgrades — Daily/random read tools, reasoning summaries, enhanced instructions from obsidian-skills reference, and daily note template workflow fixes for past/future dates. (@logancyang)
- 🛠️ Agent & search fixes — Improved inline citations, query deduplication, answer source priority, expanded search limits for time-range/tag queries, and removed `returnAll` to prevent token spikes. (@logancyang)
- 🎨 UI & UX polish — Quick Ask panel positioning overhaul, LaTeX rendering fix, Ollama numCtx config, "None" system prompt option, clickable citations, and more. (@Emt-lin, @logancyang)
- 🔧 Local model fixes — Stripped leaked special tokens from local model responses; agent tool paths now use `vault.read` instead of `cachedRead` for reliability. (@logancyang, @yu-zou)
- 🎬 YouTube transcript fix — Both classic and modern YouTube transcript panel DOM structures are now supported. (@Emt-lin)
- 🔒 Tiktoken CDN timeout fix — Defense-in-depth overrides prevent tiktoken CDN timeouts in Plus mode. (@logancyang)
More details in the changelog:
- #2305 Composer V2: Replace replaceInFile with editFile as the primary targeted-edit tool @wenzhengjiang
- #2311 feat(miyo): add customizable vault name setting @wenzhengjiang
- #2306 feat(lm-studio): use Responses API for stateful KV cache reuse @logancyang
- #2303 feat(tools): add base:create command and .base active note support @logancyang
- #2265 feat: add obsidianBases CLI tool (read-only) @logancyang
- #2299 feat: add Gemini Embedding 2 preview model support @logancyang
- #2291 feat: unify Azure OpenAI and Azure Foundry into single Azure provider @logancyang
- #2288 feat: drag relevant notes and sources into editor to insert wikilinks @logancyang
- #2279 feat(openrouter): enable prompt caching via cache_control @logancyang
- #2242 feat: refactor GitHubCopilotChatModel to support tool calling @Emt-lin
- #2301 feat: enhance CLI tool instructions from obsidian-skills reference @logancyang
- #2181 Add Obsidian CLI daily/random read tools and reasoning summaries @logancyang
- #2312 feat: add PR pricing agent @logancyang
- #2260 Add license auth header to Miyo requests @wenzhengjiang
- #2313 fix(ui): show enabled models without API keys as disabled in dropdown @logancyang
- #2308 fix(search): allow remote backends to re-index on mobile when disableIndexOnMobile is enabled @wenzhengjiang
- #2307 refactor(tools): remove daily note append/prepend CLI commands @logancyang
- #2304 fix(tools): daily note template workflow for past/future dates @logancyang
- #2300 fix(agent): improve inline citations, query dedup, and answer source priority @logancyang
- #2295 fix: restore expanded search limits for time-range and tag queries @logancyang
- #2293 fix: add "None" option to system prompt dropdown in chat settings @logancyang
- #2287 fix: agent loop improvements, clickable citations, and UI fixes @logancyang
- #2286 fix: support both classic and modern YouTube transcript panel DOM structures @Emt-lin
- #2285 fix: strip leaked special tokens from local model responses @logancyang
- #2283 fix: defense-in-depth overrides to prevent tiktoken CDN timeout in Plus mode @logancyang
- #2278 fix: Quick Ask/Command panel positioning overhaul @Emt-lin
- #2276 fix: command UI improvements, LaTeX rendering, and Ollama numCtx config @Emt-lin
- #2274 fix(tools): replace cachedRead with vault.read in agent/tool paths @yu-zou
- #2273 fix: remove returnAll from agent-facing search tools to prevent token spikes @logancyang
- #2269 fix: pass timeRange to Miyo search path @logancyang
- #2240 fix(rename): Add automatic file renaming to match generated topic titles @somethingSTRANGE
- If models are missing, navigate to Copilot settings -> Models tab and click "Refresh Built-in Models".
- Please report any issue you see in the member channel!
A patch release with Gemini stability fixes, Miyo improvements, and mobile/UI polish.
- 🛠️ Gemini fixes — Fixed streaming crash and agent loop silently stopping mid-conversation. If Gemini was cutting out on you, this should fix it! (@logancyang)
- 🛠️ Fix: "Connection error" for Copilot Plus users — Resolved TLS certificate errors that caused connection failures on some systems. (@logancyang)
- 🧠 Miyo improvements — Custom server URL for remote setups, confirmation dialog before clearing index, and smoother enable flow. (@wenzhengjiang)
- 📱 Mobile & UI fixes — Floating layers close properly on mobile, tables render correctly in chat, and stale selected text no longer bleeds into follow-up messages. (@Emt-lin)
- ⚡ Infinite scroll in chat history — Chat history now loads progressively as you scroll. Much snappier for long histories! (@logancyang)
- 🔧 Misc fixes — System prompt reset, template syntax hints, Qwen 3.5 search compatibility, Ctrl+Enter shortcut, custom model button layout. (@logancyang)
- 📝 Minor: User-facing documentation added (@logancyang), improved web tabs test coverage (@somethingSTRANGE)
More details in the changelog:
- #2251 Add infinite scroll pagination to ChatHistoryPopover @logancyang
- #2229 Add custom Miyo server URL setting for remote deployments @wenzhengjiang
- #2211 Add confirmation dialog before clearing Miyo index @wenzhengjiang
- #2256 Add automated release workflow on PR merge @logancyang
- #2254 Add user-facing documentation @logancyang
- #2252 Rename docs to designdocs and nest todo folder @logancyang
- #2239 Improve test coverage for context webTabs parsing @somethingSTRANGE
- #2255 Fix: use safeFetch for Copilot Plus to bypass browser TLS errors @logancyang
- #2249 Fix: upgrade @langchain/google-genai to fix Gemini streaming crash @logancyang
- #2247 Fix: prevent silent agent loop termination with Gemini @logancyang
- #2246 Fix: allow resetting default system prompt to built-in @logancyang
- #2245 Fix: improve system prompt template syntax hints @logancyang
- #2243 Fix: normalize string booleans in localSearch schema for Qwen 3.5 @logancyang
- #2234 Fix: Add Custom Chat Model action button crowding @logancyang
- #2228 Fix: skip redundant eligibility check when enabling Miyo @wenzhengjiang
- #2226 Fix: chat panel table rendering and third-party plugin compatibility @Emt-lin
- #2223 Fix: close Radix portaled layers when mobile drawer hides @Emt-lin
- #2222 Fix: exclude non-recoverable segments from L2 promotion @Emt-lin
- #2220 Fix: restore Ctrl+Enter text-replacement shortcut in command modal @logancyang
- If models are missing, navigate to Copilot settings -> Models tab and click "Refresh Built-in Models".
- Please report any issue you see in the member channel!
A quick fix for Miyo on Windows, converted doc output, and CORS fixes.
- 🖥️ Cross-platform Miyo service discovery — Miyo now works on Windows (Local + Roaming AppData fallback) (@wenzhengjiang)
- 📂 Configurable output folder for converted docs — New setting "Store converted markdown at" lets you save PDF/doc conversions as `.md` files in your vault. Great for reviewing what Copilot parsed and letting Miyo index them for search! Enable it under Document Processor (Copilot settings -> Plus). (@logancyang)
- 🛠️ Fix: CORS errors on license & API calls — Resolved intermittent CORS failures by routing BrevilabsClient through Obsidian's native request layer. If you saw license validation errors, this should fix it. (@logancyang)
- 🩺 Fix: Miyo health check logging — Non-"ok" health statuses (e.g. "degraded") are now properly logged instead of failing silently, making troubleshooting much easier. (@wenzhengjiang)
- 📝 Minor: Fixed broken FAQ link in README (@somethingSTRANGE), updated Miyo messaging from "upcoming" to "our desktop app" (@logancyang)
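The Windows fix above amounts to probing both AppData locations in order. A hypothetical sketch of such discovery logic; only the Local-then-Roaming fallback comes from the release notes, everything else is made up for illustration:

```python
# Hypothetical discovery sketch: on Windows, probe Local AppData first,
# then fall back to Roaming, mirroring the "Local + Roaming AppData
# fallback" in the release notes. The non-Windows branch is illustrative.
import os

def candidate_config_dirs(platform: str, env: dict) -> list:
    """Return config directories to probe for the service, most preferred first."""
    if platform == "win32":
        dirs = [env.get("LOCALAPPDATA"), env.get("APPDATA")]  # Local, then Roaming
    else:
        dirs = [env.get("XDG_CONFIG_HOME") or os.path.join(env.get("HOME", ""), ".config")]
    return [d for d in dirs if d]  # drop unset entries
```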
- If models are missing, navigate to Copilot settings -> Models tab and click "Refresh Built-in Models".
- Please report any issue you see in the member channel!
Miyo, our new desktop app, has landed!!! Self-Host Mode gets even more powerful! This release brings a powerful semantic search engine running on your desktop (Miyo), self-hosted web search & YouTube support, and major indexing stability improvements.
- 🧠 Miyo Integration — A brand new semantic index backend for smarter, faster search! (thanks to months of hard work by @wenzhengjiang 🔥🔥🔥)
- ✨ Backend + retriever integration: Miyo powers a next-gen retrieval pipeline for Copilot.
- 📄 Document parsing & PDF support: Miyo document parsing with built-in PDF parse-doc integration.
- 🔒 Privacy, dedup & architecture hardening: Comprehensive review findings addressed for a production-ready Miyo experience.
- 🌐 Self-Host Web Search & YouTube — Bring your own Firecrawl/Perplexity for web search and Supadata for YouTube processing, fully self-hosted! No more reliance on external services for these features. (@logancyang)
- 💡 gemini-3 preview models no longer throw "This model does not support images" for image inputs (@logancyang)
- 🥷 Support saving chat in hidden folders (@logancyang)
- 🎨 Improved badge UI: Close button moved to a clean left icon overlay on hover, addressing our user feedback. (@zeroliu)
- ⚡ `streamUsage` config for 3rd-party OpenAI-format providers — enables token usage tracking for compatible providers. (@wotan-allfather)
Some of the latest models, such as Gemini 3.1 Pro, have been added to the built-in model list. Don't forget to click "Refresh Built-in Models" to get them!
- If models are missing, navigate to Copilot settings -> Models tab and click "Refresh Built-in Models".
- Please report any issue you see in the member channel!
A patch release with search improvements and bug fixes.
- Improved vault search: Better tag matching with hierarchical support (e.g. searching `#project` also matches `#project/alpha`) and a cleaner, faster search pipeline. (@logancyang)
- New in-chat indexing progress: Indexing progress now shows as a card inside Copilot Chat with a progress bar and pause/resume/stop controls, instead of a popup notice. No more phantom re-indexing on mode switch. (@logancyang)
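Hierarchical tag matching boils down to a prefix rule on path-style tags. A minimal sketch, assuming case-insensitive matching; the plugin's actual matcher may differ:

```python
# Minimal sketch of hierarchical tag matching: a query tag matches itself
# and any of its subtags, but not mere string prefixes like #projects.
def tag_matches(query: str, note_tag: str) -> bool:
    q = query.lstrip("#").lower()
    t = note_tag.lstrip("#").lower()
    return t == q or t.startswith(q + "/")
```

So `tag_matches("#project", "#project/alpha")` holds while `tag_matches("#project", "#projects")` does not, which is the distinction a naive `startswith` check would miss.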
- #2176 Fix ENAMETOOLONG error when Composer creates files with long names @logancyang
- #2174 Fix insert/replace at cursor accidentally including agent reasoning blocks @logancyang
- #2173 Fix phantom re-indexing on mode switch @logancyang
- #2172 Fix search recall for tag queries and short terms @logancyang
- If models are missing, navigate to Copilot settings -> Models tab and click "Refresh Built-in Models".
- Please report any issue you see in the member channel!
The first version of Self-Host Mode is finally here! You can simply toggle it on at the bottom of Plus settings, and your reliance on the Copilot Plus backend is gone (Believer required)!
In the next iterations, Self-Host Mode will let you configure your own web search and YouTube services and integrate with our new standalone desktop app for more powerful features. Stay tuned!
- 🚀 Autonomous Agent Evolution — The agent experience gets a major upgrade this release!
- ✨ New reasoning block: The new reasoning block replaces the old tool call banners for a cleaner and smoother UI in agent mode!
- 🔧 Native tool calling: We moved to native tool calling from the XML-based approach for a more reliable tool call experience. Nowadays more and more models support native tool calling, even local models!
- Brand new Editor "Quick Ask" Floating Panel! Select text in the editor and get an inline AI floating panel for quick questions — with persistent selection highlights so you never lose your place! (@wyh)
- Twitter/X thread processing: Mention a tweet thread URL in chat and Copilot will fetch the entire thread! (@logancyang)
- Modular context compaction architecture — a cleaner, more extensible design for how Copilot manages long contexts. (@logancyang)
- LM Studio and Ollama reasoning/thinking token support — thinking models in LM Studio and Ollama now display reasoning output properly. (@logancyang)
- Major search improvements: better recall with note-diverse top-K scoring, and a new "Build Index" button replacing the warning triangle in Relevant Notes for a clearer UX. (@logancyang)
👨‍💻 Known Limitations: Agent mode performance varies by model. Recommended models: Gemini Pro/Flash (copilot-plus-flash), Claude 4.5+ models, GPT-5+ and mini, Grok 4 and Grok 4 Fast. Many OpenRouter open-source models work too, but performance can vary a lot.
More details in the changelog:
- #2139 Add Editor "Quick Ask" Floating Panel with Persistent Selection Highlights @Emt-lin
- #2146 Address quick ask refinements @logancyang
- #2149 Agent UI/UX Improvements @logancyang
- #2123 Migrate to native tool call in Plus and Agent modes @logancyang
- #2159 Implement modular context compaction architecture @logancyang
- #2155 Miyo Integration Phase 1: abstract semantic index backend @wenzhengjiang
- #2161 Add twitter4llm support for Twitter/X URL processing @logancyang
- #2151 Add reasoning/thinking token support for LM Studio @logancyang
- #2141 Add PatternListEditor component for include/exclude settings @Emt-lin
- #2164 Audit context envelope, tag alignment, artifact dedup, and logging @logancyang
- #2166 Update builtin models to latest versions across all providers @logancyang
- #2167 Remove HyDE query rewriting from HybridRetriever @logancyang
- #2168 Replace warning triangle with Build Index button in Relevant Notes @logancyang
- #2147 Update Ollama support @logancyang
- Show Self-Host Mode section to all users with disabled toggle for non-lifetime @logancyang
- #2117 Fix: increase grep limit for larger vaults and unify chunking @logancyang
- #2137 Fix: prevent arrow keys from getting stuck in typeahead with no matches @ZeroLiu
- #2140 Fix: GitHub Copilot mobile CORS bypass and auth UX improvements @Emt-lin
- #2153 Fix LM Studio chat with only ending think tag @logancyang
- #2157 Fix: improve mobile keyboard/navbar CSS scoping and platform detection @Emt-lin
- #2160 Fix: remove tiktoken remote fetch from critical LLM path @logancyang
- #2165 Fix search recall with note-diverse top-K and chunk-aware scoring @logancyang
- If models are missing, navigate to Copilot settings -> Models tab and click "Refresh Built-in Models".
- Please report any issue you see in the member channel!
Our first release in 2026 has some long-awaited upgrades!
- Copilot can read web tabs in Obsidian now!! 🚀 With the new builtin YouTube and web clipper slash commands (use the "generate default" button under the Commands settings tab), you can get beautiful clips with mindmaps in just one prompt! 🤯
- We now have a new custom system prompt system where every system prompt is stored as a markdown file. You can add and switch your custom system prompt in the Advanced settings tab or just above the chat input via the new gear icon!
- As requested, we now have a new side-by-side diff view for composer edits! You can toggle between the inline diff view and side-by-side when a diff is displayed.
- New auto compact when the attached context is too long and overflows your model's context window. You can set the token threshold; the default is 128k tokens. If you want it to be less aggressive, set it to 1M tokens.
- OpenRouter embedding models are supported! You can simply add them using the OpenRouter provider in the embedding model table.
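Conceptually, auto compaction is a threshold check over the estimated context size: once it overflows, older turns are folded into a summary while recent ones are kept verbatim. A hedged sketch where only the 128k default comes from the release notes; the function names and summary stub are illustrative, not Copilot's actual implementation:

```python
# Illustrative sketch of threshold-based context compaction, not the
# plugin's actual code. The 128k default matches the release notes.
def compact(messages, count_tokens, threshold=128_000, keep_recent=4):
    """Fold older messages into one summary stub once tokens exceed threshold."""
    total = sum(count_tokens(m) for m in messages)
    if total <= threshold or len(messages) <= keep_recent:
        return messages
    older, recent = messages[:-keep_recent], messages[-keep_recent:]
    # A real implementation would summarize `older` with the LLM; we stub it.
    summary = {"role": "system", "content": f"[compacted {len(older)} earlier messages]"}
    return [summary] + recent
```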
There are a lot more upgrades, including a significant improvement in index-free search, better sorting of chat history and projects, composer auto-accept toggle in the chat input menu (the 3 dots), a new LLM provider "GitHub Copilot", etc. Huge shoutout to @Emt-lin for the significant contributions!
More details in the changelog:
- #2110 Add GitHub Copilot integration with improved robustness @Emt-lin
- #2113 Add streaming support for GitHub Copilot @Emt-lin
- #1969 Add comprehensive system prompt management system @Emt-lin
- #2098 Enhance Model Settings with Local Services and Curl Command Support @Emt-lin
- #2096 Add Web Viewer bridge for referencing open web tabs in chat @Emt-lin
- #2112 Support OpenRouter embeddings @logancyang
- #2106 Implement compaction with adjustable threshold and loading messages @logancyang
- #2108 Simplify diff views to side-by-side and split modes with word-level highlighting @wenzhengjiang
- #2087 Add file status and think block state indicators @Emt-lin
- #2077 Add recent usage sorting for chat history and project list @Emt-lin
- #2076 Add auto-accept edits toggle in chat control setting @wenzhengjiang
- #2003 Refactor model API key handling and improve model filtering @Emt-lin
- #2073 Bring back toggle for inline citation @logancyang
- #2081 Update ApiKeyDialog layout for better visibility @Pleasurecruise
- #2115 Adjust settings @logancyang
- #2114 Fix default indicator and slash command @Emt-lin
- #2109 Fix dependencies @logancyang
- #2099 Always process think blocks regardless of current model selection @Emt-lin
- #2100 Fix view-content padding for different display modes @Emt-lin
- #2101 Fix search v3 ranking @logancyang
- If models are missing, navigate to Copilot settings -> Models tab and click "Refresh Built-in Models".
- Please report any issue you see in the member channel!
It's our 100th release!! 🚀 This release includes
- Fixed a critical bug that made the UI laggy in long conversations
- A major Relevant Notes algorithm improvement
- A big step toward self-host mode by deprecating several modules
More details in the changelog:
- #2073 Bring back toggle for inline citation @logancyang
- #2071 Clean up dead code and update readme for privacy disclosure @logancyang
- #2070 Enhance error handling in BaseChainRunner @logancyang
- #2069 Deprecate IntentAnalyzer @logancyang
- #2063 Improve new user onboarding by removing notice on missing api key @logancyang
- #2052 Improve relevant note search algorithm @zeroliu
- #2049 Add path to variable_note format and reorder elements @wenzhengjiang
- #2072 Prevent orphaned spinners in agent @logancyang
- #2038 Revert "Improve onboarding by removing the popups … #2015" @logancyang
- If models are missing, navigate to Copilot settings -> Models tab and click "Refresh Built-in Models".
- Please report any issue you see in the member channel!
This release includes
- Significant enhancements to AWS Bedrock support
- A new automatic text-selection-to-chat-context feature (off by default, under the Basic settings tab)
- Better user experience with Composer: skip confirmation with an explicit instruction
- Reduced popups during onboarding
More details in the changelog:
- #2023 Enable agent by default @logancyang
- #2018 Add auto selection to context setting @logancyang
- #2017 Implement auto context inclusion on text selection @logancyang
- #2015 Improve onboarding by removing the popups @logancyang
- #2011 Update bedrock model support @logancyang
- #2008 Add anthropic version required field for bedrock @logancyang
- #2010 Multiple UX improvement @zeroliu
- #2002 Enhance writeToFile tool with confirmation option @wenzhengjiang
- #2014 Update log file @logancyang
- #2007 Add AWS Bedrock cross-region inference profile guidance @vedmichv
- #2016 Fix thinking model verification @logancyang
- #2024 Do not show thinking if reasoning is not checked @logancyang
- #2012 Fix bedrock model image support @logancyang
- #2001 Fix template note processing @zeroliu
Release time again 🎉 We are ramping up to reach our big goals sooner! Some major changes:
- 🫳 Drag-n-drop files from file navbar to Copilot Chat as context!
- 🧠 Revamped context management system that saves tokens by maximizing token cache hit
- 📂 Better context note loading from saved chats
- ↩️ New setting under Basic tab to set the send key - Enter / Shift + Enter
- 🔗 Embedded note `![[note]]` now supported in context
More details in the changelog:
- #1996 Support Tasks codeblock in AI response @logancyang
- #1995 Support embedded note in context @logancyang
- #1988 Update Corpus-in-Context and web search tool guide @logancyang
- #1979 Add SiliconFlow support for chat and embedding models @qychen2001
- #1982 Simplify log file @logancyang
- #1968 Add configurable send shortcut for chat messages @Emt-lin
- #1973 Integrate ProjectChainRunner and ChatManager with new layered context @logancyang
- #1971 Context revamp - Introduces layered context handling @logancyang
- #1964 Support drag-n-drop files from file navbar @zeroliu
- #1962 Prompt Improvement: Use getFileTree to explore ambiguous notes and folders @wenzhengjiang
- #1963 Stop condensing history in plus nonagent route @logancyang
- #1997 Enhance local search guidance prompt @logancyang
- #1994 Fixes rendering issues in saved chat notes when model names contain special characters @logancyang
- #1992 Fix HyDE calling the wrong model @logancyang
- #1976 Fix ENAMETOOLONG @logancyang
- #1975 Fix indexing complete UI hanging @logancyang
- #1977 Fix thinking block duplication text for openrouter thinking models @logancyang
- #1987 Focus on click copilot chat icon in left ribbon @logancyang
- #1986 Focus to chat input on opening chat window command @logancyang
This patch release 3.1.1 packs a punch 💪 with some significant upgrades and critical bug fixes.
- OpenRouter thinking models are supported now! As long as "Reasoning" is checked for a reasoning model from OpenRouter, the thinking block will render in chat. If you don't want to see it, simply uncheck "Reasoning" to hide it.
- Copilot can see Dataview results in the active note! 🔥🔥🔥 Simply add the active note with dataview queries to context, and the LLM will see the executed results of those queries and use them as context!
- New model provider Amazon Bedrock added! (We only support API key and region settings for now; other ways of accessing Bedrock are not supported.)
More details in the changelog:
- #1955 Add bedrock provider @logancyang
- #1954 Enable Openrouter thinking tokens @logancyang
- #1942 Improve custom command @zeroliu
- #1931 Improve error handling architecture across chain runners @Emt-lin
- #1929 Add CRUD to Saved Memory @wenzhengjiang
- #1928 Enhance canvas creation spec with JSON Canvas Spec @wenzhengjiang
- #1923 Turn autosaveChat ON by default @wenzhengjiang
- #1922 Sort notes in typeahead menu by creation time @zeroliu
- #1919 Implement tag list builtin tool @logancyang
- #1918 Support dataview result in active note @logancyang
- #1914 Turn on memory feature by default @wenzhengjiang
- #1957 Fix ENAMETOOLONG error on chat save @logancyang
- #1956 Enhance error handling @logancyang
- #1950 Fix new note (renamed) not discoverable in Copilot chat @logancyang
- #1947 Stop rendering dataview result in AI response @logancyang
- #1927 Properly render pills in custom command @zeroliu
3.1.0 finally comes out of preview!! 🎉🎉🎉 This release introduces significant advancements in chat functionality and memory management, alongside various improvements and bug fixes.
- Brand New Copilot Chat Input: A completely redesigned chat input! This is a huge update we introduced after referencing all the industry-leading solutions.
- Enhanced Context Referencing: A new typeahead system allows direct referencing of notes, folders, tags, URLs, and tools using familiar syntax like `@`, `[[`, `#`, and `/`.
- Interactive "Pills": Referenced items appear as interactive pills for a cleaner interface and easier management. No tripping over typos again!
- Long-Term Memory (plus): A major roadmap item making its debut, this feature allows Copilot to reference recent conversations and save relevant information to long-term memory. Memories are saved as `.md` files in the `copilot/memory` directory by default (configurable), allowing for inspection and manual updates.
- Enable "Reference Recent Conversation" and "Reference Saved Memory" in Plus settings
- AI can see a summary of recent chats
- AI can save and reference relevant info to long-term memory on its own
- Option to manually trigger save by asking the agent or using the new `@memory` tool
- Memories saved as `.md` files under `copilot/memory` by default
- Users can inspect or update memories as they like
- Note Read Tool (plus agent mode): A new built-in agentic tool that can read linked notes when necessary.
- Token Counter: Displays the number of tokens in the current chat session's context window, resetting with each new chat.
- Max-Token Limit Warning: Alerts users when AI output is cut off due to a low token limit in user settings.
- YouTube Transcript Automation (plus): YouTube transcripts are now fetched automatically when a YouTube URL is entered in the chat input. A new command, `Copilot: Download YouTube Transcript`, is available for raw transcript retrieval.
- Projects Mode Enhancements (plus): Includes a new Chat History Picker and an enhanced progress bar.
- Backend & Tooling:
- Optimized agentic tool calls for smoother operation
- Migration of backend model services.
- Better search coverage when Semantic Search toggle is on.
- Better agent debugging infra
- The `@pomodoro` and `@youtube` tools have been removed from the tool picker.
- (plus) Sentence and word autocomplete features are temporarily disabled due to unstable performance, with plans to reintroduce them with user-customizable options.
- Fix random blank screen on Copilot Chat UI
- Addressed issues with extracting response text, mobile typeahead menu size, chat crashes, tool call UI freezes, and chat saving.
- Fixed illegal saved chat file names and improved image passing with `copilot-plus-flash`.
- Avoided unnecessary index rebuilds upon semantic search toggle changes.
- Ensured autonomous agent workflows use consistent tool call IDs and helper orchestration.
- Resolved issues with dropdown colors, badge borders, search result numbers, folder context, and spaces in typeahead triggers.
- Fix model addition in the "Set Keys" window; "Verification" is no longer required
- Fix verification of certain Claude models (it used to complain about top_p -1; now it works)
- If models are missing, navigate to Copilot settings -> Models tab and click "Refresh Built-in Models".
- Users are encouraged to report any issues in the pre-release channel.
This release has some big changes despite being a patch version. Notable changes:
- Introducing Inline Citations! Now any vault search response has inline citations and a collapsible sources section below the AI response. You have the option to toggle it off in QA settings. (This feature is experimental, if it's not working please report back!)
- Implement Log File: you can now share the Copilot log from the Advanced settings, no more dev console!
- Removed User / Bot icons to save space in the Copilot Chat UI
- Add OpenRouter GPT 4.1 models and grok-4-fast to Projects mode
- AI-generated titles for saved chats are now optional via a toggle in the Basic settings
- Add new default copilot/ parent folder for saved conversations and custom prompts
- Embedding model picker is no longer hidden under the QA settings tab
Detailed changelog:
- #1838 Update sources styling @logancyang
- #1837 Drop user and bot icons to save space and add shade to user message @logancyang
- #1813 Add mobile-responsive components for settings @Emt-lin
- #1832 Add OpenRouter GPT-4.1 models to projects mode @logancyang
- #1831 Refactor active note inclusion and index event handling to respect setting @logancyang
- #1821 Implement inline citation @logancyang
- #1829 Agent Mode: Map copilot @command to builtin agent tools @wenzhengjiang
- #1817 Conditionally initialize VectorStoreManager @logancyang
- #1816 Ensure nested folder paths exist when enhancing folder management @logancyang
- #1811 Make AI chat title optional @logancyang
- #1810 Move context menu and markdown image handling settings @logancyang
- #1809 Show embedding model @logancyang
- #1805 Add search explanation table in log @logancyang
- #1804 Implement log file @logancyang
- #1788 Only scroll to bottom when user messages are added @zeroliu
- #1840 Adjust vertical positioning in ModelTable component @logancyang
- #1830 Ensure proper QA exclusion on copilot data folders @logancyang
- #1827 Fix chat crash issue @zeroliu
- #1796 Support creating new folders in composer tools @wenzhengjiang
- #1795 Add safe area bottom padding to view content @Emt-lin
- #1793 Fix mobile embedded image passing @logancyang
- #1787 Improve loading state management in project context updates @Emt-lin
- #1786 Optimize modal height and close button display on mobile @Emt-lin
- #1778 Improve regex for composer codeblock @wenzhengjiang
- #1775 Switch to the new file when creating files with composer tools. @wenzhengjiang
- #1776 Fix url processing with image false triggers @logancyang
- #1770 Fix chat input responsiveness @zeroliu
- #1773 Fix canvas parsing in writeToFile tool @wenzhengjiang
- Fix a critical bug that stopped [[note]] references from working in free chat mode after the context menu was introduced in v3.
- Optimize the replace writer tool
- Add a MSeeP security badge
We are thrilled to announce the official release of Copilot for Obsidian v3.0.0! After months of hard work, this major update brings a new era of intelligent assistance to your Obsidian vault, focusing on enhanced AI capabilities, a new search system, and significant user experience improvements.
Image support and the chat context menu are available for free users now! As long as your model supports vision, you can check the vision box and send image(s) to it.
We've completely reimagined how Copilot finds notes in your vault, making the search feature significantly more intelligent, robust, and efficient.
- Smart Index-Free Search: Search now works out-of-the-box without requiring an index build, eliminating index corruption issues.
- Enhanced Relevance: Copilot leverages keywords from titles, headings, tags, note properties, Obsidian links, co-citations, and parent folders to find relevant notes.
- Optional Semantic Engine: For semantic understanding, you can enable Semantic Search under QA settings, which uses an embedding index, same as before.
- Memory Efficient: Uses minimal RAM; you can tune the limit under QA settings.
- Privacy First: The search infrastructure remains local; no data leaves your device unless you use an online model provider.
- New QA Settings:
- The embedding model is moved here from the Basic tab.
- Lexical Search RAM Limit: Control RAM usage for index-free search, allowing optimization for performance or memory constraints.
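To give a feel for how index-free lexical search can combine signals like titles, headings, and tags, here is a toy sketch. This is purely illustrative and is not Copilot's actual algorithm; the field names and weights are assumptions made up for the example.

```python
# Toy keyword-based note scoring, NOT Copilot's actual algorithm.
# Field names and weights are hypothetical assumptions for illustration.
def lexical_score(query_terms, note):
    """Score a note by weighted keyword overlap across its fields."""
    weights = {"title": 3.0, "headings": 2.0, "tags": 2.0, "body": 1.0}
    score = 0.0
    for field, weight in weights.items():
        words = set(note.get(field, "").lower().split())
        score += weight * sum(1 for t in query_terms if t.lower() in words)
    return score

notes = [
    {"title": "Meeting notes", "headings": "agenda", "tags": "work", "body": "project deadline"},
    {"title": "Recipe", "headings": "steps", "tags": "cooking", "body": "flour sugar"},
]
ranked = sorted(notes, key=lambda n: lexical_score(["project", "meeting"], n), reverse=True)
```

Because scoring like this only needs the note text itself, no prebuilt vector index is required, which is why there is nothing to corrupt.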
Transform your inline editing workflow with the brand new "Copilot: trigger quick command." This feature replaces the legacy "apply adhoc custom prompt" and allows you to insert quick prompts to edit selected blocks inline, integrating seamlessly with your custom command workflow. Assigning it to a hotkey like Cmd (Ctrl) + K is highly recommended!
Experience a new level of AI interaction with the Autonomous Agent. When enabled in Plus settings, your Copilot can now automatically trigger tool calls based on your queries, eliminating the need for explicit @tool commands.
- Intelligent Tool Calling: The agent can automatically use tools like vault search, web search, composer and YouTube processing to fulfill your requests.
- Tool Call Banner: See exactly which tools the agent used and their results with expandable banners.
- Configurable Tools: Gain fine-grained control by enabling or disabling specific tools that the agent can call (Local vault search, Web search, Composer operations, YouTube processing) in the Plus settings.
- Max Iterations Control: Adjust the agent's reasoning depth (4-8 iterations) for more complex queries.
- Supported Models: Optimized for copilot-plus-flash (Gemini 2.5 models), Claude 4, GPT-4.1, GPT-4.1-mini, and now GPT-5 models. (Note: Agent mode performs best with Gemini models, followed by Claude and GPT. Performance can vary a lot with other models.)
- Control Remains Yours: For more control, turn the agent toggle off. Vault search and web search are conveniently available as toggle buttons below the chat input.
- Tool Execution Banner: Visual feedback when the agent uses tools.
- Better Tool Visibility: Tool toggle buttons in chat input when the agent is off (vault search, web search, composer).
- Improved Settings UI: Dedicated "Agent Accessible Tools" section with clear framing.
- ChatGPT-like Auto-Scroll: Chat messages now auto-scroll when a new user message is posted.
- Image Support: Improved embedded image reading, no longer requiring the "absolute path" setting for same-title disambiguation. Supports markdown-style embedded image links.
- AI Message Regeneration: Fixed issues with AI message regeneration.
- Tool Result Formatting: Enhanced formatting for tool results.
- UI Responsiveness: Better UI responsiveness during tool execution.
- Context Menu: Moved context menu items to a dedicated "Copilot" submenu.
- Model Parameters: Top P, frequency penalty, verbosity, and reasoning effort model parameters are now optional and can be toggled manually.
- Project Mode Context UI: A new progress bar indicates when project context is loading, with status visible via the context status icon.
- Embedding Models: Gemini embedding 001 is added as a built-in embedding model. The embedding model picker is now under the QA tab.
- OpenRouter: Now the top provider in settings.
Huge thanks to all our contributors and users, Copilot for Obsidian is nothing without its community! Please provide feedback if you encounter any issues.
Adding GPT-5 series models as built-in models, fresh out of the oven! Supports the new parameters reasoning_effort and verbosity. To see them, you may have to click "Refresh Builtin Models" under your chat model table in Copilot settings.
You can also add OpenRouter GPT-5 models such as openai/gpt-5-chat as a Custom Model with the OpenRouter provider.
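For readers curious what those parameters look like on the wire, here is a minimal sketch of an OpenAI-style chat completions payload. The helper function is hypothetical; only the parameter names reasoning_effort and verbosity come from the release note, and the accepted values shown are assumptions.

```python
# Illustrative sketch of an OpenAI-style chat payload using the new GPT-5
# parameters. build_chat_payload is a hypothetical helper, not plugin code.
def build_chat_payload(model, prompt, reasoning_effort="medium", verbosity="low"):
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "reasoning_effort": reasoning_effort,  # assumed values: "low" | "medium" | "high"
        "verbosity": verbosity,                # assumed values: "low" | "medium" | "high"
    }

payload = build_chat_payload("gpt-5", "Summarize my meeting notes.")
```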
This is an unscheduled release to add GPT-5. Copilot v3 is under construction and will be released officially very soon, stay tuned!
Yet another quick release fixing a few bugs: fix composer canvas codeblock, update copilot-plus-small (it hasn't been stable recently, should be stable now after a complete reindex)
- #1621 Exclude copilot folders from indexing by default @logancyang
- #1620 Disallow file types in context @logancyang
- #1619 Fix copilot-plus-small @logancyang
- #1617 Fix composer canvas codeblock @wenzhengjiang
- If you find models missing in any model table or dropdown, go to Copilot settings -> Models tab, find "Refresh Built-in Models" and click it. If it doesn't help, please report back!
- For @Believer and @poweruser who are on a preview version, you can now use BRAT to install official versions as well!
Another quick one fixing a default model reset issue introduced in v2.9.2.
Fixed a / command mistrigger issue; it now requires a preceding space to trigger.
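One plausible sketch of such a trigger rule (not the plugin's actual implementation): fire only when the slash sits at the start of the input or right after whitespace, so paths like a/b don't mistrigger.

```python
import re

# Illustrative trigger rule, not Copilot's actual code: a slash command
# fires only at the start of the input or after whitespace.
SLASH_TRIGGER = re.compile(r"(?:^|\s)/(\w*)$")

def slash_query(text):
    """Return the partial command being typed, or None if no trigger."""
    m = SLASH_TRIGGER.search(text)
    return m.group(1) if m else None
```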
Added a rate limit to our Projects mode file conversion due to heavy load (some users have been passing 10k-100k pages of PDFs repeatedly); right now the limit is set to (50 or 100MB of non-markdown docs) per 3 hours per license key.
- #1603 Add Projects rate limit UI change @logancyang
- #1602 Update file upload guidelines and rate limit information @logancyang
- #1600 Fix slash trigger @logancyang
- #1599 Fix default model reset @logancyang
- If you find models missing in any model table or dropdown, go to Copilot settings -> Models tab, find "Refresh Built-in Models" and click it. If it doesn't help, please report back!
A quick patch on top of v2.9.1. You no longer need to manually @youtube to get a transcript; simply include the YouTube URL(s) in your chat message and their transcripts will be available in the context (@youtube <url> still works). Another critical fix is for free users: no more license key check popup if you happen to have autocomplete on.
Small UX improvement from our community contributor: improved message editing; autosave on current chat at every message to avoid loss of data in case of an app crash.
Added (free) to free modes.
- #1594 Implement auto youtube tool @logancyang
- #1589 Improved message editing UX by adding Escape key cancellation and removing auto-save on blur @Mathieu2301
- #1593 Fix auto index trigger @logancyang
- #1592 Disable autocomplete by default and prevent license key popup for free user @logancyang
- If you find models missing in any model table or dropdown, go to Copilot settings -> Models tab, find "Refresh Built-in Models" and click it. If it doesn't help, please report back!
- For @Believer and @poweruser who are on a preview version, please back up your current <vault>/.obsidian/plugins/copilot/data.json, reinstall the plugin, and copy the data.json back to safely migrate to this update
One big change in this release is the migration of Copilot custom commands, they are now saved as notes, same as custom prompts. We are unifying both into one system. Now you can edit them in Copilot settings under the Commands tab, or directly in the note, to enable them in the right-click menu or via / slash commands in chat. Please let us know if you have any issues with this migration!
- OpenRouter Gemini 2.5 models added as builtin models, available in Projects mode as well! (Please click "Refresh Builtin Models" under the model table if you don't see them)
- Every model is configurable with its own parameters such as temperature, max tokens, top P, frequency penalty. Global params are removed to avoid confusion.
- Projects mode now has a new context UI! It's much easier to set and check the files under a project now!
- Introduced a new Copilot command "Add Selection to Chat Context" that adds the selected text to the chat context menu in Copilot Chat. It's also available in the right-click menu. (If you are familiar with Cursor, you can also assign this command the cmd + shift + L shortcut)
- Files such as PDFs and EPUBs that are converted to markdown in Projects mode are now cached as markdown; find them under <vault>/.copilot/file-content-cache/. (Moving them out into the vault makes them indexable by Copilot, but keep in mind it may blow up your index size!)
- The slash command / can now be triggered anywhere in the chat input (it used to trigger only when the input was empty), even mid-text!
- Various bug fixes.
- #1584 Enable model params for copilot-plus-flash @logancyang
- #1580 Update max token default description in setting page @wenzhengjiang
- #1576 Add support for selected text context in chat component @logancyang
- #1575 Implement slash command detection and replacement in ChatInput @logancyang
- #1572 Update file cache to use markdown instead of json @logancyang
- #1571 Update ChatModels and add new OpenRouter models @logancyang
- #1570 Update dependencies and enhance project context modal @logancyang
- #1566 Enhance abort signal in chains @logancyang
- #1562 Support editing all parameters individually for each model @Emt-lin
- #1551 Support project context preview @Emt-lin
- #1549 Merge custom command with custom prompts @zeroliu
- #1581 Composer: fix compose block for empty note @wenzhengjiang
- #1568 Fix word completion triggers @logancyang
- #1560 Remove think tag for insert into note @logancyang
- #1552 Fix: Custom model verification, api key errors @Emt-lin
- v2.9.1 has a custom commands migration; any custom commands that failed the migration are placed under an "unsupported" subfolder in your custom prompt folder. Please review the reason each failed and update them to keep them supported.
- If you find models missing in any model table or dropdown, go to Copilot settings -> Models tab, find "Refresh Built-in Models" and click it. If it doesn't help, please report back!
- For @Believer and @poweruser who are on a preview version, please back up your current <vault>/.obsidian/plugins/copilot/data.json, reinstall the plugin, and copy the data.json back to safely migrate to this update
Massive update to Copilot Plus!!🔥🔥🔥
Announcing our "3 milestones" (previously in believer-exclusive preview) in the brand new v2.9.0:
A new Plus mode where you can define a combo of your custom instruction, model, parameters and context as individual workspaces, powered by models with a 1M-token context window and context caching.
This is different from @vault: you can ask much more abstract questions here, such as "find common patterns/most important insights". Supports 20+ file types including PDF, EPUB, PPTX, DOCX, CSV, and many more.
(Since it's still in Alpha, the models still require your own API key, so keep an eye on your model provider's dashboard to avoid a surprise bill! The context processing is on us: our servers process those papers and books for you so they're ready for AI consumption.)
Edit or create notes by just chatting with Copilot. Trigger it by explicitly including @composer in your message. The AI will suggest an edit, you click Preview/Apply, and a diff view shows up for you to accept the edits by line or in bulk.
Composer supports canvas, too!
Suggests the next words based on the content in your vault (toggle Allow Additional Context in Plus mode to pull more relevant context from your vault); supports most languages
- Sentence completion: suggests possible next words
- Word completion: completes partial words based on existing words in your vault
You can toggle them on or off separately, e.g. keep only word completion if you find sentence completion distracting.
New Plus tab in Copilot settings
- Implement chat history picker button, render Save Chat as Note conditionally when Autosave is off
- Toggle to always include current file in the context by default (Plus setting tab)
- Autocomplete settings, customizable key binding
- A new Refresh Built-in Models button below the Models table
- Claude 4 and 3.7 sonnet thinking tokens support
- Add "Force rebuild index" to the 3-dots menu at the top right of the chat input
- "Save Chat as Note" does not open the saved note automatically anymore, as requested by users
- New Chat is now a copilot command assignable with a hotkey
- Quick add for models in the API key setting page: it now grabs the list of all available models from the provider for you to pick from.
- Custom Prompts Sort Strategy in Advanced settings
If you find models missing in any model table or dropdown, go to Copilot settings -> Models tab, find "Refresh Built-in Models" and click it. If it doesn't help, please report back!
This is a joint effort by the Copilot team: @wenzhengjiang @zeroliu @Emt-lin @logancyang. It's impossible to achieve without the support and awesome feedback from our great community. We have a lot more upgrades coming in our pipeline, with some massive changes to the free features as well. Please stay tuned!
GPT 4.1 models and o4-mini are supported, and xAI is added as a provider! Another big update is canvas support! You can add a canvas to your context via a direct [[]] reference or the + button in your chat context menu! Copilot can even understand the group structure!
- #1461 Implement canvas adaptor @logancyang
- #1459 Support gpt 4.1 series, o4-mini and grok 3 @logancyang
- #1463 Switch insert and copy buttons and add more spacing @logancyang
- #1460 Add a toggle to turn custom prompt templating off @logancyang
- #1421 Ollama ApiKey support @sargreal
- #1441 refactor: Optimize some user experiences. @Emt-lin
- #1446 Improve custom command (v3) @zeroliu
- #1436 Pass project state to broca call @wenzhengjiang
- #1415 Add update notification @zeroliu
- #1414 Update broca requests @zeroliu
- #1385 Fix Azure OpenAI chat model baseURL construction logic. @doin4
- #1450 fix: Add a new line when press the Enter key on mobile. @Emt-lin
- #1457 Fix image in note logic @logancyang
In this release, multimodal LLMs can see the images in your note context! Official DeepSeek is added as a chat model provider, and streaming of its thinking tokens is supported as well! There are some other usability upgrades and bug fixes as well, check the change log for more details.
- #1404 Add DeepSeek official API provider and support thinking stream @logancyang
- #1398 Implement passing of images in note to LLM @logancyang
- #1391 Fix wikilinks in codeblocks @logancyang
- #1348 Improve command usability @zeroliu
- #1405 Encrypt embedding model api keys @logancyang
- #1397 Fix custom prompt with dash in its title @logancyang
- #1396 Fix gemini table generation and index integrity delay @logancyang
- #1351 Use simplified File Tree when it's larger than 0.5MB @wenzhengjiang
- #1380 Add plus-exclusive and believer-exclusive checks in embeddingManager @logancyang
Introducing User Custom Inline Commands! You can find them in the new Command tab in Copilot settings. Once you add your own commands, they appear in your right-click menu!
We decided to remove the old 2-level "Translate" and "Tone" commands since it's impossible to make the same prompt work for all models. You are encouraged to make your own commands using the builtin commands as examples; for translation, for instance, it's better to make your own command for the particular languages you need. (Note: this is inline-focused and separate from Custom Prompts; you can still use custom prompts with slash or Copilot commands as before.)
This release also has many UI/UX improvements: a chat input with better "generating" and "stop" display, better Relevant Notes display, better vault structure understanding in Plus mode, better vault search support for partial match on note titles, etc.
- #1332 Implement replace at cursor @logancyang
- #1329 Update auto index logic @logancyang
- #1328 Search improvements @logancyang
- #1327 Add description to custom command @zeroliu
- #1316 Enhance custom command @zeroliu
- #1298 Enhancement: Add file counts to file tree and remove file list from large file trees @wenzhengjiang
- #1284 File Tree: support exclusion and inclusion rules and simplify JSON structure @wenzhengjiang
- #1321 Add tooltip when exclusion/inclusion text overflows @zeroliu
- #1319 Improve relevant note UI @zeroliu
- #1318 Improve chat input UI @zeroliu
- #1305 Some UX optimizations @Emt-lin
- #1330 Add async mutex for thread-safe database upsert operations @logancyang
- #1306 Fix: Replacing Node's Buffer with npm's buffer package. Improves mobile compatibility @Emt-lin
- #1331 Fix: Do not stringify tool_output if it is already a string @wenzhengjiang
- #1320 Fix plus mode check @zeroliu
- #1303 Fix Azure OpenAI Instance Name not used for URL @tacticsiege
- Copilot Chat now has a collapsible block for the thought process of thinking models! 🔥
- Copilot Plus can answer questions about your vault structure starting from this release!
- A new UI for QA inclusion/exclusion filters. It helps avoid malformed inputs and provides a more streamlined experience.
- Copilot Plus should work on Android devices without issue now!
- #1266 Add support for rendering <think> sections and fix RAG with reasoning models @logancyang
- #1249 Implement getFileTree intent @wenzhengjiang
- #1261 Enhance inclusion/exclusion patterns settings @zeroliu
- #1264 Optimize the style of model item display. @Emt-lin
- #1275 Refactor note reference and fix dupe title issue @logancyang
- #1274 Show Vault Too Large notice @logancyang
- #1273 Refresh index should reindex files missing embeddings @logancyang
- #1272 Fix Azure OpenAI chat model @logancyang
- #1262 Remove language-specific prompt in command prompts @zeroliu
- #1260 Fix the issue of the safeFetch method not work on Android. @Emt-lin
- #1254 Fix the provider that requires verification. @Emt-lin
OpenAI O1-mini and O3-mini are added as built-in models! 🔥 You can add other O series models with "OpenAI" provider as well (Please confirm your tier with OpenAI and check if you have access to their O series API).
And we have a much better model table in the setting where you can add your own "display name" to your model, mark their capabilities "vision", "reasoning", "websearch", and drag-and-drop reorder them as you like! Thanks to @Emt-lin for the implementation!
Those who used copilot-plus-large to index their vault must do a force re-index to keep it working. We found the provider unstable, so we switched to another provider. As the product matures, there won't be such changes anymore. Sorry for the disruption 🙏
- #1225 Support custom model displayNames and reorderable Model list. @Emt-lin
- #1232 Adding support for Mistral as an LLM provider @o-mikhailovskii
- #1240 Add configurable batch size, update embedding requests per min @logancyang
- #1239 Add ModelCapability enum and capability detection @logancyang
- #1223 feat: update Gemini model names to v2.0 @anpigon
- #1238 Add openai o-series support @logancyang
- #1220 refactor: Improve source links formatting and rendering. @iinkov
- #1207 refactor: optimize the switching experience of the model. @Emt-lin
- #1242 Reduce binary size @zeroliu
- #1243 Fixed apikey not switching in custom model form @Emt-lin
- #1245 Remove custom base URL fallback in YouTube transcript retrieval @logancyang
- #1237 Update copilot-plus-large @logancyang
- #1227 Fix max tokens passing @logancyang
- #1226 fix: Handle undefined activeEmbeddingModels in settings sanitization @logancyang
Gemini 2.0 Flash is fresh out of the oven, and our copilot-plus-flash is using it! Covered by your license key! 🔥
- #1153 Use title format for note titles @iinkov
- #1045 Some user experience optimizations @Emt-lin
- #1206 Add believer exclusive model copilot-plus-large @logancyang
- #1205 Fix button focus color @zeroliu
- #1204 Do not trigger reindexing with matching index. Reenable plus welcome dialog @zeroliu
- #1203 Disable welcome modal @zeroliu
- #1197 Fix non-string tag crashing issue @zeroliu
- #1202 Stop waiting for license check onload @zeroliu
Our FIRST Plus chat model is here!! 🔥🔥🔥 copilot-plus-flash covered by your plus license key. Now, we have a plus chat model and 3 plus-exclusive embedding models available, a truly work-out-of-box experience without the need to bring your own API key! 🚀
- #1150 Add Copilot Plus Flash model for Plus users @logancyang
- #1194 Show newest version at the top of settings @logancyang
- #1193 Implement PDF cache @logancyang
- #1160 Improve Plus user onboarding @zeroliu
- #1157 Add fallback mechanism for YouTube transcript retrieval @logancyang
- #1154 Debounce settings input @zeroliu
- #1151 Catch and show invalid license key error @logancyang
- #1145 Improve test cases for time range @wenzhengjiang
- #1122 Attach plugin version to request headers @wenzhengjiang
- #1148 Avoid full vault scan on incremental indexing @zeroliu
- #1133 Fix UI issues with the textArea component @Emt-lin
- #1125 Fix button color @zeroliu
Enjoy much better image support! Now you can copy and paste images into the chat input in plus mode! And web images are also passed to the model if you include the URLs. A SOTA embedding model copilot-plus-large is added for plus users!
And note that the web search endpoint has been updated, please update to v2.8.2, or your web search @web won't work!
- #1116 Support different kinds of images (web url, local) @logancyang
- #1115 Enable image input for gemini flash 2.0 @logancyang
- #1095 Support copy-paste image @zeroliu
- #1107 Add copilot-plus-large embedding model @logancyang
- #1105 Update websearch endpoint @logancyang
- #1104 Update prompts for Copilot commands @logancyang
- #1096 Add file path to context suggestion @zeroliu
- #1108 Fix user message formatting and wrap codeblock for long lines @logancyang
- #1106 Add time tool tests @logancyang
Chat UI revamp as we move towards a more extensible design to clear the way for more features in the next iterations! Drag-and-drop images to chat input for Plus mode!
- #1074 Chat UI revamp @zeroliu
- #1085 Support drag-and-drop image @zeroliu
- #1059 Add support for customizable conversation filenames @Emt-lin
- #1076 Optimize Embedding model setting UX @Emt-lin
- #1055 Remove old settings UI @Emt-lin
- #1077 Update local copilot instructions for macOS @joshmedeski
- #1090 Fix onboarding db issue and more @logancyang
- Some new users reported seeing a fatal "index doesn't exist" error; this should be fixed now. Just make sure you switch the embedding model to OpenAI and provide the OpenAI API key!
- "Edit custom prompt" command was lost in 2.8.0 but it's back now!
- #1093 Update default conversation filename @logancyang
- #1091 Update settings @logancyang
- #1081 Fix time expressions @logancyang
Another massive update as we are fast approaching the official launch of Copilot Plus!! Completely revamped new Settings page with multiple tabs, a new inline editing experience with Copilot commands! You can also find some handy Copilot commands in your right-click menu!
- #1051 Bump max sources for chunks to 128 @logancyang
- #1053 Show invalid license key only at 403 @logancyang
- #1052 Fix web image display @logancyang
- #1037 Fix youtube tool call @zeroliu
- #1035 Enforce deps check @zeroliu
- #1034 Fix cross platform encryption @logancyang
- If you find your API key not working across desktop and mobile, please re-enter it this time. It should work cross-platform going forward!
Further address the performance issue in Relevant Notes, and show image and clickable note links in AI response. NaN scores from vault search are handled through reranking.
- #1018 Use reranking on NaN chunks @logancyang
- #1017 Handle note and image links in AI response @logancyang
- #1014 Enable react eslint @zeroliu
- #1013 Improve relevant note performance @zeroliu
- #1015 Listen to active note changes @zeroliu
- #1016 Handle NaN scores @logancyang
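To illustrate why NaN scores need special handling: NaN compares false against everything, so sorting on raw scores becomes ill-defined. A minimal sketch of one way to sanitize them and flag the batch for reranking (this is not the plugin's actual rerank logic; the record shape is an assumption):

```python
import math

# Illustrative sketch, not Copilot's actual code: replace NaN scores with a
# neutral placeholder so sorting stays well-defined, and flag for reranking.
def sanitize_scores(results):
    """Return (sorted results, needs_rerank flag) for (path, score) pairs."""
    needs_rerank = False
    cleaned = []
    for path, score in results:
        if math.isnan(score):
            needs_rerank = True
            score = 0.0  # placeholder until a reranker rescores it
        cleaned.append((path, score))
    cleaned.sort(key=lambda r: r[1], reverse=True)
    return cleaned, needs_rerank

cleaned, flag = sanitize_scores([("a.md", 0.9), ("b.md", float("nan")), ("c.md", 0.5)])
```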
This is a quick one to address the performance issue in Relevant Notes, and add a new copilot-plus-multilingual embedding model for Plus users
- #1001 Add copilot plus multilingual embedding model @logancyang
- This enhancement introduces a multilingual embedding model to improve the versatility and accuracy of the copilot's suggestions across different languages.
- #998 Throttle number of links returned @zeroliu
- This fix addresses the issue of excessive link returns, optimizing the performance and relevance of the links provided by the copilot.
HUGE first release in 2025, a New Year gift for all Copilot users - introducing Relevant Notes in Copilot Chat! You can now see the collapsible Relevant Notes section at the top of the chat UI. It uses the same Copilot index you create for Vault QA. "Relevance" is determined by Copilot's own special algorithm, not just vector similarity. The entire feature is developed by our great @zeroliu, one of our top contributors 💪. Enjoy!
- #981 Relevant note new UI @zeroliu, a huge milestone for the Copilot plugin 🚀
- #989 Inspect index @logancyang, new command "Inspect Copilot index by note paths" to check the actual index JSON entries.
- #979 Clean up function args @zeroliu
- #980 Update tailwind color config @zeroliu
- #996 Fix large input scroll @logancyang
- #988 Fix tags in indexing filter @logancyang
- Fix a critical issue in the index partitioning logic.
- Disable auto version check for now.
Happy holidays everyone! Thanks for your support in 2024! The highlight of this update is a MUCH faster indexing process with batch embedding, plus a strong (stronger than OpenAI embedding large) but small embedding model exclusive to Plus users called copilot-plus-small; it just works with a Plus license key! Let me know how it goes!
- #969 Enable batch embedding and add experimental copilot-plus-small embedding model for Plus users @logancyang
- #964 Increase the number of partitions. Skip empty files during indexing @logancyang
- #958 Update system prompt to better treat user language and latex equations @logancyang
- #961 Fix Radix portal @zeroliu
- #967 Fix lost embeddings critical bug @logancyang
- #952 Add a small delay to avoid race conditions @logancyang
A BIG update incoming!
- A more robust indexing module is introduced. Partitioned indexing can handle extremely large vaults now!
- LM Studio has been added as an embedding provider; it's lightning-fast!
- A "Verify Connection" button is added when you add a Custom Model, so you can check if it works before you add it! (This was first implemented by @Emt-lin, updated by @logancyang)
Check out the details below!
- Big upgrade of indexing logic for a more robust UX
- Enable incremental indexing. "Refresh index" now respects inclusion/exclusion filters
- Inclusion filters no longer eclipse exclusion filters
- Add the "Remove files from Copilot index" command that takes the same list format as "List indexed files"
- Add a confirmation modal for actions in settings that lead to reindexing
- Update the max sources setting to 30 per user request. Be warned: a large number of sources may lead to bad answer quality with weaker chat models
- Add metadata to context; now you can directly ask "what files did I create/modify in (time period)"
- Fix safeFetch for 3rd party API with CORS on, including moonshot API and perplexity API, etc.
- Fix time-based queries for some special cases
- Enhance vault search with current time info
- Fix file already exists error for list indexed files
- Fix web search request (safeFetch GET)
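The "what files did I create/modify in (time period)" capability above boils down to filtering file metadata by a time range. A minimal sketch under assumed data shapes (the record format is hypothetical, not Copilot's internal representation):

```python
from datetime import datetime, timedelta

# Illustrative sketch of answering "what files did I modify last week" from
# file metadata. The record shape {"path", "mtime"} is an assumption.
def files_modified_between(files, start, end):
    """Return paths whose modification time falls in [start, end)."""
    return [f["path"] for f in files if start <= f["mtime"] < end]

now = datetime(2024, 11, 18)
files = [
    {"path": "daily/2024-11-12.md", "mtime": datetime(2024, 11, 12)},
    {"path": "daily/2024-10-01.md", "mtime": datetime(2024, 10, 1)},
]
last_week = files_modified_between(files, now - timedelta(days=7), now)
```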
- #916 Refresh VectorStoreManager at setting changes @logancyang
- #918 Brevilabs CORS issue @logancyang
- #917 Clear chat context on new chat @logancyang
- #908 Add setting to exclude copilot index in obsidian sync @logancyang
- #906 Update current note in context at change @logancyang
- #913 Validate or invalidate current model when api key is updated @logancyang
- #912 Fix Index not loaded, add better index checks for a fresh install @logancyang
- #911 Avoid using jotai default store @zeroliu
- #893 New users could not load the plugin @logancyang
Great news, no more "Save and Reload" thanks to @zeroliu ! Settings now save automatically! 🚀🚀🚀
- #890 Implement indexing checkpointing @logancyang
- #886 UX improvements (Fix long titles in context menu, chat error as AI response, etc.) @logancyang
- #882 Add user message shade @logancyang
- #881 Copilot command: list all indexed files in a markdown note @logancyang
- #874 Auto save settings @zeroliu
- Settings now automatically save after changes without requiring manual save and reload!!
- #872 Add New chat confirm modal, restructure components dir @logancyang
- #851 Support certain providers to customize the base URL @Emt-lin
- #850 Fix system message handling for o1-xx models, convert systemMessage to aiMessage for compatibility @Emt-lin
- #880 Append user system prompt instead of override @logancyang
- #846 Fix disappearing note in context menu @logancyang
- #845 Make open window command focus active view @zeroliu
- #887 Fix note cannot be removed bug @logancyang
- #873 Fixed URL mention behavior in Chat mode @logancyang
- #824 Improve settings and chat focus @zeroliu
- #843 Implement Copilot command "list indexed files" @logancyang
- #842 Fix system message for non-openai models @logancyang
- #843 Alpha quick fixes @logancyang
- Fix message edit
- Unblock saveDB
- Skip rerank call if max score is 0
- Fix double indexing trigger at mode switch
Copilot Plus Alpha is here! I've been working on this for a long time! Test license key is on its way to project sponsors and early supporters.
- Time-based Queries: Ask questions like `Give me a recap of last week @vault` or `List all highlights from my daily notes in Oct @vault`. Copilot Plus understands time!
- Cursor-like Context Menu: Enjoy a more intuitive and streamlined context menu specifically designed for Plus Mode. It not only shows note titles but also PDF files and URLs!
- URL Mention Capability: Quickly reference URLs in your chat input. Copilot Plus can grab the webpage in the background!
- Vault Search with Cmd + Shift + Enter: Search your vault with a simple keyboard shortcut; this is equivalent to having `@vault` in your query.
- Dynamic Note Reindexing: The Copilot index is updated at note modify events under Copilot Plus mode (this is not the case in Vault QA basic mode), ensuring your data is always up-to-date.
- Image Support in Chat: Add and send image(s) in your chat for any LLMs with vision support.
- PDF Integration in Chat Context: Easily incorporate PDF files or notes with embedded PDFs in your chat context.
- Web Search Functionality: Access the web directly from your Copilot Plus Mode with `@web`.
- YouTube Transcript: Easy access to video transcripts with `@youtube video_url` in chat.
- #839 Add Copilot Plus suggested prompts @logancyang
- #838 Return YouTube transcript directly without LLM for long transcripts @logancyang
- #835 Introduce Copilot Plus Alpha to testers @logancyang
- #826 Fix delete message in memory @logancyang
- #825 Fix "index not loaded" @logancyang
- #812 Fix model and mode menu side offset @logancyang
- Add setting to optionally disable index loading on mobile to save resources @logancyang
- Use Lucide icons to replace custom SVG icons @zeroliu
- Refactor chat control tooltips @zeroliu
- Sample 2 vault QA prompts for vault QA mode without replacement @logancyang
- Fix suggested prompts responsiveness @zeroliu
- Fix upsert error @logancyang
- #774 Optimize model setting style for mobile devices @Emt-lin 🚀
- #750 Allow entering [[ anywhere in the prompt @zeroliu
- #778 Avoid Gemini SAFETY blocks @logancyang
- #781 Fix garbage collection command @logancyang
- #779 Fix Ollama embedding context length with truncation @logancyang
- Fix issue where files excluded in the user's settings are not excluded when calculating tokens @Emt-lin
- #723 Support exclude files from indexing by name pattern @Emt-lin
- #706 Add default mode in settings so it keeps your mode selection
- #702 Migrate from PouchDB to Orama.
- Now we don't have any dependency that blocks mobile!
- Your new index file is at `.obsidian/copilot-index-<hash>.json`
- Remove Long Note QA mode and the Send Note(s) to Prompt button in Chat mode, since these legacy features are covered by the new experience: Vault QA with note title mention, slash commands, and templating
- #707
- #699
Indexing improvements
- Added a button for pausing and resuming indexing that also shows the exclusion setting
- Added support for exclusion by tags (tags must be in note properties, not the content body, similar to how custom prompt templating works)
Improved QA in this release! Significant upgrades to Vault QA mode coming soon.
- Implement HyDE for Vault QA mode #645
- Add Google embedding model and update langchain #651 by @o-mikhailovskii
- Bug fixes
- System prompt in QA modes #692
- Fix new chat not stopping streaming @Emt-lin
- Fix language identification for changing tone command @Emt-lin
- Fix AI message wrapping
- @Emt-lin: enable Perplexity API with CORS on #673. Related issues:
- #424
- #431
- #661
- Fixes #670
- Internal improvement: pass note to LLM in md format
- Fixes #663
- #665 Messages now have timestamps! Saved conversations have timestamps too.
- #656 @Emt-lin now our custom prompts are sorted with most recently used first
- #659 @logicsec we can tag saved conversations in Copilot settings
- Bug fix for mobile not loading
Some UX improvements
- Enable renaming of custom prompt in Edit Custom Prompt command modal #635
- Revert auto-scroll as it streams behavior to scroll to bottom only when streaming is done, avoid jittery auto-scrolling, and fix up and down arrow key navigation for some corner cases #632
- Fix a bug where cursor is not focused in chat input when Copilot Chat pane is toggled on #593
Welcome first-time contributor @Emt-lin
Another big one!
- Custom prompt template support in Chat!! Now you can just type `/` and bring up the list of custom prompts you have. Selecting one fills it into the chat input box!
- `{activeNote}` added to custom prompt templates! Many people have been asking for this.
- Up and down arrow keys now navigate your user messages! (not persisted, clears at reload)
- Cohere API setting is now in the API settings section instead of QA settings, because we have Command R and R+ as built-in chat models!
- Some UX improvements
- When autosave for conversation is on, the saved convo doesn't open at plugin reload; a notice banner shows up instead.
- When deleting a default model, the default is reset to gpt-4o, the "grand default".
- #619
- #620
- #621
- #626
- #629
Bug fixes
- #608
- #609
- Cohere embedding model name issue
- Add custom prompt without folder created
- Update local copilot guide for new settings
We are migrating off of PouchDB for better Obsidian Sync and mobile support. In this release, your existing custom prompts must be dumped to markdown using the command "Copilot: Dump custom prompts to markdown files". After running it, you can use the Add/Edit/Apply/Delete custom prompt commands as usual.
Please make sure you run it, or you will lose all your old prompts when PouchDB is removed!
- Load Copilot Chat conversation via new command "Copilot: Load Copilot Chat conversation".
- New setting toggle for chat autosave, automatically save your chat whenever you click new chat or reload the plugin.
- Custom prompts saved in markdown
Fixed a self-hosted Ollama issue. #598
- #600
- #602
- #604
Implemented new chat buttons, now:
- User has Copy, Edit
- AI has Copy, Insert to note at cursor, Regenerate
Note that editing user message will trigger regenerate automatically when done.
And bug fixes.
- #585
- #586
- #588
- #594
Quick bug fixes
- #581
- #582
- Huge thanks to our awesome @gianluca-venturini for his incredible work on mobile support! Now you can use Copilot on your phone and tablet! 🎉🎉🎉
- Complete overhaul of how models work in Copilot settings. Now you can add any model to your model picker given its name, model provider, API key and base URL! No more waiting for me to add new models!
- Say goodbye to CORS errors for both chat models and embedding! The new model table in settings now lets you turn on "CORS" for individual chat models if you see CORS issues with them.
- Embedding models are immune to CORS errors by default!
- Caveat: this is powered by the Obsidian API's `requestUrl`, which does not support streaming of LLM responses. So streaming is disabled whenever you have CORS on in Copilot settings. Please upvote this feature request to let Obsidian know you need streaming!
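The trade-off described above boils down to a simple rule; here is a minimal illustrative sketch (hypothetical names, not the plugin's actual code), assuming a per-model CORS toggle like the one in settings:

```typescript
// Obsidian's requestUrl cannot stream responses, so any model routed
// through it for CORS reasons must fall back to non-streaming output.
function streamingEnabled(
  corsToggleOn: boolean,
  userWantsStreaming: boolean
): boolean {
  return userWantsStreaming && !corsToggleOn;
}
```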
Another long-awaited major update: message styling revamp, plus math and code syntax highlighting support! 🎉🎉🎉

- Now the messages are more compact and clean, with better math, code and table support.
- The Send button turns into a Stop button while streaming; the old Stop button is gone.
- Some housekeeping and minor tweaks
- Refactored Settings components
- Added prettier and husky for formatting pre-commit hook
- Show default system prompt as placeholder for better visibility
- Bug fix: find notes by path corner case
- Community contribution: @pontabi 's first ever PR, aligns Copy button at the bottom right of messages
We have some awesome updates this time!
- No more CORS errors for any OpenAI replacement API! Now you can use any 3rd party OpenAI replacement without CORS issues with the new toggle in Advanced settings. Big thanks to @Ebonsignori! #495
- GEMINI 1.5 PRO and GEMINI 1.5 FLASH added! Thanks to @anpigon #497
- Custom model fields added for OpenAI and Google. Note that when an OpenAI proxy base URL is present, the override logic is: proxy model name > custom model name (this addition) > model dropdown. #499
- Add setting to turn built-in Copilot commands on and off to reduce command menu clutter #500
- Fix 2 long-time bugs where user messages are duplicated in the saved note, and custom prompt commands go missing when the note is not focused #501 #502
- GPT-3 models are removed since GPT-4o-mini is superior in every way.
- When switching models, the actual model name used in the API call is shown in the Notice banner, better for debugging.
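The override precedence from #499 can be pictured as a simple fallback chain; a minimal sketch with hypothetical field names (not the plugin's actual settings object):

```typescript
interface ModelOverrides {
  proxyModelName?: string;  // used when an OpenAI proxy base URL is set
  customModelName?: string; // the new custom model field
  dropdownModel: string;    // the regular model dropdown value
}

// proxy model name > custom model name > model dropdown.
// Using || so empty-string fields fall through to the next option.
function resolveModelName(o: ModelOverrides): string {
  return o.proxyModelName || o.customModelName || o.dropdownModel;
}
```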
Sorry for the delay folks, I was afk for quite a while but am back now!
- GPT 4o and mini are added.
- "Claude 3" renamed to just "Claude" and defaults to the new best
claude-3-5-sonnet-20240620model (reset or manual input required) - Fix a bug where source link is broken when vault name has spaces
- Groq is added
- OpenAI organization id added
- Summarize Selection added to context menu
- Add fish CORS example
Big thanks to all community contributions!! #482, #446, #445, #441, #436
- Fixed a bug where frontmatter parsing was failing
- Fix missing command #353
- Add exclude filter for indexing #334
- Implement a first iteration of the custom retriever #331
- Implement note title mention in Chat and Vault QA mode
- Now if you type `[[` it will trigger a modal with a list of all note titles to pick from
- In Chat mode, a direct `[[]]` note title mention sends the note content in the prompt in the background, similar to how custom prompts work.
- In Vault QA mode, a direct `[[]]` note title mention ensures that the retriever puts that note at the top of the source notes
Bug fixes
Re-indexing for Vault QA is recommended!
- Brand new Vault QA (BETA) mode! This is a highly-anticipated feature and is a big step forward toward the vision of this plugin. Huge shoutout to @AntoineDao for working with me on this! #285
- Implement more sophisticated chunking and QA flow
- Rename current QA to Long Note QA
- Fix Long Note QA logic
- Add a list of clickable "Source Notes" titles below AI responses
- Show the chunks retrieved in debug info.
- Add command to Index Vault for QA
- Refresh Index button
- Add another command, Force complete re-index, for Vault QA
- Add notice banner for indexing progress
- Local embedding integration with Ollama
- Add max sources setting
- Add strategy ON_MODE_SWITCH, calls refresh index on mode switch
- Add count total token of vault command, and language in settings for cost estimation.
- Claude 3 integration. You can set the actual Claude 3 model variant in the settings. Default is `claude-3-sonnet-20240229`
- Fix a bug where chat context is not set correctly @Lisandra-dev #304
- Enable model name, embedding provider url, embedding model name overrides for various OpenAI drop-in replacement providers like one-api etc. #305
- Add encryption for API keys #306
- Update Ollama context window setting instruction #307
- Add filter notes by tags in "Set note context in Chat mode" command #291
- Add filter notes by tags in Advanced Custom Prompt #296
- (Chore) Remove all the different Azure model choices and leave one AZURE OPENAI to avoid confusion. The actual Azure model is set in the settings.
- Fix a bug where model switch fails after copilot commands #298
- Introducing advanced custom prompts! Now custom prompts don't require a text selection, and you can compose long and complex prompts by referencing a note or a folder of notes! #281
- Enable setting the full LM Studio URL instead of just the port #283
- Allow sending multiple notes to the prompt with one click in Chat mode! You can specify the note context using the new Copilot command `Set note context for Chat mode` #265
- Add ad-hoc custom prompt for selection. Thanks to @SeardnaSchmid #264
Bug fixes
- Only init embedding manager when switching to QA mode
- Avoid OpenAI key error when it's empty but the model or embedding provider is not set as OpenAI
- Add back Azure embedding deployment name setting
- Add the new OpenAI models announced today
- 2 new embedding models, small and large. Small is better than ada v2 but 1/5 the cost! Large is slightly more expensive than the old ada v2 but has much better quality.
- New `gpt-4-turbo-preview` alias pointing to `gpt-4-0125-preview`, and new `gpt-3.5-turbo-0125` (already covered by alias `gpt-3.5-turbo`).
- For more details check the OpenAI announcement page
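The alias relationships above can be pictured as a simple lookup table; an illustrative sketch of how aliases resolve (not OpenAI's or the plugin's actual code):

```typescript
// Aliases mentioned in this release; unknown names resolve to themselves.
const modelAliases: Record<string, string> = {
  "gpt-4-turbo-preview": "gpt-4-0125-preview",
  "gpt-3.5-turbo": "gpt-3.5-turbo-0125",
};

function resolveAlias(name: string): string {
  return modelAliases[name] ?? name;
}
```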
- Use LCEL for both Chat and QA chains, and use multi-query retriever to increase recall
- Add running dots indicator when loading AI messages since conversational QA with LCEL and multi-query retriever is a bit slower. Show the user it's not stuck, just loading

- Change `Conversation` mode to `Chat` mode, and `QA: Active Note` to just `QA`, to prepare for QA over the whole vault mode.
- Add a button to send the active note directly into the prompt in Chat mode. This button shows only in Chat mode, and it becomes the index button in QA mode.

- Add OpenRouterAI as a separate option in the model dropdown. You can specify the actual model in the settings. OpenRouter serves free and uncensored LLMs! Visit their site to check the models available: https://openrouter.ai/
- Bumped max tokens to 10000, and max conversation turns to 30
- Add LM Studio and Ollama as two separate options in the model dropdown
- Add setup guide
- Remove LocalAI option
- Add google api key in settings
- Add Gemini Pro model
- Add Save and Reload button to avoid manually toggling the plugin on and off every time settings change. Now, clicking either button triggers a plugin reload to let the new settings take effect
- Fix error handling
- No more "model_not_found" when the user has no access to the model; now it explicitly says you have no access
- Shows the missing API key message when the chat model is not properly initialized
- Shows model switch failure when Azure credentials are not provided
- Show the actual model name and chain type used in debug messages
- Make `gpt-4-turbo` the default model
- Upgraded langchainJS to v0.0.212
- Fix bugs and UX issues
- IME for East Asian languages now does not send on Enter
- OpenAI proxy base URL also overrides for the embedding model #211
- Clearing vector store should not affect new instance creation
- Add the new shiny GPT-4 TURBO model that has 128K context length! (I noticed that this new model is now very fast and the older ones including GPT-3 are becoming slower. Not sure if it's just me. Let me know if this happens to you too!)
- Implement cross-session local vector store using PouchDB
- Thanks to @Sokole1's contribution, Local Copilot does not need a proxy server and can just use the OpenAI Proxy Base URL setting. Please check the updated setup guide!
- Add proxy server for LocalAI
- Implement local model access
- Add LocalAI as an embedding provider
- Add a step-by-step guide for LocalAI setup for Apple Silicon and Windows WSL
- Created youtube demo video for v2.4.0
- Add support for 3rd party OpenAI proxy (mainly for users who cannot access OpenAI directly) #113

- Add command "Toggle Copilot Chat Window in Note Area" to toggle the Chat UI in the main note area. Good for consumption with smaller screens. #102, #5
- Fix system prompt bug #104
- Set AI chat font size using global font size setting in Obsidian. The chat font is always 2px smaller than the global font size. #92
- Fix Stop Streaming in QA mode #54
- Add Azure gpt35 16k #101
- Add Azure OpenAI as an embedding provider #81
- Fix default model not respected bug
- Added gpt-3.5-turbo-16k, gpt-4-32k and Azure OpenAI ones
- Force index rebuild when the button is clicked
- Add the new models to settings
- Fix UI issue where narrow chat view makes buttons inaccessible
- Add CohereAI as an embedding provider, it is FREE and stable!
- Use contextual compression retriever for QA
- Fix a bug where Rebuild Index button does not switch note context on first click
- Add "Edit custom prompt" command. Note that Title cannot be edited!
- Turn on mobile support to test it out.
Fix bug where plugin fails to load silently without OpenAI key
- Fix bug where copilot commands output in English when the source language is not English
- User custom prompt! Now you can create your own prompt as a command, the only limit is your imagination!
- To avoid confusion, the "Chain Selection" dropdown is renamed to "Mode Selection", and the "Use Active Note as Context" button is renamed to "Rebuild index for active note". It is not necessary to click this button every time before switching to "QA: Active Note" mode. And the button is moved to the right side of the dropdown.
- Local PouchDB integration to support local prompt library.
The biggest release yet!
- LangchainJS integration: allow more chain types to be used.
- In-memory vectordb powered QA, unlimited context for active note!
- Use sliders to set temperature, max token and conversation turns to avoid form input issues on different platforms.
- New token count command
- Migrate to LangChainJS to enable a lot more potential features and upgrades!
- Add flag `isVisible` to show chat messages optionally
- Fix "Use Active Note as Context" functionality
- Auto focus on the chat window's input text area when the window is toggled on
- Add user custom system prompt advanced setting
- Use toggle instead of dropdown for streaming and debugging mode settings
- Fix CSS conflicts with default styling
- Add better OpenAI error messages
- Fix typo for the "table of contents" command
- Add community plugin installation guide in readme
- Add a number of commands
- summarization
- eli5
- change tone
- fix grammar and spelling
- generate table-of-contents
- generate glossary
- press release
- make longer and shorter
- a number of new languages in translation suggestions
- Add new dev mode setting
- Add `requestUrl` from Obsidian API for non-streaming option
- Re-implemented streaming using SSE
- Various fixes including stop streaming and new chat handling
Add new commands for selection
- Simplify
- Emojify
- Remove URLs
- Translate
- Rewrite into tweet/thread
Fix a CSS `li` specificity bug that causes list marker problems in note reading mode.
Initial release.