
Releases: pfrankov/obsidian-local-gpt

1.14.3

18 Oct 13:04

Fixed the delay between first load and showing the settings tab. Closes #40

1.14.2

15 Oct 20:05

Changed the heuristic for setting request context length that was introduced in 1.12.0. Previously it could cause problems with requests larger than 2048 tokens.

1.14.1

12 Oct 20:51

Fixed PDF caching. It didn't work in 1.14.0 if you updated from 1.13+.
Increased the context cap from 7000 to 10000 characters for very long requests.

1.14.0

10 Oct 21:53

Added a nice ✨Enhancing loader for the embedding process.

Added PDF caching. No more waiting for PDFs to be parsed.
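
As a rough illustration of the caching idea (not the plugin's actual implementation), parsed text can be keyed by file path and modification time so an unchanged PDF is never parsed twice:

```ts
import { TFile, Vault } from "obsidian";

// Hypothetical cache: parsed PDF text keyed by file path, reused while the
// file's modification time stays the same.
const pdfTextCache = new Map<string, { mtime: number; text: string }>();

async function getPdfText(
  vault: Vault,
  file: TFile,
  parsePdf: (data: ArrayBuffer) => Promise<string> // the actual parser is out of scope here
): Promise<string> {
  const cached = pdfTextCache.get(file.path);
  if (cached && cached.mtime === file.stat.mtime) {
    return cached.text; // unchanged file: skip parsing entirely
  }
  const text = await parsePdf(await vault.readBinary(file));
  pdfTextCache.set(file.path, { mtime: file.stat.mtime, text });
  return text;
}
```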

Limited context to 7000 characters. It should be enough for anyone, and it makes Enhanced Actions' responses more precise.

1.13.1

07 Oct 20:16

Added the first batch of tests.
Fixed network requests on mobile. Closes #35

1.13.0

06 Oct 16:52

🎉 PDF support for Enhanced Actions

Works only with text-based PDFs. No OCR.
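
For reference, extracting the text layer of a PDF with pdf.js looks roughly like the sketch below; the plugin's actual parser may differ, and scanned PDFs simply yield no text (hence no OCR):

```ts
import * as pdfjsLib from "pdfjs-dist";

// Minimal text extraction with pdf.js: reads only the embedded text layer,
// page by page. Image-only (scanned) PDFs produce empty output.
async function extractPdfText(data: ArrayBuffer): Promise<string> {
  const pdf = await pdfjsLib.getDocument({ data }).promise;
  const pages: string[] = [];
  for (let pageNumber = 1; pageNumber <= pdf.numPages; pageNumber++) {
    const page = await pdf.getPage(pageNumber);
    const content = await page.getTextContent();
    pages.push(content.items.map((item: any) => item.str ?? "").join(" "));
  }
  return pages.join("\n\n");
}
```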

Persistent storage for Enhanced Actions cache

The cache now persists even after restarting Obsidian.
This significantly speeds up work with documents that have already been used for Enhanced Actions and have not changed.
The difference is clearly visible when comparing the first and second calls on the same 8 nested documents (39 chunks).

Note: after changing the embedding model, the cache is reset.
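
A minimal sketch of the persistence idea, assuming a JSON file written through the vault adapter (the plugin may use a different storage mechanism, e.g. IndexedDB):

```ts
import { DataAdapter } from "obsidian";

type EmbeddingCache = Record<string, number[]>; // chunk hash -> embedding vector

// Hypothetical location inside the plugin folder.
const CACHE_PATH = ".obsidian/plugins/local-gpt/embeddings-cache.json";

async function saveCache(adapter: DataAdapter, cache: EmbeddingCache): Promise<void> {
  await adapter.write(CACHE_PATH, JSON.stringify(cache));
}

async function loadCache(adapter: DataAdapter): Promise<EmbeddingCache> {
  if (!(await adapter.exists(CACHE_PATH))) return {};
  return JSON.parse(await adapter.read(CACHE_PATH));
}
```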

1.12.0

29 Sep 20:30

Migrated providers from fetch to remote.net.request. Closes #26
This avoids CORS issues and improves performance.
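
A rough sketch of what such a request can look like, assuming desktop Obsidian where the Electron remote module is reachable via window.require; the plugin's exact wiring may differ:

```ts
// Routing a POST through Electron's net module instead of fetch avoids the
// renderer's CORS restrictions when talking to local servers such as Ollama.
function netRequestJson(url: string, body: unknown): Promise<string> {
  const { remote } = (window as any).require("electron");
  return new Promise((resolve, reject) => {
    const request = remote.net.request({ method: "POST", url });
    request.setHeader("Content-Type", "application/json");
    request.on("response", (response: any) => {
      let data = "";
      response.on("data", (chunk: Buffer) => (data += chunk.toString()));
      response.on("end", () => resolve(data));
    });
    request.on("error", reject);
    request.write(JSON.stringify(body));
    request.end();
  });
}
```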

Refactored AI provider and embedding functionality, and added optimized model reloading.
By default, the Ollama API has a 2048-token context limit even for the largest models, so heuristics were added to provide a full context window when needed while keeping VRAM consumption down.
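
The rough shape of such a heuristic (simplified; the token estimate and limits below are assumptions, not the plugin's exact values):

```ts
// Ollama defaults to a 2048-token context window, so num_ctx is only raised
// when the prompt actually needs more, which keeps VRAM usage low otherwise.
const OLLAMA_DEFAULT_NUM_CTX = 2048;

function estimateTokens(prompt: string): number {
  return Math.ceil(prompt.length / 4); // crude ~4 characters per token approximation
}

function buildOllamaOptions(prompt: string, modelMaxContext = 8192): { num_ctx?: number } {
  const needed = estimateTokens(prompt) + 512; // headroom for the response
  if (needed <= OLLAMA_DEFAULT_NUM_CTX) return {}; // default window is enough
  return { num_ctx: Math.min(needed, modelMaxContext) };
}
```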

Added cache invalidation after changing the embedding model.
Previously, the cache was not invalidated even when the embedding model changed. That matters because embeddings are not interchangeable between models.
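
The invalidation rule itself is simple; a sketch with illustrative names:

```ts
// Embeddings from different models are not interchangeable, so the cache
// remembers which model produced it and is dropped whenever that changes.
interface EmbeddingStore {
  model: string;
  vectors: Map<string, number[]>; // chunk hash -> embedding
}

function ensureStoreForModel(store: EmbeddingStore, currentModel: string): EmbeddingStore {
  if (store.model === currentModel) return store;
  return { model: currentModel, vectors: new Map() }; // start fresh
}
```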

Added prompt templating for context and selection

Context information is below.
{{=CONTEXT_START=}}
---------------------
{{=CONTEXT=}}
{{=CONTEXT_END=}}
---------------------
Given the context information and not prior knowledge, answer the query.
Query: {{=SELECTION=}}
Answer:

More about prompt templating in prompt-templating.md.
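
A minimal sketch of how the placeholders could be substituted, assuming the block between {{=CONTEXT_START=}} and {{=CONTEXT_END=}} is dropped when no context is found (see prompt-templating.md for the actual behavior):

```ts
function renderPrompt(template: string, selection: string, context: string): string {
  let prompt = template;
  if (context) {
    prompt = prompt
      .replace("{{=CONTEXT_START=}}", "")
      .replace("{{=CONTEXT_END=}}", "")
      .replace("{{=CONTEXT=}}", context);
  } else {
    // No context: remove the whole block, markers included.
    prompt = prompt.replace(/\{\{=CONTEXT_START=\}\}[\s\S]*?\{\{=CONTEXT_END=\}\}/, "");
  }
  return prompt.replace("{{=SELECTION=}}", selection).trim();
}
```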

1.11.0

22 Sep 21:23

Fixed an issue where the plugin was using the current document as context for Enhanced Actions. It is now always ignored.

Fixed an issue where the position of the streamed text differed from its final placement.

Added highlighting of new text in the stream.

1.10.0

16 Sep 00:10

🎉 Implemented Enhanced Actions

That is, the ability to use context from links and backlinks, or plain RAG (Retrieval-Augmented Generation).

The idea is to enhance your actions with relevant context, not from your entire vault but only from the related documents. It fits perfectly with Obsidian's philosophy of linked documents.

Now you can write richer articles, produce more in-depth summaries of a whole topic, ask questions of your documents, translate texts without losing context, recap work meetings, or brainstorm on a given topic...
Share your uses of Enhanced Actions in the Discussion.
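
Conceptually, the context gathering can be pictured like the sketch below: collect the notes linked from and linking to the current note, embed their chunks, and keep the ones most similar to the selection (the functions here are illustrative, not the plugin's API):

```ts
import { App } from "obsidian";

// Linked and backlinked note paths for the active file, via the metadata cache.
function getRelatedPaths(app: App, activePath: string): string[] {
  const resolved = app.metadataCache.resolvedLinks; // { source: { target: count } }
  const outgoing = Object.keys(resolved[activePath] ?? {});
  const incoming = Object.keys(resolved).filter((source) => resolved[source]?.[activePath]);
  return [...new Set([...outgoing, ...incoming])];
}

// Chunks of those notes are then ranked by cosine similarity to the selection's embedding.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB) || 1);
}
```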

Setup

1. Install an embedding model for Ollama:

  • For English: ollama pull nomic-embed-text (fastest)
  • For other languages: ollama pull bge-m3 (slower, but more accurate)

Or just use text-embedding-3-large for OpenAI.

2. Select the embedding model in the plugin's settings

Also, try to use the largest default model with the largest context window.

3. Select some text and run any action on it

No additional steps are required. There is no visual indication for now, but you can check the quality of the results.

1.9.0

07 Sep 21:21

Changed the default model to Gemma 2 9B.

Added a New System Prompt action for creating actions tailored to user needs.

In Settings, added a two-line limit for the Prompt and System Prompt fields. Closes #27