Local GPT assistance for maximum privacy and offline access.

*Demo (no speedup): MacBook Pro 13, M1, 16GB, Ollama, orca-mini.*
The plugin allows you to open a context menu on selected text and pick an AI assistant's action.
Also works with images.

*Demo (no speedup): MacBook Pro 13, M1, 16GB, Ollama, bakllava.*
Default actions:
- Continue writing
- Summarize text
- Fix spelling and grammar
- Find action items in text
- General help (just use selected text as a prompt for any purpose)
You can also add your own actions, share the best ones, or get new ones from the community.
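Under the hood, an action boils down to a prompt template applied to the selected text. A minimal sketch of the idea (the function and template names are illustrative, not the plugin's actual API):

```python
# Illustrative sketch: an "action" as a prompt template filled with the
# currently selected text. Names here are hypothetical, not the plugin's API.
ACTIONS = {
    "Summarize text": "Summarize the following text:\n\n{text}",
    "Fix spelling and grammar": "Fix spelling and grammar in:\n\n{text}",
}

def build_prompt(action: str, selected_text: str) -> str:
    """Fill the chosen action's template with the selected text."""
    return ACTIONS[action].format(text=selected_text)

print(build_prompt("Summarize text", "Local GPT runs models offline."))
```

The resulting prompt is what gets sent to whichever AI provider you configured.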
Supported AI Providers:
- Ollama
- OpenAI-compatible server (including OpenAI itself)
![Settings](https://private-user-images.githubusercontent.com/584632/296392497-6ab2d802-13ed-42be-aab1-6a3f689b18a0.png)
This plugin is available in the Obsidian community plugin store: https://obsidian.md/plugins?id=local-gpt
You can also install this plugin via BRAT: `pfrankov/obsidian-local-gpt`
- Install Ollama.
- Install Qwen2 (default): `ollama pull qwen2`, or any other preferred model from the library.
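For reference, requests to a local Ollama instance go to its `/api/generate` endpoint on the default address `http://localhost:11434`. A minimal sketch with the standard library (the helper names are mine, not the plugin's):

```python
# Sketch of a non-streaming request to Ollama's /api/generate endpoint,
# assuming the default local address http://localhost:11434.
import json
import urllib.request

def build_payload(model: str, prompt: str, stream: bool = False) -> dict:
    """Body shape expected by Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": stream}

def generate(prompt: str, model: str = "qwen2") -> str:
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(build_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# generate("Why is the sky blue?")  # needs a running Ollama server
```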
Additional: if you want to enable streaming completion with Ollama, you should set the environment variable `OLLAMA_ORIGINS` to `*`:
- For macOS, run `launchctl setenv OLLAMA_ORIGINS "*"`.
- For Linux and Windows, check the docs.
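When streaming is enabled, Ollama replies with newline-delimited JSON chunks, each carrying a fragment of the completion in its `response` field. A small sketch of stitching them together (the sample chunks are made up):

```python
# Ollama streams completions as newline-delimited JSON; each chunk holds a
# fragment in "response", and the last one has "done": true.
import json

def collect_stream(lines):
    """Concatenate the "response" fragments from a streamed Ollama reply."""
    out = []
    for line in lines:
        chunk = json.loads(line)
        out.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(out)

sample = [
    '{"response": "Hel", "done": false}',
    '{"response": "lo!", "done": true}',
]
print(collect_stream(sample))  # → Hello!
```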
There are several options for running a local OpenAI-like server:
- llama.cpp
- llama-cpp-python
- LocalAI
- Oobabooga Text generation web UI
- LM Studio
- ...maybe more
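Whichever of these servers you run, they all accept an OpenAI-style chat completions request, so the body has the same shape everywhere. A sketch of that shape (the model name is a placeholder; use whatever your server reports):

```python
# The servers above expose an OpenAI-style chat completions endpoint, so the
# request body looks the same regardless of which one you run.
import json

def chat_request(model: str, user_text: str) -> dict:
    """Minimal OpenAI-style chat completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_text}],
    }

body = chat_request("local-model", "Summarize this note.")
print(json.dumps(body))
```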
- Open Obsidian Settings
- Go to Hotkeys
- Filter "Local" and you should see "Local GPT: Show context menu"
- Click on the `+` icon and press a hotkey (e.g. `⌘ + M`)
It is also possible to specify a fallback to handle requests: this allows you to use larger models when you are online and smaller ones when offline.
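Conceptually, the fallback works like a try-then-retry over two providers. An illustrative sketch (not the plugin's actual code):

```python
# Illustrative fallback logic: try the primary provider first and switch to
# the fallback if the request fails. Not the plugin's actual implementation.
def with_fallback(primary, fallback, prompt):
    try:
        return primary(prompt)
    except Exception:
        return fallback(prompt)

def cloud_model(prompt):          # stands in for a large online model
    raise ConnectionError("offline")

def local_model(prompt):          # stands in for a small local model
    return "[local] " + prompt

print(with_fallback(cloud_model, local_model, "hello"))  # → [local] hello
```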
*Example video: Kapture.2024-01-11.at.22.16.52.mp4*
Since you can provide any OpenAI-like server, it is possible to use OpenAI servers themselves.
Despite the ease of configuration, I do not recommend this method, since the main purpose of the plugin is to work with private LLMs.
- Select `OpenAI compatible server` in `Selected AI provider`
- Set `OpenAI compatible server URL` to `https://api.openai.com`
- Retrieve and paste your `API key` from the API keys page
- Click the "refresh" button and select the model that suits your needs (e.g. `gpt-3.5-turbo`)
- Colored Tags, which colorizes tags in distinguishable colors.