
Gp.nvim (GPT prompt) Neovim AI plugin


ChatGPT-like sessions, instructable text/code operations, speech-to-text and image generation in your favorite editor.

Youtube demos

Goals and Features

The goal is to extend Neovim with the power of GPT models in a simple, unobtrusive, extensible way,
keeping things as native as possible - reusing and integrating well with the natural features of (Neo)vim.

  • Streaming responses
    • no spinner wheel and waiting for the full answer
    • response generation can be canceled half way through
    • properly working undo (response can be undone with a single u)
  • Infinitely extensible via hook functions specified as part of the config
  • Minimum dependencies (neovim, curl, grep and optionally sox)
    • zero dependencies on other lua plugins to minimize chance of breakage
  • ChatGPT like sessions
    • just good old neovim buffers formatted as markdown with autosave and a few buffer-bound shortcuts
    • last chat also quickly accessible via a togglable popup window
    • chat finder - management popup for searching, previewing, deleting and opening chat sessions
  • Instructable text/code operations
    • templating mechanism to combine user instructions, selections etc into the gpt query
    • multimodal - same command works for normal/insert mode, with selection or a range
    • many possible output targets - rewrite, prepend, append, new buffer, popup
    • non interactive command mode available for common repetitive tasks implementable as simple hooks
      (explain something in a popup window, write unit tests for selected code into a new buffer,
      finish selected code based on comments in it, etc.)
    • custom instructions per repository with .gp.md file
      (instruct gpt to generate code using certain libs, packages, conventions and so on)
  • Speech to text support
    • a mouth is 2-4x faster than fingers when it comes to outputting words - use it where it makes sense
      (dictating comments and notes, asking gpt questions, giving instructions for code operations, ..)
  • Image generation
    • be even less tempted to open the browser with the ability to generate images directly from Neovim

Install

1. Install the plugin

Snippets for your preferred package manager:

```lua
-- lazy.nvim
{
    "robitx/gp.nvim",
    config = function()
        local conf = {
            -- For customization, refer to Install > Configuration in the Documentation/Readme
        }
        require("gp").setup(conf)

        -- Setup shortcuts here (see Usage > Shortcuts in the Documentation/Readme)
    end,
}
```

```lua
-- packer.nvim
use({
    "robitx/gp.nvim",
    config = function()
        local conf = {
            -- For customization, refer to Install > Configuration in the Documentation/Readme
        }
        require("gp").setup(conf)

        -- Setup shortcuts here (see Usage > Shortcuts in the Documentation/Readme)
    end,
})
```

```lua
-- vim-plug
Plug 'robitx/gp.nvim'

local conf = {
    -- For customization, refer to Install > Configuration in the Documentation/Readme
}
require("gp").setup(conf)

-- Setup shortcuts here (see Usage > Shortcuts in the Documentation/Readme)
```

2. OpenAI API key

Make sure you have an OpenAI API key. Get one here and use it in 5. Configuration. Also consider setting up usage limits so you won't get surprised at the end of the month.

The OpenAI API key can be passed to the plugin in multiple ways:

| Method | Example | Security level |
| --- | --- | --- |
| hardcoded string | `openai_api_key = "sk-...",` | Low |
| default env var | set `OPENAI_API_KEY` environment variable in shell config | Medium |
| custom env var | `openai_api_key = os.getenv("CUSTOM_ENV_NAME"),` | Medium |
| read from file | `openai_api_key = { "cat", "path_to_api_key" },` | Medium-High |
| password manager | `openai_api_key = { "bw", "get", "password", "OAI_API_KEY" },` | High |

If openai_api_key is a table, Gp runs it asynchronously to avoid blocking Neovim (password managers can take a second or two).
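
For example, a setup reading the key from a file could look like the following sketch (the file path is just an illustration):

```lua
require("gp").setup({
    -- the command is run asynchronously and its stdout becomes the API key
    openai_api_key = { "cat", vim.fn.expand("~/.config/openai/api_key") },
})
```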

3. Multiple providers

The following LLM providers are currently supported besides OpenAI:

  • Ollama for local/offline open-source models. The plugin assumes you have the Ollama service up and running with configured models available (the default Ollama agent uses Llama3).
  • GitHub Copilot with a Copilot license (zbirenbaum/copilot.lua or github/copilot.vim for autocomplete). You can access the underlying GPT-4 model without paying anything extra (essentially unlimited GPT-4 access).
  • Perplexity.ai Pro users have $5/month free API credits available (the default PPLX agent uses Mixtral-8x7b).
  • Anthropic to access Claude models, which currently outperform GPT-4 in some benchmarks.
  • Google Gemini with a quite generous free tier but some geo-restrictions (e.g. the EU).
  • Any other "OpenAI chat/completions" compatible endpoint (Azure, LM Studio, etc.)

Below is an example of the relevant configuration part enabling some of these. The secret field has the same capabilities as openai_api_key (which is still supported for compatibility).

```lua
providers = {
	openai = {
		endpoint = "https://api.openai.com/v1/chat/completions",
		secret = os.getenv("OPENAI_API_KEY"),
	},

	-- azure = {...},

	copilot = {
		endpoint = "https://api.githubcopilot.com/chat/completions",
		secret = {
			"bash",
			"-c",
			"cat ~/.config/github-copilot/hosts.json | sed -e 's/.*oauth_token...//;s/\".*//'",
		},
	},

	pplx = {
		endpoint = "https://api.perplexity.ai/chat/completions",
		secret = os.getenv("PPLX_API_KEY"),
	},

	ollama = {
		endpoint = "http://localhost:11434/v1/chat/completions",
	},

	googleai = {
		endpoint = "https://generativelanguage.googleapis.com/v1beta/models/{{model}}:streamGenerateContent?key={{secret}}",
		secret = os.getenv("GOOGLEAI_API_KEY"),
	},

	anthropic = {
		endpoint = "https://api.anthropic.com/v1/messages",
		secret = os.getenv("ANTHROPIC_API_KEY"),
	},
},
```

Each of these providers has some agents preconfigured. Below is an example of how to disable the predefined ChatGPT3-5 agent and create a custom one. If the provider field is missing, OpenAI is assumed for backward compatibility.

```lua
agents = {
	{
		name = "ChatGPT3-5",
		disable = true,
	},
	{
		name = "MyCustomAgent",
		provider = "copilot",
		chat = true,
		command = true,
		model = { model = "gpt-4-turbo" },
		system_prompt = "Answer any query with just: Sure thing..",
	},
},
```

4. Dependencies

The core plugin only needs curl installed to make calls to the OpenAI API and grep for ChatFinder. So Linux, BSD and macOS should be covered.

Voice commands (:GpWhisper*) depend on SoX (Sound eXchange) to handle audio recording and processing:

  • Mac OS: brew install sox
  • Ubuntu/Debian: apt-get install sox libsox-fmt-mp3
  • Arch Linux: pacman -S sox
  • Redhat/CentOS: yum install sox
  • NixOS: nix-env -i sox

5. Configuration

Below is a linked snippet with the default values, but I suggest starting with the minimal config possible (just openai_api_key if you don't have the OPENAI_API_KEY env variable set up). Defaults change over time to improve things and options might get deprecated - it's better to change only things where the default doesn't fit your needs.
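
In practice such a minimal config can be a one-liner:

```lua
-- with OPENAI_API_KEY exported in your shell, no options are needed
require("gp").setup({})
```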

gp.nvim/lua/gp/config.lua (lines 10 to 607 in a88225e):

local config = {
-- Please start with minimal config possible.
-- Just openai_api_key if you don't have OPENAI_API_KEY env set up.
-- Defaults change over time to improve things, options might get deprecated.
-- It's better to change only things where the default doesn't fit your needs.
-- required openai api key (string or table with command and arguments)
-- openai_api_key = { "cat", "path_to/openai_api_key" },
-- openai_api_key = { "bw", "get", "password", "OPENAI_API_KEY" },
-- openai_api_key = "sk-...",
-- openai_api_key = os.getenv("env_name.."),
openai_api_key = os.getenv("OPENAI_API_KEY"),
-- at least one working provider is required
-- to disable a provider set it to empty table like openai = {}
providers = {
-- secrets can be strings or tables with command and arguments
-- secret = { "cat", "path_to/openai_api_key" },
-- secret = { "bw", "get", "password", "OPENAI_API_KEY" },
-- secret = "sk-...",
-- secret = os.getenv("env_name.."),
openai = {
disable = false,
endpoint = "https://api.openai.com/v1/chat/completions",
-- secret = os.getenv("OPENAI_API_KEY"),
},
azure = {
disable = true,
endpoint = "https://$URL.openai.azure.com/openai/deployments/{{model}}/chat/completions",
secret = os.getenv("AZURE_API_KEY"),
},
copilot = {
disable = true,
endpoint = "https://api.githubcopilot.com/chat/completions",
secret = {
"bash",
"-c",
"cat ~/.config/github-copilot/hosts.json | sed -e 's/.*oauth_token...//;s/\".*//'",
},
},
ollama = {
disable = true,
endpoint = "http://localhost:11434/v1/chat/completions",
secret = "dummy_secret",
},
lmstudio = {
disable = true,
endpoint = "http://localhost:1234/v1/chat/completions",
secret = "dummy_secret",
},
googleai = {
disable = true,
endpoint = "https://generativelanguage.googleapis.com/v1beta/models/{{model}}:streamGenerateContent?key={{secret}}",
secret = os.getenv("GOOGLEAI_API_KEY"),
},
pplx = {
disable = true,
endpoint = "https://api.perplexity.ai/chat/completions",
secret = os.getenv("PPLX_API_KEY"),
},
anthropic = {
disable = true,
endpoint = "https://api.anthropic.com/v1/messages",
secret = os.getenv("ANTHROPIC_API_KEY"),
},
},
-- prefix for all commands
cmd_prefix = "Gp",
-- optional curl parameters (for proxy, etc.)
-- curl_params = { "--proxy", "http://X.X.X.X:XXXX" }
curl_params = {},
-- log file location
log_file = vim.fn.stdpath("log"):gsub("/$", "") .. "/gp.nvim.log",
-- write sensitive data to log file for debugging purposes (like api keys)
log_sensitive = false,
-- directory for persisting state dynamically changed by user (like model or persona)
state_dir = vim.fn.stdpath("data"):gsub("/$", "") .. "/gp/persisted",
-- default agent names set during startup, if nil last used agent is used
default_command_agent = nil,
default_chat_agent = nil,
-- default command agents (model + persona)
-- name, model and system_prompt are mandatory fields
-- to use agent for chat set chat = true, for command set command = true
-- to remove some default agent completely set it like:
-- agents = { { name = "ChatGPT3-5", disable = true, }, ... },
agents = {
{
name = "ExampleDisabledAgent",
disable = true,
},
{
name = "ChatGPT4o",
chat = true,
command = false,
-- string with model name or table with model name and parameters
model = { model = "gpt-4o", temperature = 1.1, top_p = 1 },
-- system prompt (use this to specify the persona/role of the AI)
system_prompt = require("gp.defaults").chat_system_prompt,
},
{
provider = "openai",
name = "ChatGPT4o-mini",
chat = true,
command = false,
-- string with model name or table with model name and parameters
model = { model = "gpt-4o-mini", temperature = 1.1, top_p = 1 },
-- system prompt (use this to specify the persona/role of the AI)
system_prompt = require("gp.defaults").chat_system_prompt,
},
{
provider = "copilot",
name = "ChatCopilot",
chat = true,
command = false,
-- string with model name or table with model name and parameters
model = { model = "gpt-4o", temperature = 1.1, top_p = 1 },
-- system prompt (use this to specify the persona/role of the AI)
system_prompt = require("gp.defaults").chat_system_prompt,
},
{
provider = "googleai",
name = "ChatGemini",
chat = true,
command = false,
-- string with model name or table with model name and parameters
model = { model = "gemini-pro", temperature = 1.1, top_p = 1 },
-- system prompt (use this to specify the persona/role of the AI)
system_prompt = require("gp.defaults").chat_system_prompt,
},
{
provider = "pplx",
name = "ChatPerplexityLlama3.1-8B",
chat = true,
command = false,
-- string with model name or table with model name and parameters
model = { model = "llama-3.1-sonar-small-128k-chat", temperature = 1.1, top_p = 1 },
-- system prompt (use this to specify the persona/role of the AI)
system_prompt = require("gp.defaults").chat_system_prompt,
},
{
provider = "anthropic",
name = "ChatClaude-3-5-Sonnet",
chat = true,
command = false,
-- string with model name or table with model name and parameters
model = { model = "claude-3-5-sonnet-20240620", temperature = 0.8, top_p = 1 },
-- system prompt (use this to specify the persona/role of the AI)
system_prompt = require("gp.defaults").chat_system_prompt,
},
{
provider = "anthropic",
name = "ChatClaude-3-Haiku",
chat = true,
command = false,
-- string with model name or table with model name and parameters
model = { model = "claude-3-haiku-20240307", temperature = 0.8, top_p = 1 },
-- system prompt (use this to specify the persona/role of the AI)
system_prompt = require("gp.defaults").chat_system_prompt,
},
{
provider = "ollama",
name = "ChatOllamaLlama3.1-8B",
chat = true,
command = false,
-- string with model name or table with model name and parameters
model = {
model = "llama3.1",
temperature = 0.6,
top_p = 1,
min_p = 0.05,
},
-- system prompt (use this to specify the persona/role of the AI)
system_prompt = "You are a general AI assistant.",
},
{
provider = "lmstudio",
name = "ChatLMStudio",
chat = true,
command = false,
-- string with model name or table with model name and parameters
model = {
model = "dummy",
temperature = 0.97,
top_p = 1,
num_ctx = 8192,
},
-- system prompt (use this to specify the persona/role of the AI)
system_prompt = "You are a general AI assistant.",
},
{
provider = "openai",
name = "CodeGPT4o",
chat = false,
command = true,
-- string with model name or table with model name and parameters
model = { model = "gpt-4o", temperature = 0.8, top_p = 1 },
-- system prompt (use this to specify the persona/role of the AI)
system_prompt = require("gp.defaults").code_system_prompt,
},
{
provider = "openai",
name = "CodeGPT4o-mini",
chat = false,
command = true,
-- string with model name or table with model name and parameters
model = { model = "gpt-4o-mini", temperature = 0.7, top_p = 1 },
-- system prompt (use this to specify the persona/role of the AI)
system_prompt = "Please return ONLY code snippets.\nSTART AND END YOUR ANSWER WITH:\n\n```",
},
{
provider = "copilot",
name = "CodeCopilot",
chat = false,
command = true,
-- string with model name or table with model name and parameters
model = { model = "gpt-4o", temperature = 0.8, top_p = 1, n = 1 },
-- system prompt (use this to specify the persona/role of the AI)
system_prompt = require("gp.defaults").code_system_prompt,
},
{
provider = "googleai",
name = "CodeGemini",
chat = false,
command = true,
-- string with model name or table with model name and parameters
model = { model = "gemini-pro", temperature = 0.8, top_p = 1 },
system_prompt = require("gp.defaults").code_system_prompt,
},
{
provider = "pplx",
name = "CodePerplexityLlama3.1-8B",
chat = false,
command = true,
-- string with model name or table with model name and parameters
model = { model = "llama-3.1-sonar-small-128k-chat", temperature = 0.8, top_p = 1 },
system_prompt = require("gp.defaults").code_system_prompt,
},
{
provider = "anthropic",
name = "CodeClaude-3-5-Sonnet",
chat = false,
command = true,
-- string with model name or table with model name and parameters
model = { model = "claude-3-5-sonnet-20240620", temperature = 0.8, top_p = 1 },
system_prompt = require("gp.defaults").code_system_prompt,
},
{
provider = "anthropic",
name = "CodeClaude-3-Haiku",
chat = false,
command = true,
-- string with model name or table with model name and parameters
model = { model = "claude-3-haiku-20240307", temperature = 0.8, top_p = 1 },
system_prompt = require("gp.defaults").code_system_prompt,
},
{
provider = "ollama",
name = "CodeOllamaLlama3.1-8B",
chat = false,
command = true,
-- string with model name or table with model name and parameters
model = {
model = "llama3.1",
temperature = 0.4,
top_p = 1,
min_p = 0.05,
},
-- system prompt (use this to specify the persona/role of the AI)
system_prompt = require("gp.defaults").code_system_prompt,
},
},
-- directory for storing chat files
chat_dir = vim.fn.stdpath("data"):gsub("/$", "") .. "/gp/chats",
-- chat user prompt prefix
chat_user_prefix = "💬:",
-- chat assistant prompt prefix (static string or a table {static, template})
-- first string has to be static, second string can contain template {{agent}}
-- just a static string is legacy and the [{{agent}}] element is added automatically
-- if you really want just a static string, make it a table with one element { "🤖:" }
chat_assistant_prefix = { "🤖:", "[{{agent}}]" },
-- The banner shown at the top of each chat file.
chat_template = require("gp.defaults").chat_template,
-- if you want more real estate in your chat files and don't need the helper text
-- chat_template = require("gp.defaults").short_chat_template,
-- chat topic generation prompt
chat_topic_gen_prompt = "Summarize the topic of our conversation above"
.. " in two or three words. Respond only with those words.",
-- chat topic model (string with model name or table with model name and parameters)
-- explicitly confirm deletion of a chat file
chat_confirm_delete = true,
-- conceal model parameters in chat
chat_conceal_model_params = true,
-- local shortcuts bound to the chat buffer
-- (be careful to choose something which will work across specified modes)
chat_shortcut_respond = { modes = { "n", "i", "v", "x" }, shortcut = "<C-g><C-g>" },
chat_shortcut_delete = { modes = { "n", "i", "v", "x" }, shortcut = "<C-g>d" },
chat_shortcut_stop = { modes = { "n", "i", "v", "x" }, shortcut = "<C-g>s" },
chat_shortcut_new = { modes = { "n", "i", "v", "x" }, shortcut = "<C-g>c" },
-- default search term when using :GpChatFinder
chat_finder_pattern = "topic ",
chat_finder_mappings = {
delete = { modes = { "n", "i", "v", "x" }, shortcut = "<C-d>" },
},
-- if true, finished ChatResponder won't move the cursor to the end of the buffer
chat_free_cursor = false,
-- use prompt buftype for chats (:h prompt-buffer)
chat_prompt_buf_type = false,
-- how to display GpChatToggle or GpContext
---@type "popup" | "split" | "vsplit" | "tabnew"
toggle_target = "vsplit",
-- styling for chatfinder
---@type "single" | "double" | "rounded" | "solid" | "shadow" | "none"
style_chat_finder_border = "single",
-- margins are number of characters or lines
style_chat_finder_margin_bottom = 8,
style_chat_finder_margin_left = 1,
style_chat_finder_margin_right = 2,
style_chat_finder_margin_top = 2,
-- how wide should the preview be, number between 0.0 and 1.0
style_chat_finder_preview_ratio = 0.5,
-- styling for popup
---@type "single" | "double" | "rounded" | "solid" | "shadow" | "none"
style_popup_border = "single",
-- margins are number of characters or lines
style_popup_margin_bottom = 8,
style_popup_margin_left = 1,
style_popup_margin_right = 2,
style_popup_margin_top = 2,
style_popup_max_width = 160,
-- in case of visibility colisions with other plugins, you can increase/decrease zindex
zindex = 49,
-- command config and templates below are used by commands like GpRewrite, GpEnew, etc.
-- command prompt prefix for asking user for input (supports {{agent}} template variable)
command_prompt_prefix_template = "🤖 {{agent}} ~ ",
-- auto select command response (easier chaining of commands)
-- if false it also frees up the buffer cursor for further editing elsewhere
command_auto_select_response = true,
-- templates
template_selection = "I have the following from {{filename}}:"
.. "\n\n```{{filetype}}\n{{selection}}\n```\n\n{{command}}",
template_rewrite = "I have the following from {{filename}}:"
.. "\n\n```{{filetype}}\n{{selection}}\n```\n\n{{command}}"
.. "\n\nRespond exclusively with the snippet that should replace the selection above.",
template_append = "I have the following from {{filename}}:"
.. "\n\n```{{filetype}}\n{{selection}}\n```\n\n{{command}}"
.. "\n\nRespond exclusively with the snippet that should be appended after the selection above.",
template_prepend = "I have the following from {{filename}}:"
.. "\n\n```{{filetype}}\n{{selection}}\n```\n\n{{command}}"
.. "\n\nRespond exclusively with the snippet that should be prepended before the selection above.",
template_command = "{{command}}",
-- https://platform.openai.com/docs/guides/speech-to-text/quickstart
-- Whisper costs $0.006 / minute (rounded to the nearest second)
-- by eliminating silence and speeding up the tempo of the recording
-- we can reduce the cost by 50% or more and get the results faster
whisper = {
-- you can disable whisper completely by whisper = {disable = true}
disable = false,
-- OpenAI audio/transcriptions api endpoint to transcribe audio to text
endpoint = "https://api.openai.com/v1/audio/transcriptions",
-- directory for storing whisper files
store_dir = (os.getenv("TMPDIR") or os.getenv("TEMP") or "/tmp") .. "/gp_whisper",
-- multiplier of RMS level dB for threshold used by sox to detect silence vs speech
-- decibels are negative, the recording is normalized to -3dB =>
-- increase this number to pick up more (weaker) sounds as possible speech
-- decrease this number to pick up only louder sounds as possible speech
-- you can disable silence trimming by setting this a very high number (like 1000.0)
silence = "1.75",
-- whisper tempo (1.0 is normal speed)
tempo = "1.75",
-- The language of the input audio, in ISO-639-1 format.
language = "en",
-- command to use for recording can be nil (unset) for automatic selection
-- string ("sox", "arecord", "ffmpeg") or table with command and arguments:
-- sox is the most universal, but can have start/end cropping issues caused by latency
-- arecord is linux only, but has no cropping issues and is faster
-- ffmpeg in the default configuration is macos only, but can be used on any platform
-- (see https://trac.ffmpeg.org/wiki/Capture/Desktop for more info)
-- below is the default configuration for all three commands:
-- whisper_rec_cmd = {"sox", "-c", "1", "--buffer", "32", "-d", "rec.wav", "trim", "0", "60:00"},
-- whisper_rec_cmd = {"arecord", "-c", "1", "-f", "S16_LE", "-r", "48000", "-d", "3600", "rec.wav"},
-- whisper_rec_cmd = {"ffmpeg", "-y", "-f", "avfoundation", "-i", ":0", "-t", "3600", "rec.wav"},
rec_cmd = nil,
},
-- image generation settings
image = {
-- you can disable image generation logic completely by image = {disable = true}
disable = false,
-- openai api key (string or table with command and arguments)
-- secret = { "cat", "path_to/openai_api_key" },
-- secret = { "bw", "get", "password", "OPENAI_API_KEY" },
-- secret = "sk-...",
-- secret = os.getenv("env_name.."),
-- if missing openai_api_key is used
secret = os.getenv("OPENAI_API_KEY"),
-- image prompt prefix for asking user for input (supports {{agent}} template variable)
prompt_prefix_template = "🖌️ {{agent}} ~ ",
-- image prompt prefix for asking location to save the image
prompt_save = "🖌️💾 ~ ",
-- default folder for saving images
store_dir = (os.getenv("TMPDIR") or os.getenv("TEMP") or "/tmp") .. "/gp_images",
-- default image agents (model + settings)
-- to remove some default agent completely set it like:
-- image.agents = { { name = "DALL-E-3-1024x1792-vivid", disable = true, }, ... },
agents = {
{
name = "ExampleDisabledAgent",
disable = true,
},
{
name = "DALL-E-3-1024x1024-vivid",
model = "dall-e-3",
quality = "standard",
style = "vivid",
size = "1024x1024",
},
{
name = "DALL-E-3-1792x1024-vivid",
model = "dall-e-3",
quality = "standard",
style = "vivid",
size = "1792x1024",
},
{
name = "DALL-E-3-1024x1792-vivid",
model = "dall-e-3",
quality = "standard",
style = "vivid",
size = "1024x1792",
},
{
name = "DALL-E-3-1024x1024-natural",
model = "dall-e-3",
quality = "standard",
style = "natural",
size = "1024x1024",
},
{
name = "DALL-E-3-1792x1024-natural",
model = "dall-e-3",
quality = "standard",
style = "natural",
size = "1792x1024",
},
{
name = "DALL-E-3-1024x1792-natural",
model = "dall-e-3",
quality = "standard",
style = "natural",
size = "1024x1792",
},
{
name = "DALL-E-3-1024x1024-vivid-hd",
model = "dall-e-3",
quality = "hd",
style = "vivid",
size = "1024x1024",
},
{
name = "DALL-E-3-1792x1024-vivid-hd",
model = "dall-e-3",
quality = "hd",
style = "vivid",
size = "1792x1024",
},
{
name = "DALL-E-3-1024x1792-vivid-hd",
model = "dall-e-3",
quality = "hd",
style = "vivid",
size = "1024x1792",
},
{
name = "DALL-E-3-1024x1024-natural-hd",
model = "dall-e-3",
quality = "hd",
style = "natural",
size = "1024x1024",
},
{
name = "DALL-E-3-1792x1024-natural-hd",
model = "dall-e-3",
quality = "hd",
style = "natural",
size = "1792x1024",
},
{
name = "DALL-E-3-1024x1792-natural-hd",
model = "dall-e-3",
quality = "hd",
style = "natural",
size = "1024x1792",
},
},
},
-- example hook functions (see Extend functionality section in the README)
hooks = {
-- GpInspectPlugin provides a detailed inspection of the plugin state
InspectPlugin = function(plugin, params)
local bufnr = vim.api.nvim_create_buf(false, true)
local copy = vim.deepcopy(plugin)
local key = copy.config.openai_api_key or ""
copy.config.openai_api_key = key:sub(1, 3) .. string.rep("*", #key - 6) .. key:sub(-3)
local plugin_info = string.format("Plugin structure:\n%s", vim.inspect(copy))
local params_info = string.format("Command params:\n%s", vim.inspect(params))
local lines = vim.split(plugin_info .. "\n" .. params_info, "\n")
vim.api.nvim_buf_set_lines(bufnr, 0, -1, false, lines)
vim.api.nvim_win_set_buf(0, bufnr)
end,
-- GpInspectLog for checking the log file
InspectLog = function(plugin, params)
local log_file = plugin.config.log_file
local buffer = plugin.helpers.get_buffer(log_file)
if not buffer then
vim.cmd("e " .. log_file)
else
vim.cmd("buffer " .. buffer)
end
end,
-- GpImplement rewrites the provided selection/range based on comments in it
Implement = function(gp, params)
local template = "Having the following from {{filename}}:\n\n"
.. "```{{filetype}}\n{{selection}}\n```\n\n"
.. "Please rewrite this according to the contained instructions."
.. "\n\nRespond exclusively with the snippet that should replace the selection above."
local agent = gp.get_command_agent()
gp.logger.info("Implementing selection with agent: " .. agent.name)
gp.Prompt(
params,
gp.Target.rewrite,
agent,
template,
nil, -- command will run directly without any prompting for user input
nil -- no predefined instructions (e.g. speech-to-text from Whisper)
)
end,
-- your own functions can go here, see README for more examples like
-- :GpExplain, :GpUnitTests.., :GpTranslator etc.
-- -- example of making :%GpChatNew a dedicated command which
-- -- opens new chat with the entire current buffer as a context
-- BufferChatNew = function(gp, _)
-- -- call GpChatNew command in range mode on whole buffer
-- vim.api.nvim_command("%" .. gp.config.cmd_prefix .. "ChatNew")
-- end,
-- -- example of adding command which opens new chat dedicated for translation
-- Translator = function(gp, params)
-- local chat_system_prompt = "You are a Translator, please translate between English and Chinese."
-- gp.cmd.ChatNew(params, chat_system_prompt)
--
-- -- -- you can also create a chat with a specific fixed agent like this:
-- -- local agent = gp.get_chat_agent("ChatGPT4o")
-- -- gp.cmd.ChatNew(params, chat_system_prompt, agent)
-- end,
-- -- example of adding command which writes unit tests for the selected code
-- UnitTests = function(gp, params)
-- local template = "I have the following code from {{filename}}:\n\n"
-- .. "```{{filetype}}\n{{selection}}\n```\n\n"
-- .. "Please respond by writing table driven unit tests for the code above."
-- local agent = gp.get_command_agent()
-- gp.Prompt(params, gp.Target.enew, agent, template)
-- end,
-- -- example of adding command which explains the selected code
-- Explain = function(gp, params)
-- local template = "I have the following code from {{filename}}:\n\n"
-- .. "```{{filetype}}\n{{selection}}\n```\n\n"
-- .. "Please respond by explaining the code above."
-- local agent = gp.get_chat_agent()
-- gp.Prompt(params, gp.Target.popup, agent, template)
-- end,
},
}

Usage

Chat commands

:GpChatNew

Open a fresh chat in the current window. It can be either empty or include the visual selection or specified range as context. This command also supports subcommands for layout specification:

  • :GpChatNew vsplit Open a fresh chat in a vertical split window.
  • :GpChatNew split Open a fresh chat in a horizontal split window.
  • :GpChatNew tabnew Open a fresh chat in a new tab.
  • :GpChatNew popup Open a fresh chat in a popup window.

:GpChatPaste

Paste the selection or specified range into the latest chat, simplifying the addition of code from multiple files into a single chat buffer. This command also supports subcommands for layout specification:

  • :GpChatPaste vsplit Paste into the latest chat in a vertical split window.
  • :GpChatPaste split Paste into the latest chat in a horizontal split window.
  • :GpChatPaste tabnew Paste into the latest chat in a new tab.
  • :GpChatPaste popup Paste into the latest chat in a popup window.

:GpChatToggle

Open chat in a toggleable popup window, showing the last active chat or a fresh one with selection or a range as a context. This command also supports subcommands for layout specification:

  • :GpChatToggle vsplit Toggle chat in a vertical split window.
  • :GpChatToggle split Toggle chat in a horizontal split window.
  • :GpChatToggle tabnew Toggle chat in a new tab.
  • :GpChatToggle popup Toggle chat in a popup window.

:GpChatFinder

Open a dialog to search through chats.

:GpChatRespond

Request a new GPT response for the current chat. Using :GpChatRespond N requests a new GPT response with only the last N messages as context, using everything from the end up to the Nth instance of 🗨:.. (N=1 is like asking a question in a new chat).

:GpChatDelete

Delete the current chat. By default it requires confirmation before deleting, which can be disabled in the config using chat_confirm_delete = false,.

Text/Code commands

:GpRewrite

Opens a dialog for entering a prompt. After providing prompt instructions into the dialog, the generated response replaces the current line in normal/insert mode, selected lines in visual mode, or the specified range (e.g., :%GpRewrite applies the rewrite to the entire buffer).

:GpRewrite {prompt} Executes directly with specified {prompt} instructions, bypassing the dialog. Suitable for mapping repetitive tasks to keyboard shortcuts or for automation using headless Neovim via terminal or shell scripts.
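
As a sketch, such a repetitive task could be bound to a visual-mode shortcut (the keymap and prompt text here are illustrative assumptions, not plugin defaults):

```lua
-- rewrite the visual selection with a fixed instruction, skipping the dialog
vim.keymap.set("v", "<C-g>t", ":<C-u>'<,'>GpRewrite add type annotations<cr>",
    { desc = "GPT: annotate selection" })
```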

:GpAppend

Similar to :GpRewrite, but the answer is added after the current line, visual selection, or range.

:GpPrepend

Similar to :GpRewrite, but the answer is added before the current line, visual selection, or range.

:GpEnew

Similar to :GpRewrite, but the answer is added into a new buffer in the current window.

:GpNew

Similar to :GpRewrite, but the answer is added into a new horizontal split window.

:GpVnew

Similar to :GpRewrite, but the answer is added into a new vertical split window.

:GpTabnew

Similar to :GpRewrite, but the answer is added into a new tab.

:GpPopup

Similar to :GpRewrite, but the answer is added into a pop-up window.

:GpImplement

Example hook command to develop code from comments in a visual selection or specified range.

:GpContext

Provides custom context per repository:

  • opens .gp.md file for a given repository in a togglable window.

  • appends selection/range to the context file when used in visual/range mode.

  • also supports subcommands for layout specification:

    • :GpContext vsplit Open .gp.md in a vertical split window.
    • :GpContext split Open .gp.md in a horizontal split window.
    • :GpContext tabnew Open .gp.md in a new tab.
    • :GpContext popup Open .gp.md in a popup window.
  • refer to Custom Instructions for more details.
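
For illustration, a repository's .gp.md could hold project conventions like these (the contents are just an example):

```md
Use TypeScript with strict mode enabled.
Prefer the repository's internal logger over console.log.
Write unit tests with vitest.
```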

Speech commands

:GpWhisper {lang?}

Transcription replaces the current line, visual selection or range in the current buffer. Use your mouth to ask a question in a chat buffer instead of writing it by hand, dictate some comments for the code, notes or even your next novel..

For the rest of the Whisper commands, the transcription is used as an editable prompt for the equivalent non-Whisper command (GpWhisperRewrite dictates instructions for GpRewrite and so on).

You can override the default language by setting {lang} to the two-letter code of your language (e.g. "en" for English, "fr" for French).
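For example, if you mostly dictate in one language, you can bake the language code into a mapping (a sketch; the <C-g>wf key choice is just an illustration, not a plugin default):

```lua
-- hypothetical mapping: always transcribe as French, regardless of the default
vim.keymap.set({ "n", "i" }, "<C-g>wf", "<cmd>GpWhisper fr<cr>",
    { noremap = true, silent = true, desc = "GPT prompt Whisper (French)" })
```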

:GpWhisperRewrite

Similar to :GpRewrite, but the prompt instruction dialog uses transcribed spoken instructions.

:GpWhisperAppend

Similar to :GpAppend, but the prompt instruction dialog uses transcribed spoken instructions for adding content after the current line, visual selection, or range.

:GpWhisperPrepend

Similar to :GpPrepend, but the prompt instruction dialog uses transcribed spoken instructions for adding content before the current line, selection, or range.

:GpWhisperEnew

Similar to :GpEnew, but the prompt instruction dialog uses transcribed spoken instructions for opening content in a new buffer within the current window.

:GpWhisperNew

Similar to :GpNew, but the prompt instruction dialog uses transcribed spoken instructions for opening content in a new horizontal split window.

:GpWhisperVnew

Similar to :GpVnew, but the prompt instruction dialog uses transcribed spoken instructions for opening content in a new vertical split window.

:GpWhisperTabnew

Similar to :GpTabnew, but the prompt instruction dialog uses transcribed spoken instructions for opening content in a new tab.

:GpWhisperPopup

Similar to :GpPopup, but the prompt instruction dialog uses transcribed spoken instructions for displaying content in a pop-up window.

Agent commands

:GpNextAgent

Cycles between the available agents based on the current buffer (chat agents if the current buffer is a chat, command agents otherwise). The agent setting is persisted on disk across Neovim instances.

:GpAgent

Displays currently used agents for chat and command instructions.

:GpAgent XY

Choose a new agent by name; the listed options depend on the current buffer (chat agents if the current buffer is a chat, command agents otherwise). The agent setting is persisted on disk across Neovim instances.

Image commands

:GpImage

Opens a dialog for entering a prompt describing the desired image. When generation finishes, a dialog opens for saving the image to disk.

:GpImageAgent

Displays currently used image agent (configuration).

:GpImageAgent XY

Choose a new "image agent" by name. In the context of images, an agent is essentially a configuration for the model, image size, quality and so on. The agent setting is persisted on disk across Neovim instances.

Other commands

:GpStop

Stops all currently running responses and jobs.

:GpInspectPlugin

Inspects the GPT prompt plugin object in a new scratch buffer.

GpDone autocommand

Commands like GpRewrite, GpAppend etc. run asynchronously and fire a GpDone event when they finish, so you can define an autocmd (for example for auto-formatting) to run afterwards:

    vim.api.nvim_create_autocmd({ "User" }, {
        pattern = {"GpDone"},
        callback = function(event)
            print("event fired:\n", vim.inspect(event))
            -- local b = event.buf
            -- DO something
        end,
    })
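For instance, a minimal auto-formatting autocmd could look like this (a sketch assuming you format via Neovim's built-in LSP client; swap in your formatter of choice):

```lua
-- format the affected buffer once gp.nvim finishes writing the response into it
vim.api.nvim_create_autocmd({ "User" }, {
    pattern = { "GpDone" },
    callback = function(event)
        -- event.buf holds the buffer the response was written into
        vim.lsp.buf.format({ bufnr = event.buf })
    end,
})
```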

Custom instructions

By calling :GpContext you can create a .gp.md markdown file in the root of a repository. Commands such as :GpRewrite, :GpAppend etc. will respect the instructions provided in this file (this works better with GPT-4; GPT-3.5 doesn't always follow system instructions). For example:

Use C++17.
Use Testify library when writing Go tests.
Use Early return/Guard Clauses pattern to avoid excessive nesting.
...

Here is another example.

Scripting

The GpDone event plus .gp.md custom instructions make it possible to run gp.nvim from headless (Neo)vim in a terminal or shell script. Put it in a loop and you can let gp run edits across many files.

test file:

1
2
3
4
5

.gp.md file:

If user says hello, please respond with:

```
Ahoy there!
```

calling gp.nvim from terminal/script:

  • register an autocommand to save and quit nvim when Gp is done
  • jump to the occurrence of what you want to rewrite/append/prepend to (in this case the number 3)
  • select the line
  • call the gp.nvim action
$ nvim --headless -c "autocmd User GpDone wq" -c "/3" -c "normal V" -c "GpAppend hello there"  test

resulting test file:

1
2
3
Ahoy there!
4
5
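Wrapped in a loop, the same invocation can edit many files in one go (a sketch; the test* glob and the prompt are placeholders for your own files and instructions):

```shell
#!/bin/sh
# run the same gp.nvim edit over every matching file, one headless nvim per file
for f in test*; do
    nvim --headless \
        -c "autocmd User GpDone wq" \
        -c "/3" -c "normal V" \
        -c "GpAppend hello there" \
        "$f"
done
```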

Shortcuts

There are no default global shortcuts, so they won't clash with your own config. Below are examples for you to adjust or use directly.

Native

You can use the good old vim.keymap.set and paste the following after the require("gp").setup(conf) call (or wherever you keep your shortcuts if you want them in one place).

local function keymapOptions(desc)
    return {
        noremap = true,
        silent = true,
        nowait = true,
        desc = "GPT prompt " .. desc,
    }
end

-- Chat commands
vim.keymap.set({"n", "i"}, "<C-g>c", "<cmd>GpChatNew<cr>", keymapOptions("New Chat"))
vim.keymap.set({"n", "i"}, "<C-g>t", "<cmd>GpChatToggle<cr>", keymapOptions("Toggle Chat"))
vim.keymap.set({"n", "i"}, "<C-g>f", "<cmd>GpChatFinder<cr>", keymapOptions("Chat Finder"))

vim.keymap.set("v", "<C-g>c", ":<C-u>'<,'>GpChatNew<cr>", keymapOptions("Visual Chat New"))
vim.keymap.set("v", "<C-g>p", ":<C-u>'<,'>GpChatPaste<cr>", keymapOptions("Visual Chat Paste"))
vim.keymap.set("v", "<C-g>t", ":<C-u>'<,'>GpChatToggle<cr>", keymapOptions("Visual Toggle Chat"))

vim.keymap.set({ "n", "i" }, "<C-g><C-x>", "<cmd>GpChatNew split<cr>", keymapOptions("New Chat split"))
vim.keymap.set({ "n", "i" }, "<C-g><C-v>", "<cmd>GpChatNew vsplit<cr>", keymapOptions("New Chat vsplit"))
vim.keymap.set({ "n", "i" }, "<C-g><C-t>", "<cmd>GpChatNew tabnew<cr>", keymapOptions("New Chat tabnew"))

vim.keymap.set("v", "<C-g><C-x>", ":<C-u>'<,'>GpChatNew split<cr>", keymapOptions("Visual Chat New split"))
vim.keymap.set("v", "<C-g><C-v>", ":<C-u>'<,'>GpChatNew vsplit<cr>", keymapOptions("Visual Chat New vsplit"))
vim.keymap.set("v", "<C-g><C-t>", ":<C-u>'<,'>GpChatNew tabnew<cr>", keymapOptions("Visual Chat New tabnew"))

-- Prompt commands
vim.keymap.set({"n", "i"}, "<C-g>r", "<cmd>GpRewrite<cr>", keymapOptions("Inline Rewrite"))
vim.keymap.set({"n", "i"}, "<C-g>a", "<cmd>GpAppend<cr>", keymapOptions("Append (after)"))
vim.keymap.set({"n", "i"}, "<C-g>b", "<cmd>GpPrepend<cr>", keymapOptions("Prepend (before)"))

vim.keymap.set("v", "<C-g>r", ":<C-u>'<,'>GpRewrite<cr>", keymapOptions("Visual Rewrite"))
vim.keymap.set("v", "<C-g>a", ":<C-u>'<,'>GpAppend<cr>", keymapOptions("Visual Append (after)"))
vim.keymap.set("v", "<C-g>b", ":<C-u>'<,'>GpPrepend<cr>", keymapOptions("Visual Prepend (before)"))
vim.keymap.set("v", "<C-g>i", ":<C-u>'<,'>GpImplement<cr>", keymapOptions("Implement selection"))

vim.keymap.set({"n", "i"}, "<C-g>gp", "<cmd>GpPopup<cr>", keymapOptions("Popup"))
vim.keymap.set({"n", "i"}, "<C-g>ge", "<cmd>GpEnew<cr>", keymapOptions("GpEnew"))
vim.keymap.set({"n", "i"}, "<C-g>gn", "<cmd>GpNew<cr>", keymapOptions("GpNew"))
vim.keymap.set({"n", "i"}, "<C-g>gv", "<cmd>GpVnew<cr>", keymapOptions("GpVnew"))
vim.keymap.set({"n", "i"}, "<C-g>gt", "<cmd>GpTabnew<cr>", keymapOptions("GpTabnew"))

vim.keymap.set("v", "<C-g>gp", ":<C-u>'<,'>GpPopup<cr>", keymapOptions("Visual Popup"))
vim.keymap.set("v", "<C-g>ge", ":<C-u>'<,'>GpEnew<cr>", keymapOptions("Visual GpEnew"))
vim.keymap.set("v", "<C-g>gn", ":<C-u>'<,'>GpNew<cr>", keymapOptions("Visual GpNew"))
vim.keymap.set("v", "<C-g>gv", ":<C-u>'<,'>GpVnew<cr>", keymapOptions("Visual GpVnew"))
vim.keymap.set("v", "<C-g>gt", ":<C-u>'<,'>GpTabnew<cr>", keymapOptions("Visual GpTabnew"))

vim.keymap.set({"n", "i"}, "<C-g>x", "<cmd>GpContext<cr>", keymapOptions("Toggle Context"))
vim.keymap.set("v", "<C-g>x", ":<C-u>'<,'>GpContext<cr>", keymapOptions("Visual Toggle Context"))

vim.keymap.set({"n", "i", "v", "x"}, "<C-g>s", "<cmd>GpStop<cr>", keymapOptions("Stop"))
vim.keymap.set({"n", "i", "v", "x"}, "<C-g>n", "<cmd>GpNextAgent<cr>", keymapOptions("Next Agent"))

-- optional Whisper commands with prefix <C-g>w
vim.keymap.set({"n", "i"}, "<C-g>ww", "<cmd>GpWhisper<cr>", keymapOptions("Whisper"))
vim.keymap.set("v", "<C-g>ww", ":<C-u>'<,'>GpWhisper<cr>", keymapOptions("Visual Whisper"))

vim.keymap.set({"n", "i"}, "<C-g>wr", "<cmd>GpWhisperRewrite<cr>", keymapOptions("Whisper Inline Rewrite"))
vim.keymap.set({"n", "i"}, "<C-g>wa", "<cmd>GpWhisperAppend<cr>", keymapOptions("Whisper Append (after)"))
vim.keymap.set({"n", "i"}, "<C-g>wb", "<cmd>GpWhisperPrepend<cr>", keymapOptions("Whisper Prepend (before) "))

vim.keymap.set("v", "<C-g>wr", ":<C-u>'<,'>GpWhisperRewrite<cr>", keymapOptions("Visual Whisper Rewrite"))
vim.keymap.set("v", "<C-g>wa", ":<C-u>'<,'>GpWhisperAppend<cr>", keymapOptions("Visual Whisper Append (after)"))
vim.keymap.set("v", "<C-g>wb", ":<C-u>'<,'>GpWhisperPrepend<cr>", keymapOptions("Visual Whisper Prepend (before)"))

vim.keymap.set({"n", "i"}, "<C-g>wp", "<cmd>GpWhisperPopup<cr>", keymapOptions("Whisper Popup"))
vim.keymap.set({"n", "i"}, "<C-g>we", "<cmd>GpWhisperEnew<cr>", keymapOptions("Whisper Enew"))
vim.keymap.set({"n", "i"}, "<C-g>wn", "<cmd>GpWhisperNew<cr>", keymapOptions("Whisper New"))
vim.keymap.set({"n", "i"}, "<C-g>wv", "<cmd>GpWhisperVnew<cr>", keymapOptions("Whisper Vnew"))
vim.keymap.set({"n", "i"}, "<C-g>wt", "<cmd>GpWhisperTabnew<cr>", keymapOptions("Whisper Tabnew"))

vim.keymap.set("v", "<C-g>wp", ":<C-u>'<,'>GpWhisperPopup<cr>", keymapOptions("Visual Whisper Popup"))
vim.keymap.set("v", "<C-g>we", ":<C-u>'<,'>GpWhisperEnew<cr>", keymapOptions("Visual Whisper Enew"))
vim.keymap.set("v", "<C-g>wn", ":<C-u>'<,'>GpWhisperNew<cr>", keymapOptions("Visual Whisper New"))
vim.keymap.set("v", "<C-g>wv", ":<C-u>'<,'>GpWhisperVnew<cr>", keymapOptions("Visual Whisper Vnew"))
vim.keymap.set("v", "<C-g>wt", ":<C-u>'<,'>GpWhisperTabnew<cr>", keymapOptions("Visual Whisper Tabnew"))

Whichkey

Or go fancier by using the which-key.nvim plugin:

require("which-key").add({
    -- VISUAL mode mappings
    -- s, x, v modes are handled the same way by which_key
    {
        mode = { "v" },
        nowait = true,
        remap = false,
        { "<C-g><C-t>", ":<C-u>'<,'>GpChatNew tabnew<cr>", desc = "ChatNew tabnew" },
        { "<C-g><C-v>", ":<C-u>'<,'>GpChatNew vsplit<cr>", desc = "ChatNew vsplit" },
        { "<C-g><C-x>", ":<C-u>'<,'>GpChatNew split<cr>", desc = "ChatNew split" },
        { "<C-g>a", ":<C-u>'<,'>GpAppend<cr>", desc = "Visual Append (after)" },
        { "<C-g>b", ":<C-u>'<,'>GpPrepend<cr>", desc = "Visual Prepend (before)" },
        { "<C-g>c", ":<C-u>'<,'>GpChatNew<cr>", desc = "Visual Chat New" },
        { "<C-g>g", group = "generate into new .." },
        { "<C-g>ge", ":<C-u>'<,'>GpEnew<cr>", desc = "Visual GpEnew" },
        { "<C-g>gn", ":<C-u>'<,'>GpNew<cr>", desc = "Visual GpNew" },
        { "<C-g>gp", ":<C-u>'<,'>GpPopup<cr>", desc = "Visual Popup" },
        { "<C-g>gt", ":<C-u>'<,'>GpTabnew<cr>", desc = "Visual GpTabnew" },
        { "<C-g>gv", ":<C-u>'<,'>GpVnew<cr>", desc = "Visual GpVnew" },
        { "<C-g>i", ":<C-u>'<,'>GpImplement<cr>", desc = "Implement selection" },
        { "<C-g>n", "<cmd>GpNextAgent<cr>", desc = "Next Agent" },
        { "<C-g>p", ":<C-u>'<,'>GpChatPaste<cr>", desc = "Visual Chat Paste" },
        { "<C-g>r", ":<C-u>'<,'>GpRewrite<cr>", desc = "Visual Rewrite" },
        { "<C-g>s", "<cmd>GpStop<cr>", desc = "GpStop" },
        { "<C-g>t", ":<C-u>'<,'>GpChatToggle<cr>", desc = "Visual Toggle Chat" },
        { "<C-g>w", group = "Whisper" },
        { "<C-g>wa", ":<C-u>'<,'>GpWhisperAppend<cr>", desc = "Whisper Append" },
        { "<C-g>wb", ":<C-u>'<,'>GpWhisperPrepend<cr>", desc = "Whisper Prepend" },
        { "<C-g>we", ":<C-u>'<,'>GpWhisperEnew<cr>", desc = "Whisper Enew" },
        { "<C-g>wn", ":<C-u>'<,'>GpWhisperNew<cr>", desc = "Whisper New" },
        { "<C-g>wp", ":<C-u>'<,'>GpWhisperPopup<cr>", desc = "Whisper Popup" },
        { "<C-g>wr", ":<C-u>'<,'>GpWhisperRewrite<cr>", desc = "Whisper Rewrite" },
        { "<C-g>wt", ":<C-u>'<,'>GpWhisperTabnew<cr>", desc = "Whisper Tabnew" },
        { "<C-g>wv", ":<C-u>'<,'>GpWhisperVnew<cr>", desc = "Whisper Vnew" },
        { "<C-g>ww", ":<C-u>'<,'>GpWhisper<cr>", desc = "Whisper" },
        { "<C-g>x", ":<C-u>'<,'>GpContext<cr>", desc = "Visual GpContext" },
    },

    -- NORMAL mode mappings
    {
        mode = { "n" },
        nowait = true,
        remap = false,
        { "<C-g><C-t>", "<cmd>GpChatNew tabnew<cr>", desc = "New Chat tabnew" },
        { "<C-g><C-v>", "<cmd>GpChatNew vsplit<cr>", desc = "New Chat vsplit" },
        { "<C-g><C-x>", "<cmd>GpChatNew split<cr>", desc = "New Chat split" },
        { "<C-g>a", "<cmd>GpAppend<cr>", desc = "Append (after)" },
        { "<C-g>b", "<cmd>GpPrepend<cr>", desc = "Prepend (before)" },
        { "<C-g>c", "<cmd>GpChatNew<cr>", desc = "New Chat" },
        { "<C-g>f", "<cmd>GpChatFinder<cr>", desc = "Chat Finder" },
        { "<C-g>g", group = "generate into new .." },
        { "<C-g>ge", "<cmd>GpEnew<cr>", desc = "GpEnew" },
        { "<C-g>gn", "<cmd>GpNew<cr>", desc = "GpNew" },
        { "<C-g>gp", "<cmd>GpPopup<cr>", desc = "Popup" },
        { "<C-g>gt", "<cmd>GpTabnew<cr>", desc = "GpTabnew" },
        { "<C-g>gv", "<cmd>GpVnew<cr>", desc = "GpVnew" },
        { "<C-g>n", "<cmd>GpNextAgent<cr>", desc = "Next Agent" },
        { "<C-g>r", "<cmd>GpRewrite<cr>", desc = "Inline Rewrite" },
        { "<C-g>s", "<cmd>GpStop<cr>", desc = "GpStop" },
        { "<C-g>t", "<cmd>GpChatToggle<cr>", desc = "Toggle Chat" },
        { "<C-g>w", group = "Whisper" },
        { "<C-g>wa", "<cmd>GpWhisperAppend<cr>", desc = "Whisper Append (after)" },
        { "<C-g>wb", "<cmd>GpWhisperPrepend<cr>", desc = "Whisper Prepend (before)" },
        { "<C-g>we", "<cmd>GpWhisperEnew<cr>", desc = "Whisper Enew" },
        { "<C-g>wn", "<cmd>GpWhisperNew<cr>", desc = "Whisper New" },
        { "<C-g>wp", "<cmd>GpWhisperPopup<cr>", desc = "Whisper Popup" },
        { "<C-g>wr", "<cmd>GpWhisperRewrite<cr>", desc = "Whisper Inline Rewrite" },
        { "<C-g>wt", "<cmd>GpWhisperTabnew<cr>", desc = "Whisper Tabnew" },
        { "<C-g>wv", "<cmd>GpWhisperVnew<cr>", desc = "Whisper Vnew" },
        { "<C-g>ww", "<cmd>GpWhisper<cr>", desc = "Whisper" },
        { "<C-g>x", "<cmd>GpContext<cr>", desc = "Toggle GpContext" },
    },

    -- INSERT mode mappings
    {
        mode = { "i" },
        nowait = true,
        remap = false,
        { "<C-g><C-t>", "<cmd>GpChatNew tabnew<cr>", desc = "New Chat tabnew" },
        { "<C-g><C-v>", "<cmd>GpChatNew vsplit<cr>", desc = "New Chat vsplit" },
        { "<C-g><C-x>", "<cmd>GpChatNew split<cr>", desc = "New Chat split" },
        { "<C-g>a", "<cmd>GpAppend<cr>", desc = "Append (after)" },
        { "<C-g>b", "<cmd>GpPrepend<cr>", desc = "Prepend (before)" },
        { "<C-g>c", "<cmd>GpChatNew<cr>", desc = "New Chat" },
        { "<C-g>f", "<cmd>GpChatFinder<cr>", desc = "Chat Finder" },
        { "<C-g>g", group = "generate into new .." },
        { "<C-g>ge", "<cmd>GpEnew<cr>", desc = "GpEnew" },
        { "<C-g>gn", "<cmd>GpNew<cr>", desc = "GpNew" },
        { "<C-g>gp", "<cmd>GpPopup<cr>", desc = "Popup" },
        { "<C-g>gt", "<cmd>GpTabnew<cr>", desc = "GpTabnew" },
        { "<C-g>gv", "<cmd>GpVnew<cr>", desc = "GpVnew" },
        { "<C-g>n", "<cmd>GpNextAgent<cr>", desc = "Next Agent" },
        { "<C-g>r", "<cmd>GpRewrite<cr>", desc = "Inline Rewrite" },
        { "<C-g>s", "<cmd>GpStop<cr>", desc = "GpStop" },
        { "<C-g>t", "<cmd>GpChatToggle<cr>", desc = "Toggle Chat" },
        { "<C-g>w", group = "Whisper" },
        { "<C-g>wa", "<cmd>GpWhisperAppend<cr>", desc = "Whisper Append (after)" },
        { "<C-g>wb", "<cmd>GpWhisperPrepend<cr>", desc = "Whisper Prepend (before)" },
        { "<C-g>we", "<cmd>GpWhisperEnew<cr>", desc = "Whisper Enew" },
        { "<C-g>wn", "<cmd>GpWhisperNew<cr>", desc = "Whisper New" },
        { "<C-g>wp", "<cmd>GpWhisperPopup<cr>", desc = "Whisper Popup" },
        { "<C-g>wr", "<cmd>GpWhisperRewrite<cr>", desc = "Whisper Inline Rewrite" },
        { "<C-g>wt", "<cmd>GpWhisperTabnew<cr>", desc = "Whisper Tabnew" },
        { "<C-g>wv", "<cmd>GpWhisperVnew<cr>", desc = "Whisper Vnew" },
        { "<C-g>ww", "<cmd>GpWhisper<cr>", desc = "Whisper" },
        { "<C-g>x", "<cmd>GpContext<cr>", desc = "Toggle GpContext" },
    },
})

Extend functionality

You can extend or override the plugin's functionality by putting your own functions into config.hooks. Hooks have access to everything (see the InspectPlugin example in the defaults) and are automatically registered as commands (GpInspectPlugin).

Here are some more examples:

  • :GpUnitTests

    -- example of adding command which writes unit tests for the selected code
    UnitTests = function(gp, params)
    local template = "I have the following code from {{filename}}:\n\n"
            .. "```{{filetype}}\n{{selection}}\n```\n\n"
            .. "Please respond by writing table driven unit tests for the code above."
        local agent = gp.get_command_agent()
        gp.Prompt(params, gp.Target.vnew, agent, template)
    end,
  • :GpExplain

    -- example of adding command which explains the selected code
    Explain = function(gp, params)
    local template = "I have the following code from {{filename}}:\n\n"
            .. "```{{filetype}}\n{{selection}}\n```\n\n"
            .. "Please respond by explaining the code above."
        local agent = gp.get_chat_agent()
        gp.Prompt(params, gp.Target.popup, agent, template)
    end,
  • :GpCodeReview

    -- example of using enew as a function specifying the filetype for the new buffer
    CodeReview = function(gp, params)
    local template = "I have the following code from {{filename}}:\n\n"
            .. "```{{filetype}}\n{{selection}}\n```\n\n"
            .. "Please analyze for code smells and suggest improvements."
        local agent = gp.get_chat_agent()
        gp.Prompt(params, gp.Target.enew("markdown"), agent, template)
    end,
  • :GpTranslator

    -- example of adding command which opens new chat dedicated for translation
    Translator = function(gp, params)
        local chat_system_prompt = "You are a Translator, please translate between English and Chinese."
        gp.cmd.ChatNew(params, chat_system_prompt)
    
        -- -- you can also create a chat with a specific fixed agent like this:
        -- local agent = gp.get_chat_agent("ChatGPT4o")
        -- gp.cmd.ChatNew(params, chat_system_prompt, agent)
    end,
  • :GpBufferChatNew

    -- example of making :%GpChatNew a dedicated command which
    -- opens new chat with the entire current buffer as a context
    BufferChatNew = function(gp, _)
        -- call GpChatNew command in range mode on whole buffer
        vim.api.nvim_command("%" .. gp.config.cmd_prefix .. "ChatNew")
    end,

The raw plugin text-editing method Prompt has the following signature:

---@param params table  # vim command parameters such as range, args, etc.
---@param target integer | function | table  # where to put the response
---@param agent table  # obtained from get_command_agent or get_chat_agent
---@param template string  # template with model instructions
---@param prompt string | nil  # nil for non-interactive commands
---@param whisper string | nil  # predefined input (e.g. obtained from Whisper)
---@param callback function | nil  # callback(response) after completing the prompt
Prompt(params, target, agent, template, prompt, whisper, callback)
  • params is a table passed to Neovim user commands; Prompt currently uses:

    • range, line1, line2 to work with ranges
    • args so instructions can be passed directly after command (:GpRewrite something something)
    params = {
          args = "",
          bang = false,
          count = -1,
          fargs = {},
          line1 = 1352,
          line2 = 1352,
          mods = "",
          name = "GpChatNew",
          range = 0,
          reg = "",
          smods = {
                browse = false,
                confirm = false,
                emsg_silent = false,
                hide = false,
                horizontal = false,
                keepalt = false,
                keepjumps = false,
                keepmarks = false,
                keeppatterns = false,
                lockmarks = false,
                noautocmd = false,
                noswapfile = false,
                sandbox = false,
                silent = false,
                split = "",
                tab = -1,
                unsilent = false,
                verbose = -1,
                vertical = false
          }
    }
  • target specifying where to direct GPT response

    • enew/new/vnew/tabnew can be used as a function so you can pass in a filetype for the new buffer (enew/enew()/enew("markdown")/..)
    M.Target = {
        rewrite = 0, -- for replacing the selection, range or the current line
        append = 1, -- for appending after the selection, range or the current line
        prepend = 2, -- for prepending before the selection, range or the current line
        popup = 3, -- for writing into the popup window
    
        -- for writing into a new buffer
        ---@param filetype nil | string # nil = same as the original buffer
        ---@return table # a table with type=4 and filetype=filetype
        enew = function(filetype)
            return { type = 4, filetype = filetype }
        end,
    
        --- for creating a new horizontal split
        ---@param filetype nil | string # nil = same as the original buffer
        ---@return table # a table with type=5 and filetype=filetype
        new = function(filetype)
            return { type = 5, filetype = filetype }
        end,
    
        --- for creating a new vertical split
        ---@param filetype nil | string # nil = same as the original buffer
        ---@return table # a table with type=6 and filetype=filetype
        vnew = function(filetype)
            return { type = 6, filetype = filetype }
        end,
    
        --- for creating a new tab
        ---@param filetype nil | string # nil = same as the original buffer
        ---@return table # a table with type=7 and filetype=filetype
        tabnew = function(filetype)
            return { type = 7, filetype = filetype }
        end,
    }
  • agent table obtainable via the get_command_agent and get_chat_agent methods, which have the following signature:

    ---@param name string | nil
    ---@return table # { cmd_prefix, name, model, system_prompt, provider }
    get_command_agent(name)
  • template

    • template of the user message sent to GPT

    • the string can include the variables below:

      name           Description
      {{filetype}}   filetype of the current buffer
      {{filename}}   filename of the current buffer
      {{selection}}  last or currently selected text
      {{command}}    instructions provided by the user
  • prompt

    • string used like a bash/zsh prompt in a terminal, shown when the plugin asks the user for instructions to send to GPT.
    • if nil, the user is not asked to provide input (for specific predefined commands - document this, explain that, write tests ..)
    • a simple 🤖 ~ might be used, or a different message to convey which method is being called
      (🤖 rewrite ~, 🤖 popup ~, 🤖 enew ~, 🤖 inline ~, etc.)
  • whisper

    • optional string serving as a default for the input prompt (for example generated from speech by Whisper)
  • callback

    • optional callback function allowing post-processing logic on the prompt response
      (for example letting the model generate a commit message and using the callback to make the actual commit)
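As a sketch of that callback idea, a hypothetical hook could ask the model for a commit message and commit with it (the hook name and the git invocation are illustrative, not part of the plugin):

```lua
-- hypothetical hook: generate a commit message for staged changes and commit
CommitMessage = function(gp, params)
    local diff = vim.fn.system("git diff --staged")
    local template = "Write a short git commit message for this diff:\n\n" .. diff
    local agent = gp.get_command_agent()
    -- nil prompt = non-interactive; the callback receives the full response
    gp.Prompt(params, gp.Target.popup, agent, template, nil, nil, function(response)
        vim.fn.system({ "git", "commit", "-m", response })
    end)
end,
```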