
Releases: yaroslavyaroslav/OpenAI-sublime-text

4.2.0

05 Oct 19:27

Features

  • New in-buffer mode: phantom
  • stream toggle for responses brought back
  • Image handling UX improved
  • Advertisement logic improved

Deprecated

  • The append, replace and insert prompt modes are deprecated and will be removed in the 5.0 release.
  • The mode: chat_completion attribute of plugin commands (i.e. "command": "openai", "args": { "mode": "chat_completion" }), as it's actually the only mode for communicating with the LLM. The rest of the modes (e.g. handle_image_input, reset_chat_history, refresh_output_panel, create_new_tab) are preserved.

Detailed description

Phantom mode

Phantom is an overlay UI placed inline in the editor view (see the picture below). It doesn't affect the content of the view.

  1. You can set "prompt_mode": "phantom" for the AI assistant in its settings (see the sketch after this list).
  2. [optional] Select some text to pass as additional context to work with.
  3. Hit OpenAI: New Message or OpenAI: Chat Model Select and ask whatever you'd like in the popup input pane.
  4. The phantom will appear below the cursor position, or at the beginning of the selection, while the LLM answer streams in.
  5. You can apply actions to the LLM response; they're quite self-descriptive and follow the behavior of the deprecated in-buffer commands.
  6. You can hit ctrl+c to stop prompting, same as in panel mode.
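A minimal sketch of an assistant entry with phantom mode enabled. Only the "prompt_mode": "phantom" key comes from this release; the surrounding keys (assistants, name, chat_model) are illustrative, so check them against the plugin's default settings:

"assistants": [
    {
        "name": "Example Assistant", // illustrative name
        "chat_model": "gpt-4o-mini", // illustrative model
        "prompt_mode": "phantom"     // the new phantom mode
    }
]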

Stream toggle

You can toggle the streaming behavior of a model's response with the "stream": false setting on a per-assistant basis. That's pretty much it; the default value is true.
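Likewise, a minimal sketch of a per-assistant entry with streaming disabled; only the stream key comes from this release, the rest is illustrative:

"assistants": [
    {
        "name": "Example Assistant", // illustrative
        "stream": false              // defaults to true
    }
]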

Image handling UX improved

Image paths can now be fetched from the clipboard in addition to being extracted from the selection in a given view. The clipboard content can be either a single image path [and nothing more than that] or a list of such paths separated by newlines, e.g. /Users/username/Documents/Project/image0.png\n/Users/username/Documents/Project/image1.png.

Please note that the parser which tries to deduce whether your clipboard content is a [list of] image[s] was written by AI and is quite fragile, so don't expect too much from it.

Advertisement logic improvement

Advertisements now appear only when users excessively utilize the plugin, such as by processing too many tokens or sending/receiving an excessive number of messages.

Full Changelog: 4.1.0...4.2.0

4.1.0

26 Aug 11:03


Features

  • Image handling support added (model support is required for this feature; currently I've tested it only with the gpt-4o[-mini] models). It can be invoked with the OpenAI: Handle Image command.

It expects an absolute path to an image to be selected in a buffer when the command is called (something like /Users/username/Documents/Project/image.png). In addition, a prompt can be passed via the input panel to process the image with special treatment. Only png and jpg images are supported.

Warning

The user flow doesn't expect the image path to be passed via that input panel; it has to be selected in the buffer. I'm aware of the UX quality of this feature, but for now I'm too lazy to develop it further into a better state.

Note

I think this plugin is in its final state, meaning I have no further development of it planned. I still plan to fix bugs, if any, but the tons of little enhancements that could be applied to fix minor issues and roughness here and there most likely never will be.
What I do have planned is to implement an ST front end for the plandex tool, based on some parts of this plugin's codebase, to get (and to bring) fancy and powerful agent-ish capabilities to the ST ecosystem. So stay tuned.

4.0.0 Features

Well, you know all of them:

  1. Dedicated history and assistant settings instances for projects.
  2. Multiple file selection to send files to a server as the context of a request.
  3. Tokens [approximate] calculation.
  4. Tab presentation of a chat and all the features that come along with it for free:
    • text search,
    • fast symbol navigation panel,
    • super+s chat saving,
    • view presentation setup options (see Default Settings for details).
  5. The ability to seamlessly mix different service providers within a single session, e.g. switching from gpt-4-turbo to a locally running llama-3-8b for one request and back for another.

Claim

This is a really huge update, so if you're happy with it, this is the place and time to donate.

Breaking changes

output.OpenAI Chat was renamed to output.AI Chat; please update your output panel key binding to handle this change (example below).
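For instance, a show_panel key binding only needs its panel argument updated; the keys below are illustrative:

{
    "keys": ["ctrl+alt+o"],               // illustrative binding
    "command": "show_panel",
    "args": { "panel": "output.AI Chat" } // was "output.OpenAI Chat"
}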

Details

Dedicated history and settings

That's it: you can treat a given project separately, giving it a dedicated chat history and settings/model. To make it even more exciting — you can use entirely different LLM providers for different projects.

To set things up, please follow the manual in the Readme.
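As a hedged sketch only (the authoritative key names live in the Readme), a per-project override goes into the project's .sublime-project file under its settings block; the keys below are hypothetical:

{
    "settings": {
        // hypothetical keys — consult the Readme for the real ones
        "ai_assistant": {
            "cache_prefix": "my_project"
        }
    }
}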

Multiple files selection

Now you can send whole files to a server. The UX for that is not as straightforward as I wanted it to be, but this is what we're working with 💁🏻‍♂️.

So basically you have to activate all the tabs that you want to append to a request and run the [New Message|Chat Model] with Sheets command. It looks quite weird with more than 4 tabs selected, but it's still convenient in some way, since you're able to preview the content of each file to be sent beforehand.

Pay attention: it should be exactly a Tab, and all of those tabs should be placed within a single Group.

To see a detailed example, please follow the manual in the Readme.

Tokens [approximate] calculation

Finally! The author has studied calculus and now he's able to divide a number by 4. Such science!

Jokes aside, this implementation is quite raw, but it still tells you how much you've already sent to the server during the session, and estimates how much you will send with the very next request (hint: it's the left value).
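Assuming the estimate follows the common ~4 characters per token heuristic the joke alludes to:

tokens ≈ characters / 4    // e.g. a 2,000-character request ≈ 500 tokens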

This feature comes in really handy when you're working with expensive models like GPT-4. Believe me, I implemented it after I spent $40 in a few days by blindly selecting a bunch of big files and sending them to the server.

The token amounts show automatically when the AI Chat tab is active and are hidden otherwise.

Tab representation of chat

Output panel capabilities are limited in ST; it lacks navigation within its content. So now you can switch to a tab and get for free all the power that an ordinary ST tab has: full text search, heading navigation, one-keystroke chat saving.

To achieve that, type "Open in Tab" in the command palette and that's it. I strongly recommend using it in a separate view group (as on the main screenshot in the repo) to get the best of it, but you're not forced to.

A bunch of new presentation settings have appeared because of this, to maximize the content space and to reduce all the unnecessary chrome. In short, you can toggle off the line numbers display and the whole gutter itself. For more details, please look into the default settings documentation.
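Assuming these options map onto Sublime's standard view settings (check the plugin's Default Settings for its actual key names), the effect is equivalent to:

{
    "line_numbers": false, // hide the line numbers column
    "gutter": false        // hide the whole gutter
}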

Hot model switching within a chat

It's as simple as it is powerful. You are able to change models and even services on the fly. You can start your chat with the gpt-4-turbo model, then switch to your local llama3 instance, then to whatever third-party service, then back to gpt-4-turbo.

And you can achieve that in a straightforward and convenient way: simply pick another predefined model from "OpenAI: Chat Model Select" over and over again.

Important

If you have so much time that you've read this far, you're certainly too rich and carefree, and thus have to donate.

Full Changelog: 4.0.1...4.1.0

4.0.0

07 May 20:11

Features

Well, you know all of them:

  1. Dedicated history and assistant settings instances for projects.
  2. Multiple file selection to send files to a server as the context of a request.
  3. Tokens [approximate] calculation.
  4. Tab presentation of a chat and all the features that come along with it for free:
    • text search,
    • fast symbol navigation panel,
    • super+s chat saving,
    • view presentation setup options (see Default Settings for details).
  5. The ability to seamlessly mix different service providers within a single session, e.g. switching from gpt-4-turbo to a locally running llama-3-8b for one request and back for another.

Claim

Important

This is a really huge update, so if you're happy with it, this is the place and time to donate.

Breaking changes

output.OpenAI Chat was renamed to output.AI Chat; please update your output panel key binding to handle this change.

Details

Dedicated history and settings

That's it: you can treat a given project separately, giving it a dedicated chat history and settings/model. To make it even more exciting — you can use entirely different LLM providers for different projects.

To set things up, please follow the manual in the Readme.

Multiple files selection

Now you can send whole files to a server. The UX for that is not as straightforward as I wanted it to be, but this is what we're working with 💁🏻‍♂️.

So basically you have to activate all the tabs that you want to append to a request and run the [New Message|Chat Model] with Sheets command. It looks quite weird with more than 4 tabs selected, but it's still convenient in some way, since you're able to preview the content of each file to be sent beforehand.

Pay attention: it should be exactly a Tab, and all of those tabs should be placed within a single Group.

To see a detailed example, please follow the manual in the Readme.

Tokens [approximate] calculation

Finally! The author has studied calculus and now he's able to divide a number by 4. Such science!

Jokes aside, this implementation is quite raw, but it still tells you how much you've already sent to the server during the session, and estimates how much you will send with the very next request (hint: it's the left value).

This feature comes in really handy when you're working with expensive models like GPT-4. Believe me, I implemented it after I spent $40 in a few days by blindly selecting a bunch of big files and sending them to the server.

The token amounts show automatically when the AI Chat tab is active and are hidden otherwise.

Tab representation of chat

Output panel capabilities are limited in ST; it lacks navigation within its content. So now you can switch to a tab and get for free all the power that an ordinary ST tab has: full text search, heading navigation, one-keystroke chat saving.

To achieve that, type "Open in Tab" in the command palette and that's it. I strongly recommend using it in a separate view group (as on the main screenshot in the repo) to get the best of it, but you're not forced to.

A bunch of new presentation settings have appeared because of this, to maximize the content space and to reduce all the unnecessary chrome. In short, you can toggle off the line numbers display and the whole gutter itself. For more details, please look into the default settings documentation.

Hot model switching within a chat

It's as simple as it is powerful. You are able to change models and even services on the fly. You can start your chat with the gpt-4-turbo model, then switch to your local llama3 instance, then to whatever third-party service, then back to gpt-4-turbo.

And you can achieve that in a straightforward and convenient way: simply pick another predefined model from "OpenAI: Chat Model Select" over and over again.

Important

If you have so much time that you've read this far, you're certainly too rich and carefree, and thus have to donate.

Chat completion streaming support

07 Jul 21:21

Features

  • Completion streaming support.
  • Drop the 2 oldest replies from the plugin's dialogue cache.

Completion streaming support.

Yep, you've heard it right. That new cool shiny way of answering that you see in the original OpenAI chat now comes to Sublime. Embrace, behold and all that stuff. Jokes aside — this thing alone makes GPT-4 completion workable, by relieving its most significant tradeoff: long answering time. I mean, GPT-4's answering time is still the same, but now you start to see the response up to 20 seconds earlier, which matters a lot in terms of UX.

Drop the 2 oldest replies from the plugin's dialogue cache.

Now, if you reach the context window limit, you're asked whether or not you wish to delete the 2 oldest messages (1 of yours and 1 from the assistant) to shorten the chat history. If yes, the plugin drops them and resends all the remaining chat history to the OpenAI servers once again. This is recursive and will spit the popup in your face until the chat history fits within the given model's context window again. On cancel it does nothing, as expected.
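Conceptually, the dialogue cache is a list of messages in OpenAI's role/content format, and each round of trimming drops the two oldest entries; the contents below are purely illustrative:

[
    { "role": "user", "content": "oldest question" },    // dropped
    { "role": "assistant", "content": "oldest answer" }, // dropped
    { "role": "user", "content": "newer question" },     // kept
    { "role": "assistant", "content": "newer answer" }   // kept
]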

PS: As usual, if you have any issues, feel free to open an issue here.
PS2: If you feel happy with this plugin, you can drop me some coins to pay my OpenAI bills on Ethereum here (including L2 chains): 0x60843b4026Ff630b36835a8b78561eDD559ab208.

Full Changelog: 2.0.5...2.1.0

ChatGPT support

25 Apr 19:59

What's new

  • ChatGPT mode support.
  • [Multi]Markdown syntax with syntax highlight support (ChatGPT mode only).
  • Proxy support.
  • GPT-4 support.

ChatGPT mode

ChatGPT mode works the following way:

  1. Run the OpenAI: New Message command.
  2. Wait until OpenAI delivers a response (be VERY patient in the case of the GPT-4 model; it's way slower than you could imagine).
  3. On response, the plugin opens the OpenAI completion output panel with the whole log of your chat in the [currently] active Window.
  4. If you'd like to fetch the chat history into another window manually, you can do that by running the OpenAI: Refresh Chat command.
  5. When you're done or want to start all over, run the OpenAI: Reset Chat History command, which deletes the chat cache.

You can bind both of the most used commands, OpenAI: New Message and OpenAI: Show output panel; to do that, please follow Settings->Package Control->OpenAI completion->Key Bindings.
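A sketch of what such bindings could look like. The keys are illustrative; the "command": "openai" format follows the plugin's documented command interface, and the panel name matches this release era (it was later renamed to output.AI Chat in 4.0.0):

[
    { "keys": ["super+k", "super+m"], "command": "openai", "args": { "mode": "chat_completion" } },
    { "keys": ["super+k", "super+o"], "command": "show_panel", "args": { "panel": "output.OpenAI Chat" } }
]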

As for now there's just a single history instance. I guess this limitation will disappear sometime, but it's highly unlikely to be soon.

[Multi]Markdown syntax with syntax highlight support (ChatGPT mode only).

ChatGPT output panel supports markdown syntax highlight. It should just work (if it's not please report an issue).

That said, it's highly recommended to install MultimarkdownEditing to apply syntax highlighting to the code snippets provided by ChatGPT. OpenAI completion should pick it up implicitly for the output panel content.

Proxy support

That's it, now you can set up a proxy for this plugin.
You can set it up by overriding the proxy property in the OpenAI completion settings like so:

"proxy": {
    "address": "127.0.0.1",
    "port": 9898
}

GPT-4 support

It should just work; just set the chat_model setting to GPT-4. Please be patient while working with it: it's very slow, and an answer appears only after the model finishes the whole completion. It could easily take up to 10 seconds.
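A minimal sketch; the chat_model key comes from this note, and "gpt-4" is the identifier the OpenAI API expects for this model:

"chat_model": "gpt-4" // expect slow responses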

Disclaimer

Unfortunately, this version hasn't been covered with comprehensive testing, so there could be bugs. Please report them, and I'll be happy to release a patch.

Full Changelog: 1.1.4...2.0.0