
[KV Cache Interface] Text Generation & Decoder Engine Implementation #1089

Merged
merged 101 commits into feature/damian/fb_kv_cache from feature/damian/decoder_engine on Jun 28, 2023

Conversation

@dbogunowicz (Contributor) commented Jun 22, 2023

Testing plan:

No-cache inference:

from deepsparse import Pipeline
import time

start = time.time()
opt = Pipeline.create(
    task="opt",
    model_path="/home/ubuntu/damian/sparseml/deployment",
    engine_type="onnxruntime",
    max_generated_tokens=1,
)
prompt = "Who is the president of the United States?"
output = opt(sequences=prompt, return_logits=True)
print(output)

sequences=['\n'] logits=array([[[-12.644863 , -12.9746065,   2.577626 , ..., -13.5366125,
         -13.376596 , -14.587112 ]]], dtype=float32) session_id=None  # same as in PyTorch inference
Ground truth: [-12.6449, -12.9746,   2.5776,  ..., -13.5366, -13.3766, -14.5871]
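
For reference, a minimal sketch of how the logits above could be checked against the PyTorch ground truth; it assumes `output` is the pipeline result shown above, with `logits` exposed as a numpy array.

import numpy as np

# Compare the first three logits of the no-cache run against the PyTorch
# ground-truth values listed above (loose tolerance for the printed floats).
assert np.allclose(
    output.logits[0, 0, :3],
    np.array([-12.6449, -12.9746, 2.5776]),
    atol=1e-3,
)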

Single-token engine decoding only:

from deepsparse import Pipeline

opt = Pipeline.create(
    task="opt",
    model_path="/home/ubuntu/damian/sparseml/deployment",
    engine_type="onnxruntime",
    max_generated_tokens=128,
)
prompt = "Who is the president of the United States?"
output = opt(sequences=prompt)
print(output.sequences)
2023-06-27 07:55:20 deepsparse.transformers.engines.nl_decoder_engine INFO     Overwriting in-place the input shapes of the transformer model at /home/ubuntu/damian/sparseml/deployment/model.onnx
2023-06-27 07:55:24 deepsparse.utils.onnx INFO     Overwriting in-place the batch size of the model at /home/ubuntu/damian/sparseml/deployment/model.onnx
2023-06-27 07:56:37 deepsparse.transformers.engines.nl_decoder_engine INFO     Overwriting in-place the input shapes of the transformer model at /home/ubuntu/damian/sparseml/deployment/model.onnx
2023-06-27 07:56:40 deepsparse.utils.onnx INFO     Overwriting in-place the batch size of the model at /home/ubuntu/damian/sparseml/deployment/model.onnx
['\n\nThe president of the United States is the head of the executive branch of government. The president is the head of the executive branch of government, and the president is the head of the executive branch of government. The president is the head of the executive branch of government, and the president is the head of the executive branch of government.\n\nThe president is the head of the executive branch of government, and the president is the head of the executive branch of government. The president is the head of the executive branch of government, and the president is the head of the executive branch of government. The president is the head of the executive']
Ground truth: The president of the United States is the head of the executive branch of government. The president is the head of the executive branch of government, and the president is the head of the executive branch of government. The president is the head of the executive branch of government, and the president is the head of the executive branch of government.

The president is the head of the executive branch of government, and the president is the head of the executive branch of government. The president is the head of the executive branch of government, and the president is the head of the executive branch of government.

Single-token engine and multi-token engine decoding:

from deepsparse import Pipeline

opt = Pipeline.create(
    task="opt",
    model_path="/home/ubuntu/damian/sparseml/deployment",
    engine_type="onnxruntime",
    max_generated_tokens=128,
)
prompt = "Who is the president of the United States?" * 20
output = opt(sequences=prompt)
print(output.sequences)
2023-06-27 07:57:53 deepsparse.transformers.engines.nl_decoder_engine INFO     Overwriting in-place the input shapes of the transformer model at /home/ubuntu/damian/sparseml/deployment/model.onnx
2023-06-27 07:58:47 deepsparse.utils.onnx INFO     Overwriting in-place the batch size of the model at /home/ubuntu/damian/sparseml/deployment/model.onnx
2023-06-27 07:58:52 deepsparse.transformers.engines.nl_decoder_engine INFO     Overwriting in-place the input shapes of the transformer model at /home/ubuntu/damian/sparseml/deployment/model.onnx
2023-06-27 07:58:58 deepsparse.utils.onnx INFO     Overwriting in-place the batch size of the model at /home/ubuntu/damian/sparseml/deployment/model.onnx
['Who is the president of the United States?Who is the president of the United States?Who is the president of the United States?Who is the president of the United States?Who is the president of the United States?Who is the president of the United States?Who is the president of the United States?Who is the president of the United States?Who is the president of the United States?Who is the president of the United States?Who is the president of the United States?Who is the president of the United States?Who is the president of the United States?Who is the president of the United States?Who is']
Ground truth: Who is the president of the United States?Who is the president of the United States?Who is the president of the United States?Who is the president of the United States?Who is the president of the United States?Who is the president of the United States?Who is the president of the United States?

dbogunowicz and others added 30 commits June 5, 2023 15:55
* initial commit

* coreys simplifications

* finishing the second model static

* ready, time for beautification

* ready for review

* moved the code to examples

* fix eos logic

* add argument num_tokens_to_generate
* initial commit

* coreys simplifications

* finishing the second model static

* ready, time for beautification

* ready for review

* moved the code to examples

* fix eos logic

* add argument num_tokens_to_generate

* initial commit

* change order

* Update examples/codegen/README.md

Co-authored-by: corey-nm <109536191+corey-nm@users.noreply.github.com>

---------

Co-authored-by: corey-nm <109536191+corey-nm@users.noreply.github.com>
@bfineran (Member) left a comment:

Looks great overall, very clean implementation - see comments. I'm most concerned about how we only allow an engine to contain a single cache and a single session; this will not scale. The design looks simple enough that we should probably just expand to basic multiple-session support now (use a dict as suggested in the comment, etc.).

Additionally, let's see what we can do in terms of testing.
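
A rough sketch of the multi-session direction mentioned above: the engine keeps a dict mapping a session id to its own KV cache instead of a single cache. The class and attribute names below are illustrative, not the actual DeepSparse API.

from typing import Dict


class DecoderKVCache:
    """Placeholder standing in for the real per-session cache object."""


class NLDecoderEngine:
    def __init__(self) -> None:
        # One KV cache per session id instead of a single engine-wide cache.
        self._kv_cache_sessions: Dict[str, DecoderKVCache] = {}

    def get_cache(self, session_id: str) -> DecoderKVCache:
        # Lazily create a cache the first time a session id is seen.
        return self._kv_cache_sessions.setdefault(session_id, DecoderKVCache())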

src/deepsparse/transformers/engines/nl_decoder_engine.py
:param engine: The `NLDecoderEngine` to transfer the kv cache state
from
"""
state = engine.kv_cache.cached_inputs
Member:

Let's have this function take the engine's KVCache directly (again, this is because we will have multiple cache objects per engine once we're in multi-session / multi-stream).
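
A minimal sketch of that suggestion, with the helper taking the source KVCache directly; the name `transfer_cache_state`, the `target_engine` parameter, and the `cached_inputs` usage are assumptions drawn from the diff excerpt above.

def transfer_cache_state(cache, target_engine):
    # Illustrative only: read the cached inputs from the KVCache object itself
    # instead of reaching through a source engine, so a single engine can own
    # several cache objects (one per session/stream).
    state = cache.cached_inputs
    target_engine.kv_cache.cached_inputs = state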

Contributor Author:

Ok, but this only works nicely if KVCache tracks the amount of cache that it has processed.

Contributor Author:

Agreement: each KVCache session tracks the number of processed tokens
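
A sketch of that agreement, with every cache session carrying its own counter of processed tokens; method and attribute names are illustrative.

class DecoderKVCache:
    def __init__(self):
        self.cached_inputs = {}
        # Number of tokens this session has already pushed through the engine.
        self.total_num_processed_tokens = 0

    def update(self, new_cached_inputs, num_new_tokens):
        # Store the refreshed cache tensors and advance the per-session counter.
        self.cached_inputs = new_cached_inputs
        self.total_num_processed_tokens += num_new_tokens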

use_deepsparse_cache=use_deepsparse_cache,
)

if self.multitoken_engine.kv_cache_enabled:
Member:

It seems like we're using KV cache enabled as a proxy for whether we want to generate multiple tokens or not, no? Seems like we might want to make this a separate, explicit control.
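
For illustration, the decision could be driven by an explicit setting rather than inferred from the cache state; the helper below is hypothetical and not part of this PR.

def should_generate_multiple_tokens(max_generated_tokens: int,
                                    kv_cache_enabled: bool) -> bool:
    # Hypothetical helper: multi-token generation is requested explicitly via
    # max_generated_tokens and only possible when the KV cache is enabled.
    return max_generated_tokens > 1 and kv_cache_enabled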

sequence of generated tokens and a sequence
of logits for each generated token
"""
if not self.multitoken_engine.kv_cache_enabled:
Member:

Same here, we should just use max generated tokens.

Contributor Author:

if not self.multitoken_engine.kv_cache_enabled and self.max_generated_tokens != 1: error
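
Spelled out, that check could surface as an explicit error at pipeline construction time; the message wording and placement below are illustrative.

if not self.multitoken_engine.kv_cache_enabled and self.max_generated_tokens != 1:
    raise ValueError(
        "The model was loaded without KV cache support, so only a single token "
        f"can be generated, but max_generated_tokens={self.max_generated_tokens} "
        "was requested."
    )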

Base automatically changed from feature/damian/kv_cache_ort to feature/damian/fb_kv_cache June 28, 2023 10:23
@dbogunowicz dbogunowicz merged commit 0809aea into feature/damian/fb_kv_cache Jun 28, 2023
@dbogunowicz dbogunowicz deleted the feature/damian/decoder_engine branch June 28, 2023 12:18
@dbogunowicz dbogunowicz changed the title [WiP] [KV Cache Interface] Text Generation & Decoder Engine Implementation [KV Cache Interface] Text Generation & Decoder Engine Implementation Jun 28, 2023
bfineran added a commit that referenced this pull request Jul 12, 2023
* initial commit

* Update src/deepsparse/license.py

* limit to 150mb

* ready to review

* initial commit

* [Codegen][ORT][Static Seq Length] TextGenerationPipeline (#946)

* initial commit

* coreys simplifications

* finishing the second model static

* ready, time for beautification

* ready for review

* moved the code to examples

* fix eos logic

* add argument num_tokens_to_generate

* [CodeGen][Documentation] (#956)

* initial commit

* coreys simplifications

* finishing the second model static

* ready, time for beautification

* ready for review

* moved the code to examples

* fix eos logic

* add argument num_tokens_to_generate

* initial commit

* change order

* Update examples/codegen/README.md

Co-authored-by: corey-nm <109536191+corey-nm@users.noreply.github.com>

---------

Co-authored-by: corey-nm <109536191+corey-nm@users.noreply.github.com>

* reimplementation for generative pipelines

* restore text generation from examples

* [CodeGen] ONNX model loading to support >2Gb models / two engines (#991)

* refactor successful

* Pipeline fully refactored, time to test engine support. Note: Sliding window not yet implemented!

* First iteration with Sage

* Apply suggestions from code review

* ORT agrees with the Engine. But they both give not entirely correct result. Hey, this is good news still

* dynamic ORT vs static DS

* pipeline handles OPT multitoken pass

* fixes to get static pipeline a little further along

* adjust shapes and slicing to enable static autoregressive pass - ISSUE: tokens past the base seq len are repeated

* migrate from cache_length to positions input

* got it working for multitoken + single token scenario

* cleanup the pipeline

* further cleanup post merge

* Pipeline working for single-token inference only

* do not load the onnx model with external files twice

* pipeline never redundantly saves the external data + more robust tokenizer

* Stop saving tmp files, otherwise the engine looks for external files in the wrong place

* Left pad support

* cleanup

* cleanup2

* Add in pipeline timing

* add in force tokens logic

* remove input validation for text generation pipelines

* remove multitoken support for now

* remove kv cache engine and other fixes

* nest input shape override

* comment out input shape override

* add non batch override for ORT

* clean up generation pipeline

* initial commit

* Update src/deepsparse/license.py

* limit to 150mb

* ready to review

* fix the erroneous Makefile

* perhaps fixed GHA

* take into consideration that GHA creates four files

* initial commit

* tested with actual model

* remove val_inp argument

* Update README.md

* Apply suggestions from code review

* Update README.md

* [BugFix] Update deepsparse dockerfile (#1069)

* Remove autoinstall triggering commands

* Fix typo

* initial implementation

* working implementation for pipeline input

* [Fix] Fix CLI benchmark errors (#1071)

* initial commit

* ready for review

* Update src/deepsparse/utils/onnx.py

* Clean a typo in the pipeline code

* initial commit

* [KV Cache Interface] DecoderKVCache (#1084)

* initial implementation

* initial implementation

* Revert "initial implementation"

This reverts commit 765a5f7.

* Merge DecoderKVCache with KVCacheORT (KVCacheORT will not exist, it is just an abstraction)

* rebase

* add tests

* DecoderKVCache that manipulates cache state and additionally passes info to the engine via KVCache object

* improvements after the sync with Mark

* remove prefill

* fix the computation of total cache capacity

* address PR comments

* [WiP] [KV Cache Interface] Text Generation & Decoder Engine Implementation (#1089)

* initial commit

* Update src/deepsparse/license.py

* limit to 150mb

* ready to review

* initial commit

* [Codegen][ORT][Static Seq Length] TextGenerationPipeline (#946)

* initial commit

* coreys simplifications

* finishing the second model static

* ready, time for beautification

* ready for review

* moved the code to examples

* fix eos logic

* add argument num_tokens_to_generate

* [CodeGen][Documentation] (#956)

* initial commit

* coreys simplifications

* finishing the second model static

* ready, time for beautification

* ready for review

* moved the code to examples

* fix eos logic

* add argument num_tokens_to_generate

* initial commit

* change order

* Update examples/codegen/README.md

Co-authored-by: corey-nm <109536191+corey-nm@users.noreply.github.com>

---------

Co-authored-by: corey-nm <109536191+corey-nm@users.noreply.github.com>

* reimplementation for generative pipelines

* restore text generation from examples

* [CodeGen] ONNX model loading to support >2Gb models / two engines (#991)

* refactor successful

* Pipeline fully refactored, time to test engine support. Note: Sliding window not yet implemented!

* First iteration with Sage

* Apply suggestions from code review

* ORT agrees with the Engine. But they both give not entirely correct result. Hey, this is good news still

* dynamic ORT vs static DS

* pipeline handles OPT multitoken pass

* fixes to get static pipeline a little further along

* adjust shapes and slicing to enable static autoregressive pass - ISSUE: tokens past the base seq len are repeated

* migrate from cache_length to positions input

* got it working for multitoken + single token scenario

* cleanup the pipeline

* further cleanup post merge

* Pipeline working for single-token inference only

* do not load the onnx model with external files twice

* pipeline never redundantly saves the external data + more robust tokenizer

* Stop saving tmp files, otherwise the engine looks for external files in the wrong place

* Left pad support

* cleanup

* cleanup2

* Add in pipeline timing

* add in force tokens logic

* remove input validation for text generation pipelines

* remove multitoken support for now

* remove kv cache engine and other fixes

* nest input shape override

* comment out input shape override

* add non batch override for ORT

* clean up generation pipeline

* initial commit

* Update src/deepsparse/license.py

* limit to 150mb

* ready to review

* fix the erroneous Makefile

* perhaps fixed GHA

* take into consideration that GHA creates four files

* initial commit

* tested with actual model

* remove val_inp argument

* Update README.md

* Apply suggestions from code review

* Update README.md

* initial implementation

* initial implementation

* Revert "initial implementation"

This reverts commit 765a5f7.

* rebase

* add tests

* strip down complexity out of text generation pipeline

* initial implementation

* In a good state for the review on 22.06

* remove files to make review easier

* Revert "remove files to make review easier"

This reverts commit ea82e99.

* Merge DecoderKVCache with KVCacheORT (KVCacheORT will not exist, it is just an abstraction)

* rebase

* add tests

* Delete decoder_kv_cache.py

* Delete test_decoder_kv_cache.py

* DecoderKVCache that manipulates cache state and additionally passes info to the engine via KVCache object

* fix formatting of the transformers/utils/__init__.py

* improvements after the sync with Mark

* All changes applied, time for testing

* Scaffolding to also run multitoken

* add delay_overwriting_inputs

* multitoken is working (although in limited capacity)

* fix no kv cache inference

* Do not create engine if not needed

* remove the prefill option

* fix docstring

* remove prefill

* fix the computation of total cache capacity

* merge

* addressed PR comments

* quality

---------

Co-authored-by: corey-nm <109536191+corey-nm@users.noreply.github.com>
Co-authored-by: Mark Kurtz <mark.kurtz@neuralmagic.com>
Co-authored-by: Benjamin <ben@neuralmagic.com>

* now kv cache decoder holds information about the num of tokens preprocessed. also encountered first bug when running with the engine

* cleanup the old files

* Update src/deepsparse/transformers/engines/nl_decoder_engine.py

* ready for review

* ready for testing

* managed to get first logits right

* Delete example

* cleanup before sharing with Ben and Sage

* Update src/deepsparse/transformers/engines/nl_decoder_engine.py

* assert proper padding on pipeline init

* now also supporting kv cache perplexity. time for cleanup

* ready for review

* correctly print engine info

* work with left padding of the tokenizer

* quality

* fix the multitoken inference

* Perplexity Eval for Text Generation Models (#1073)

* initial commit

* Update src/deepsparse/license.py

* limit to 150mb

* ready to review

* initial commit

* [Codegen][ORT][Static Seq Length] TextGenerationPipeline (#946)

* initial commit

* coreys simplifications

* finishing the second model static

* ready, time for beautification

* ready for review

* moved the code to examples

* fix eos logic

* add argument num_tokens_to_generate

* [CodeGen][Documentation] (#956)

* initial commit

* coreys simplifications

* finishing the second model static

* ready, time for beautification

* ready for review

* moved the code to examples

* fix eos logic

* add argument num_tokens_to_generate

* initial commit

* change order

* Update examples/codegen/README.md

Co-authored-by: corey-nm <109536191+corey-nm@users.noreply.github.com>

---------

Co-authored-by: corey-nm <109536191+corey-nm@users.noreply.github.com>

* reimplementation for generative pipelines

* restore text generation from examples

* [CodeGen] ONNX model loading to support >2Gb models / two engines (#991)

* refactor successful

* Pipeline fully refactored, time to test engine support. Note: Sliding window not yet implemented!

* First iteration with Sage

* Apply suggestions from code review

* ORT agrees with the Engine. But they both give not entirely correct result. Hey, this is good news still

* dynamic ORT vs static DS

* pipeline handles OPT multitoken pass

* fixes to get static pipeline a little further along

* adjust shapes and slicing to enable static autoregressive pass - ISSUE: tokens past the base seq len are repeated

* migrate from cache_length to positions input

* got it working for multitoken + single token scenario

* cleanup the pipeline

* further cleanup post merge

* Pipeline working for single-token inference only

* do not load the onnx model with external files twice

* pipeline never redundantly saves the external data + more robust tokenizer

* Stop saving tmp files, otherwise the engine looks for external files in the wrong place

* Left pad support

* cleanup

* cleanup2

* Add in pipeline timing

* add in force tokens logic

* remove input validation for text generation pipelines

* remove multitoken support for now

* remove kv cache engine and other fixes

* nest input shape override

* comment out input shape override

* add non batch override for ORT

* clean up generation pipeline

* initial commit

* Update src/deepsparse/license.py

* limit to 150mb

* ready to review

* fix the erroneous Makefile

* perhaps fixed GHA

* take into consideration that GHA creates four files

* initial commit

* tested with actual model

* remove val_inp argument

* Update README.md

* Apply suggestions from code review

* Update README.md

* [BugFix] Update deepsparse dockerfile (#1069)

* Remove autoinstall triggering commands

* Fix typo

* initial implementation

* working implementation for pipeline input

* [Fix] Fix CLI benchmark errors (#1071)

* initial commit

* ready for review

* Update src/deepsparse/utils/onnx.py

* Clean a typo in the pipeline code

* cleanup the old files

* Update src/deepsparse/transformers/engines/nl_decoder_engine.py

* ready for review

* ready for testing

* assert proper padding on pipeline init

* now also supporting kv cache perplexity. time for cleanup

* ready for review

* correctly print engine info

* work with left padding of the tokenizer

* quality

* fix the multitoken inference

---------

Co-authored-by: corey-nm <109536191+corey-nm@users.noreply.github.com>
Co-authored-by: Mark Kurtz <mark.kurtz@neuralmagic.com>
Co-authored-by: Benjamin <ben@neuralmagic.com>
Co-authored-by: Rahul Tuli <rahul@neuralmagic.com>

* [Text Generation] Run deepsparse engine without the LIB.kv_cache object (#1108)

* Update src/deepsparse/transformers/engines/nl_decoder_engine.py

* fixed the logic to assert correct multibatch inference

* fix integration tests

* initial implementation

* fix the integration test

* better solution for fixing the issues caused by this PR in GHA

* revert changes to yolo pipeline

* Update src/deepsparse/transformers/engines/nl_decoder_engine.py

Co-authored-by: Rahul Tuli <rahul@neuralmagic.com>

* response to Rahul's comments

---------

Co-authored-by: Mark Kurtz <mark.kurtz@neuralmagic.com>
Co-authored-by: Benjamin <ben@neuralmagic.com>
Co-authored-by: Rahul Tuli <rahul@neuralmagic.com>