
Chunk download #429

Merged
merged 15 commits into main from chunk-download on Feb 12, 2024

Conversation

@horheynm (Member) commented Feb 5, 2024

Description

Download files in chunks when the endpoint supports chunked downloads.

Problem

Downloading large files tends to fail.

Solution

Add chunked download; if a chunk fails, re-download it, up to N retries.
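
To make this concrete, a minimal sketch of one chunk download with retries, assuming the endpoint supports HTTP Range requests; download_chunk is an illustrative name, not the actual sparsezoo API:

import requests

def download_chunk(url: str, start: int, end: int, dest: str, max_retries: int = 3) -> None:
    # Hypothetical helper: fetch only the bytes [start, end] of the file.
    # The server signals range support via the Accept-Ranges header.
    headers = {"Range": f"bytes={start}-{end}"}
    for attempt in range(max_retries):
        try:
            response = requests.get(url, headers=headers, stream=True, timeout=30)
            response.raise_for_status()
            with open(dest, "wb") as chunk_file:
                for data in response.iter_content(chunk_size=1024 * 1024):
                    chunk_file.write(data)
            return
        except requests.RequestException:
            if attempt == max_retries - 1:
                raise  # give up after N failed attempts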

Design

Use threads to speed up downloads. Download speed is limited by network bandwidth and by write speed to disk. Disk write speed varies with system load, but can be treated as roughly constant here.

Any download is a job. For a chunked download, the chunk files must also be combined and then deleted, so those steps are jobs too.
We have two levels of queues:

  1. Job queue - its jobs run concurrently; it does not matter which file finishes downloading first
  2. Queue of job queues - each job queue runs only after the previous one finishes; this is what lets chunk-combining wait for all chunk downloads

Ex.
job_queue1 = Queue()
download_job1 = Job(...)
download_job2 = Job(...)
job_queue1.put(download_job1)  # Queue.put returns None, so the calls cannot be chained
job_queue1.put(download_job2)

job_queue2 = Queue()
combine_job1 = Job(...)
job_queue2.put(combine_job1)

job_queues = Queue()
job_queues.put(job_queue1)
job_queues.put(job_queue2)  # runs only after the previous job_queue is done, guaranteeing all chunks are downloaded first
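
A minimal sketch of the queue-of-queues runner implied by the example, assuming each Job exposes a run() method; the names here are illustrative, not the actual implementation in src/sparsezoo/utils/download.py:

from concurrent.futures import ThreadPoolExecutor
from queue import Queue

def run_job_queues(job_queues: Queue, num_workers: int = 10) -> None:
    # Drain the outer queue sequentially: each inner job queue starts only
    # after the previous one has fully finished, so combine jobs always see
    # every chunk already on disk.
    while not job_queues.empty():
        job_queue = job_queues.get()
        with ThreadPoolExecutor(max_workers=num_workers) as executor:
            # jobs within one queue run concurrently
            futures = [
                executor.submit(job_queue.get().run)
                for _ in range(job_queue.qsize())
            ]
            for future in futures:
                future.result()  # propagate any worker exception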

Usage:

url = "https://download-some-file.com/"
dest_path = "your/local/folder"
max_retries= 3
downloader = Downloader(
    url=url_path, download_path=dest_path, max_retries=num_retries
)
downloader.download()

Code

stub = "zoo:llama2-7b-ultrachat200k_llama2_pretrain-pruned80"
print(stub)

model = Model(stub)
p = model.onnx_model.path

Output:

(.venv) george@gpuserver6:~/sparsezoo$ python3 -m scratch.down
zoo:llama2-7b-ultrachat200k_llama2_pretrain-pruned80
Downloading Chunks: 100%|█████████████████████████████████████████████████████████| 5.27G/5.27G [00:47<00:00, 112MB/s]
Combining Chunks: 100%|██████████████████████████████████████████████████████████| 5.27G/5.27G [00:05<00:00, 1.01GB/s]
(.venv) george@gpuserver6:~/sparsezoo$ 

Testing

Basic URL mock and download count; the actual download path is covered in e2e tests.
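
A rough sketch of such a test, assuming the downloader issues its HTTP calls through requests.get; the patch target, response shape, and import path are guesses about the internals rather than the actual test:

from unittest import mock

from sparsezoo.utils.download import Downloader  # module path per this PR

def test_download_request_count(tmp_path):
    body = b"x" * 100
    response = mock.MagicMock()
    response.status_code = 200
    response.headers = {"Content-Length": str(len(body)), "Accept-Ranges": "bytes"}
    response.iter_content.return_value = [body]

    with mock.patch("requests.get", return_value=response) as mocked_get:
        Downloader(
            url="https://mock-url/file",
            download_path=str(tmp_path / "file"),
            max_retries=3,
        ).download()

    # at least one probe for the file size, plus one range request per chunk;
    # the exact count depends on the internal chunk size
    assert mocked_get.call_count >= 1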

@horheynm horheynm marked this pull request as draft February 6, 2024 16:34
@horheynm horheynm changed the title chunk download, break down into 10 Chunk download Feb 9, 2024
@horheynm horheynm marked this pull request as ready for review February 9, 2024 19:18
dbogunowicz previously approved these changes Feb 12, 2024
@dbogunowicz (Contributor) commented:

So @horheynm, just so I understand: the logic now computes the number of chunks to be downloaded, uses concurrency to download all the chunks asynchronously, and the last job is there to "concatenate" all the chunks into a single file, right?

bfineran previously approved these changes Feb 12, 2024

@bfineran (Member) left a comment:

Looks great @horheynm, very easy to follow. It would be great to add some simple unit tests to confirm the chunking is happening; e2e functionality is definitely thoroughly covered, since every other pathway uses download.

@horheynm (Member, Author) replied:

> So @horheynm, just so I understand: the logic now computes the number of chunks to be downloaded, uses concurrency to download all the chunks asynchronously, and the last job is there to "concatenate" all the chunks into a single file, right?

Yes.
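
For illustration, a sketch of that chunk computation, with hypothetical names (the real logic lives in src/sparsezoo/utils/download.py); note it yields a single job when the file is smaller than one chunk, the case a later commit in this PR fixes:

def chunk_ranges(file_size: int, chunk_size: int) -> list:
    # Split [0, file_size) into inclusive byte ranges of at most chunk_size.
    ranges = []
    start = 0
    while start < file_size:
        end = min(start + chunk_size, file_size) - 1
        ranges.append((start, end))
        start = end + 1
    return ranges

# e.g. a 5 GiB file with 512 MiB chunks -> 10 range requests
print(len(chunk_ranges(5 * 2**30, 512 * 2**20)))  # 10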

@bfineran bfineran merged commit e2924c1 into main Feb 12, 2024
4 checks passed
@bfineran bfineran deleted the chunk-download branch February 12, 2024 21:04
bfineran pushed a commit that referenced this pull request Feb 13, 2024
* chunk download, break down into 10

* lint

* threads download

* draft

* chunk download draft

* job based download and combining/deleteing chunks

* delete old code

* lint

* fix num jobs if file_size is less than the chunk size

* doc string and return types

* test

* lint
Satrat added a commit that referenced this pull request Feb 22, 2024
* `RegistryMixin` improved alias management (#404)

* initial commit

* add docstrings

* simplify

* hardening

* refactor

* format registry lookup strings to be lowercases

* standardise aliases

* Move evaluator registry (#411)

* More control over external data size (#412)

* When splitting external data, avoid renaming `model.data` to `model.data.1` if only one external data file gets eventually saved (#414)

* [model.download] fix function returning nothing (#420)

* [BugFix] Path not expanded (#418)

* [Fix] Allow for processing Path in the sparsezoo analysis (#417)

* Raise TypeError instead of ValueError (#426)

* Fix misleading docstring (#416)

Add test

* add support for benchmark.yaml (#415)

* add support for benchmark.yaml

recent zoo models use `benchmark.yaml` instead of `benchmarks.yaml`. adding this additional pathway so `benchmark.yaml` is downloaded in the bulk model download

* update files filter

* fix tests

---------

Co-authored-by: dbogunowicz <damian@neuralmagic.com>

* [BugFix] Add analyze to init (#421)

* Add analyze to init

* Move onnxruntime to deps

* Print model analysis (#423)

* [model.download] fix function returning nothing (#420)

* [BugFix] Path not expanded (#418)

* print model-analysis

* [Fix] Allow for processing Path in the sparsezoo analysis (#417)

* add print statement at the end of cli run

---------

Co-authored-by: Dipika Sikka <dipikasikka1@gmail.com>
Co-authored-by: Rahul Tuli <rahul@neuralmagic.com>
Co-authored-by: dbogunowicz <97082108+dbogunowicz@users.noreply.github.com>

* Omit scalar weight (#424)

* ommit scalar weights:

* remove unwanted files

* comment

* Update src/sparsezoo/utils/onnx/analysis.py

Co-authored-by: Benjamin Fineran <bfineran@users.noreply.github.com>

---------

Co-authored-by: Benjamin Fineran <bfineran@users.noreply.github.com>

---------

Co-authored-by: George <george@neuralmagic.com>
Co-authored-by: Dipika Sikka <dipikasikka1@gmail.com>
Co-authored-by: dbogunowicz <97082108+dbogunowicz@users.noreply.github.com>
Co-authored-by: Benjamin Fineran <bfineran@users.noreply.github.com>

* update analyze help message for correctness (#432)

* initial commit (#430)

* [sparsezoo.analyze] Fix pathway such that it works for larger models (#437)

* fix analyze to work with larger models

* update for failing tests; add comments

* Update src/sparsezoo/utils/onnx/external_data.py

Co-authored-by: dbogunowicz <97082108+dbogunowicz@users.noreply.github.com>

---------

Co-authored-by: Dipika Sikka <dipikasikka1@gmail.coom>
Co-authored-by: dbogunowicz <97082108+dbogunowicz@users.noreply.github.com>

* Delete hehe.py (#439)

* Download deployment dir for llms (#435)

* Download deployment dir for llms

* Use path instead of download

* only set save_as_external_data to true if the model originally had external data (#442)

* Add Channel Wise Quantization Support (#441)

* Chunk download (#429)

* chunk download, break down into 10

* lint

* threads download

* draft

* chunk download draft

* job based download and combining/deleteing chunks

* delete old code

* lint

* fix num jobs if file_size is less than the chunk size

* doc string and return types

* test

* lint

* fix type hints (#445)

* fix bug if the value is a dict (#447)

* [deepsparse.analyze] Fix v1 functionality to  work with llms (#451)

* fix equivalent changes made to analyze_v2 such that inference session works for llms; update wanrings to be debug printouts

* typo

* overwrite file (#450)

Co-authored-by: 21 <a21@21s-MacBook-Pro.local>

* Adds a `numpy_array_representer` to yaml (#454)

on runtime, to avoid serialization issues

* Avoid division by zero (#457)

Avoid log of zero

* op analysis total counts had double sparse counts (#461)

* Rename legacy analyze to analyze_v1 (#459)

* Fixing Quant % Calcuation (#462)

* initial fix

* style

* Include Sparsity in Size Calculation (#463)

* initial fix

* style

* incorporate sparsity into size calculation

* quality

* op analysis total counts had double sparse counts (#461)

* Fixing Quant % Calcuation (#462)

* initial fix

* style

* Include Sparsity in Size Calculation (#463)

* initial fix

* style

* incorporate sparsity into size calculation

* quality

* Revert "Merge branch 'main' into analyze_cherry_picks"

This reverts commit 509fa1a, reversing
changes made to 08f94c4.

---------

Co-authored-by: dbogunowicz <97082108+dbogunowicz@users.noreply.github.com>
Co-authored-by: Rahul Tuli <rahul@neuralmagic.com>
Co-authored-by: Dipika Sikka <dipikasikka1@gmail.com>
Co-authored-by: Benjamin Fineran <bfineran@users.noreply.github.com>
Co-authored-by: dbogunowicz <damian@neuralmagic.com>
Co-authored-by: George <george@neuralmagic.com>
Co-authored-by: Dipika Sikka <dipikasikka1@gmail.coom>
Co-authored-by: 21 <a21@21s-MacBook-Pro.local>