
Mixtral-8x7B iq2_xs ppl mismatch from #4856 #5451

Closed
fgdfgfthgr-fox opened this issue Feb 11, 2024 · 1 comment

fgdfgfthgr-fox commented Feb 11, 2024

Information in #4856 suggests that at 2.31 bpw (IQ2_XS) the ppl of Mixtral should be 4.514. However, the actual ppl obtained via the perplexity calculation is much higher.
Below is the command I used to calculate the ppl; I get 5.275 as the result.
The quantized model was obtained directly from https://huggingface.co/ikawrakow/various-2bit-sota-gguf (mixtral-8x7b-2.34bpw.gguf), which should have been quantized with an imatrix. I also quantized a Mixtral model myself with an imatrix; the ppl is slightly different but still around 5.27.
@ikawrakow

$ ./perplexity -m /mnt/2878EBCCAED823C6/koboldcpp-rocm/mixtral/ikaws_mixtral-8x7b-2.34bpw.gguf -f /mnt/2878EBCCAED823C6/Downloads/wiki.test.raw -ngl 33
main: build = 2112 (4b7b38be)
main: built with AMD clang version 17.0.0 (https://github.com/RadeonOpenCompute/llvm-project roc-6.0.0 23483 7208e8d15fbf218deb74483ea8c549c67ca4985e) for x86_64-unknown-linux-gnu
main: seed  = 1707645786
ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 ROCm devices:
  Device 0: AMD Radeon VII, compute capability 9.0, VMM: no
llama_model_loader: loaded meta data with 25 key-value pairs and 995 tensors from /mnt/2878EBCCAED823C6/koboldcpp-rocm/mixtral/ikaws_mixtral-8x7b-2.34bpw.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = hf
llama_model_loader: - kv   2:                       llama.context_length u32              = 32768
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   9:                         llama.expert_count u32              = 8
llama_model_loader: - kv  10:                    llama.expert_used_count u32              = 2
llama_model_loader: - kv  11:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  12:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  13:                          general.file_type u32              = 20
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  16:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  18:                      tokenizer.ggml.merges arr[str,58980]   = ["▁ t", "i n", "e r", "▁ a", "h e...
llama_model_loader: - kv  19:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  21:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  24:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type  f16:   32 tensors
llama_model_loader: - type q2_K:   33 tensors
llama_model_loader: - type q4_K:   32 tensors
llama_model_loader: - type q5_K:    1 tensors
llama_model_loader: - type iq2_xs:  832 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 8
llm_load_print_meta: n_expert_used    = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = IQ2_XS - 2.3125 bpw
llm_load_print_meta: model params     = 46.70 B
llm_load_print_meta: model size       = 12.73 GiB (2.34 BPW) 
llm_load_print_meta: general.name     = hf
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.76 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors:      ROCm0 buffer size = 12995.95 MiB
llm_load_tensors:        CPU buffer size =    41.02 MiB
....................................................................................................
llama_new_context_with_model: n_ctx      = 512
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      ROCm0 KV buffer size =    64.00 MiB
llama_new_context_with_model: KV self size  =   64.00 MiB, K (f16):   32.00 MiB, V (f16):   32.00 MiB
llama_new_context_with_model:  ROCm_Host input buffer size   =     9.01 MiB
llama_new_context_with_model:      ROCm0 compute buffer size =   125.98 MiB
llama_new_context_with_model:  ROCm_Host compute buffer size =     8.80 MiB
llama_new_context_with_model: graph splits (measure): 3

system_info: n_threads = 6 / 12 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | 
perplexity: tokenizing the input ..
perplexity: tokenization took 835.625 ms
perplexity: calculating perplexity over 642 chunks, batch_size=512
perplexity: 3.67 seconds per pass - ETA 39.22 minutes
[1]3.6214,...,[642]5.2750,
Final estimate: PPL = 5.2750 +/- 0.02860

llama_print_timings:        load time =   10845.48 ms
llama_print_timings:      sample time =       0.00 ms /     1 runs   (    0.00 ms per token,      inf tokens per second)
llama_print_timings: prompt eval time = 2477684.02 ms / 328704 tokens (    7.54 ms per token,   132.67 tokens per second)
llama_print_timings:        eval time =       0.00 ms /     1 runs   (    0.00 ms per token,      inf tokens per second)
llama_print_timings:       total time = 2481574.93 ms / 328705 tokens

Just to add, the ppl of iq3_xxs matches #5196, which is around 4.456. I haven't tested other models or quants yet.

In case you ask: no, it has nothing to do with koboldcpp. The models are in the koboldcpp folder, but the actual ppl calculation uses llama.cpp.
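For reference, the chunk count in the log above follows directly from the context size: the perplexity tool scores the test set in non-overlapping windows of `n_ctx` tokens. A small sketch (the `n_chunks` helper is hypothetical, not llama.cpp code; the token count is taken from the log above):

```python
def n_chunks(n_tokens: int, n_ctx: int) -> int:
    # The test text is split into non-overlapping windows of n_ctx tokens,
    # so the number of chunks is the integer quotient.
    return n_tokens // n_ctx

n_tokens = 328704  # wiki.test.raw token count reported in the log above

print(n_chunks(n_tokens, 512))   # 642, matching "calculating perplexity over 642 chunks"
print(n_chunks(n_tokens, 4096))  # 80 chunks at the context size used in #4856
```

This is why a run at `-c 4096` produces far fewer, much longer chunks than the default 512.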

@fgdfgfthgr-fox fgdfgfthgr-fox changed the title Mixtral-8x7B ip2_xs ppl mismatch from the pull request Mixtral-8x7B ip2_xs ppl mismatch from #4856 Feb 11, 2024

fgdfgfthgr-fox commented Feb 12, 2024

Oops, I think I made a mistake. The ppl in #4856 was calculated using a 4096 context size rather than the default (512). I haven't finished recalculating yet, but from what I have so far it should be close to 4.5.
Update: yes, the numbers match.
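The gap between the two numbers is consistent with how perplexity is defined: PPL = exp(mean negative log-likelihood per token), and tokens scored with more preceding context generally get lower NLL, so evaluating at `-c 4096` yields a lower PPL than at the default 512. A toy sketch of the formula (the NLL values are made up for illustration, not real model output):

```python
import math

def perplexity(nlls):
    # PPL = exp(mean negative log-likelihood, in nats per token)
    return math.exp(sum(nlls) / len(nlls))

# Hypothetical per-token NLLs: the same tokens scored with short vs. long context.
short_ctx_nlls = [1.75, 1.70, 1.68, 1.66]  # less context -> harder to predict
long_ctx_nlls  = [1.60, 1.52, 1.48, 1.45]  # more context -> better predictions

assert perplexity(long_ctx_nlls) < perplexity(short_ctx_nlls)
```

So the two runs were measuring different quantities; rerunning with matching context sizes makes the numbers comparable.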
