
Error while converting peft finetuned merged model to gguf #1331

Open
saivineethabvns opened this issue Mar 21, 2025 · 0 comments
I have finetuned the Llama 2 7B HF model using PEFT, merged the adapter with the base model, and tried converting it to GGUF with llama.cpp. It gives the following error:

python llama.cpp/convert_hf_to_gguf.py  merged_model
INFO:hf-to-gguf:Loading model: merged_model
INFO:gguf.gguf_writer:gguf: This GGUF file is for Little Endian only
INFO:hf-to-gguf:Exporting model...
INFO:hf-to-gguf:gguf: loading model part 'model.safetensors'
INFO:hf-to-gguf:output.weight,               torch.float16 --> F16, shape = {4096, 32002}
INFO:hf-to-gguf:token_embd.weight,           torch.float16 --> F16, shape = {4096, 32002}
INFO:hf-to-gguf:blk.0.attn_norm.weight,      torch.float16 --> F32, shape = {4096}
INFO:hf-to-gguf:blk.0.ffn_down.weight,       torch.uint8 --> F16, shape = {1, 22544384}
Traceback (most recent call last):
  File "/home/abc/fink/llama.cpp/convert_hf_to_gguf.py", line 5378, in <module>
    main()
  File "/home/abc/fink/llama.cpp/convert_hf_to_gguf.py", line 5372, in main
    model_instance.write()
  File "/home/abc/fink/llama.cpp/convert_hf_to_gguf.py", line 440, in write
    self.prepare_tensors()
  File "/home/abc/fink/llama.cpp/convert_hf_to_gguf.py", line 1737, in prepare_tensors
    super().prepare_tensors()
  File "/home/abc/fink/llama.cpp/convert_hf_to_gguf.py", line 299, in prepare_tensors
    for new_name, data_torch in (self.modify_tensors(data_torch, name, bid)):
  File "/home/abc/fink/llama.cpp/convert_hf_to_gguf.py", line 1705, in modify_tensors
    return [(self.map_tensor_name(name), data_torch)]
  File "/home/abc/fink/llama.cpp/convert_hf_to_gguf.py", line 215, in map_tensor_name
    raise ValueError(f"Can not map tensor {name!r}")
ValueError: Can not map tensor 'model.layers.0.mlp.down_proj.weight.absmax'
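The `.absmax` suffix (together with the `torch.uint8` tensor in the log above) suggests the merged checkpoint still contains bitsandbytes quantization state rather than plain weights, which `convert_hf_to_gguf.py` has no mapping for. A minimal sketch for spotting such residue in a checkpoint's tensor names before converting — only `.absmax` appears in this log; the other suffixes and the helper name `list_quantization_residue` are assumptions about bitsandbytes' 4-bit naming, not part of llama.cpp:

```python
# Sketch: flag tensor names that look like bitsandbytes quantization
# state rather than plain weights. Only ".absmax" occurs in the log
# above; the other suffixes are assumed from bitsandbytes' 4-bit format.
QUANT_SUFFIXES = (".absmax", ".quant_map", ".quant_state", ".nested_absmax")

def list_quantization_residue(tensor_names):
    """Return the names that carry quantization state, e.g.
    'model.layers.0.mlp.down_proj.weight.absmax'."""
    return [n for n in tensor_names if n.endswith(QUANT_SUFFIXES)]

# Example with names taken from the traceback above:
names = [
    "model.layers.0.mlp.down_proj.weight",
    "model.layers.0.mlp.down_proj.weight.absmax",
]
print(list_quantization_residue(names))
```

In practice the names could come from `safetensors.safe_open("merged_model/model.safetensors", framework="pt").keys()`. If residue shows up, a common remedy is to reload the base model in full precision (fp16/bf16) instead of 4-bit before calling PEFT's `merge_and_unload()`, so the saved merge contains only plain weight tensors.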
