I have fine-tuned the Llama 2 7B HF model using PEFT, merged the PEFT adapter with the base model, and tried converting it to GGUF using llama.cpp. It gives the following error:
python llama.cpp/convert_hf_to_gguf.py merged_model
INFO:hf-to-gguf:Loading model: merged_model
INFO:gguf.gguf_writer:gguf: This GGUF file is for Little Endian only
INFO:hf-to-gguf:Exporting model...
INFO:hf-to-gguf:gguf: loading model part 'model.safetensors'
INFO:hf-to-gguf:output.weight, torch.float16 --> F16, shape = {4096, 32002}
INFO:hf-to-gguf:token_embd.weight, torch.float16 --> F16, shape = {4096, 32002}
INFO:hf-to-gguf:blk.0.attn_norm.weight, torch.float16 --> F32, shape = {4096}
INFO:hf-to-gguf:blk.0.ffn_down.weight, torch.uint8 --> F16, shape = {1, 22544384}
Traceback (most recent call last):
File "/home/abc/fink/llama.cpp/convert_hf_to_gguf.py", line 5378, in <module>
main()
File "/home/abc/fink/llama.cpp/convert_hf_to_gguf.py", line 5372, in main
model_instance.write()
File "/home/abc/fink/llama.cpp/convert_hf_to_gguf.py", line 440, in write
self.prepare_tensors()
File "/home/abc/fink/llama.cpp/convert_hf_to_gguf.py", line 1737, in prepare_tensors
super().prepare_tensors()
File "/home/abc/fink/llama.cpp/convert_hf_to_gguf.py", line 299, in prepare_tensors
for new_name, data_torch in (self.modify_tensors(data_torch, name, bid)):
File "/home/abc/fink/llama.cpp/convert_hf_to_gguf.py", line 1705, in modify_tensors
return [(self.map_tensor_name(name), data_torch)]
File "/home/abc/fink/llama.cpp/convert_hf_to_gguf.py", line 215, in map_tensor_name
raise ValueError(f"Can not map tensor {name!r}")
ValueError: Can not map tensor 'model.layers.0.mlp.down_proj.weight.absmax'
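The `.absmax` suffix (together with the `torch.uint8` dtype in the `blk.0.ffn_down.weight` log line above) indicates the merged checkpoint still contains bitsandbytes 4-bit quantization state, which the converter cannot map. This usually means the base model was loaded with `load_in_4bit=True` (or a `BitsAndBytesConfig`) at merge time. A minimal sketch of the usual fix, assuming the adapter is saved at a hypothetical `adapter_dir` and the base is the original `meta-llama/Llama-2-7b-hf` checkpoint: reload the base in fp16, merge again, and verify no quantization residue remains before re-running the converter.

```python
# Tensor-name patterns that bitsandbytes 4-bit checkpoints carry; if any
# merged tensor name matches, the base was still quantized when merged.
BNB_RESIDUE = (".absmax", ".quant_map", ".nested_absmax", ".nested_quant_map")

def bnb_residue_names(tensor_names):
    """Return the tensor names that indicate bitsandbytes quantization residue."""
    return [n for n in tensor_names
            if n.endswith(BNB_RESIDUE) or ".quant_state" in n]

if __name__ == "__main__":
    import torch
    from transformers import AutoModelForCausalLM
    from peft import PeftModel

    # Load the base model in full fp16 precision -- no load_in_4bit,
    # no quantization_config -- so merged weights are plain fp16 tensors.
    base = AutoModelForCausalLM.from_pretrained(
        "meta-llama/Llama-2-7b-hf",
        torch_dtype=torch.float16,
    )
    # "adapter_dir" is a placeholder for the saved PEFT adapter directory.
    model = PeftModel.from_pretrained(base, "adapter_dir")
    merged = model.merge_and_unload()

    # Sanity check: the converter will fail on any remaining residue tensors.
    leftover = bnb_residue_names(n for n, _ in merged.named_parameters())
    assert not leftover, f"quantization residue remains: {leftover[:3]}"

    merged.save_pretrained("merged_model_fp16")
```

After saving, rerunning `python llama.cpp/convert_hf_to_gguf.py merged_model_fp16` should no longer hit the `Can not map tensor ... .absmax` error, since every weight is a plain fp16 tensor the converter knows how to map.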