
OSError: Missing model files in the Llama directory #1323

Open
Cafarli opened this issue Feb 10, 2025 · 1 comment

Comments


Cafarli commented Feb 10, 2025

Describe the bug

I downloaded the Llama 2-7B model from the official Llama website and followed the instructions, but I encountered an error when trying to load the model from the directory C:\Users\Admin\.llama\checkpoints\Llama-2-7b. The error message indicates that none of the expected model weight files (pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt.index, or flax_model.msgpack) were found in that directory.

Output

OSError: Error no file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory C:\Users\Admin\.llama\checkpoints\Llama-2-7b.

Runtime Environment

  • Model: llama-2-7b-chat
  • Using via huggingface?: no
  • OS: Windows

PrinceAlmeida commented Mar 4, 2025

I faced the same problem: when I downloaded Llama 2-7B from the official Llama (Meta) website, I got consolidated.00.pth instead of the Hugging Face format files.

If you are looking for a solution, you can convert the weights from .pth to the Hugging Face format (.bin) using the following transformers script; the comments near line 39 of the script explain how to run it:

https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py
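For reference, an invocation of that script looks roughly like this (the paths are placeholders, and the exact flags may differ between transformers versions, so check the docstring in the script for your installed version):

```shell
# Placeholder paths: --input_dir must contain the Meta-format checkpoint
# (consolidated.00.pth, params.json, tokenizer.model); the converted
# Hugging Face files are written to --output_dir.
python convert_llama_weights_to_hf.py \
    --input_dir /path/to/Llama-2-7b \
    --model_size 7B \
    --output_dir /path/to/Llama-2-7b-hf
```

After conversion, point `from_pretrained` at the output directory rather than the original download directory.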
