I installed the Llama 2 7B model from the official Llama (Meta) website and followed the instructions, but I encountered an error when trying to load the model from the directory C:\Users\Admin\.llama\checkpoints\Llama-2-7b. The error message indicates that none of the expected model files (pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt.index, or flax_model.msgpack) were found in that directory.
Output
OSError: Error no file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory C:\Users\Admin\.llama\checkpoints\Llama-2-7b.
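For context, a checkpoint downloaded directly from Meta ships in Meta's native format (consolidated.00.pth plus params.json), while transformers' from_pretrained() only looks for the file names listed in the error above. A minimal sketch of that check (the file list is taken verbatim from the error message; the helper name is hypothetical):

```python
import os
import tempfile

# File names that transformers' from_pretrained() searches for,
# as listed in the OSError above.
EXPECTED_WEIGHT_FILES = [
    "pytorch_model.bin",
    "model.safetensors",
    "tf_model.h5",
    "model.ckpt.index",
    "flax_model.msgpack",
]

def has_hf_weights(directory):
    """Return True if the directory contains any weight file in a
    format that from_pretrained() knows how to load."""
    return any(
        os.path.exists(os.path.join(directory, name))
        for name in EXPECTED_WEIGHT_FILES
    )

# A Meta-format download contains only consolidated.00.pth, so the
# check fails and from_pretrained() raises the OSError shown above.
with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "consolidated.00.pth"), "w").close()
    print(has_hf_weights(d))  # → False
```

This is why the directory must first be converted to the Hugging Face layout before it can be loaded with transformers.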
Runtime Environment
Model: llama-2-7b-chat
Using via huggingface?: no
OS: Windows
I faced the same problem: when I downloaded Llama 2 7B from the official Llama (Meta) website, I got consolidated.00.pth instead of the files transformers expects.
If you are looking for a solution, you can convert the checkpoint from .pth to .bin using the transformers conversion script; at line 39 of that script they mention the steps for loading.
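The converter referred to here is most likely transformers' convert_llama_weights_to_hf.py script, which ships with the transformers package. A hedged example invocation, assuming the paths from this issue and the 7B model size (verify the flags against your installed transformers version):

```shell
# Convert Meta's native checkpoint (consolidated.00.pth, params.json,
# tokenizer.model) into the Hugging Face layout that from_pretrained()
# expects. Input/output paths below are examples based on this issue.
python -m transformers.models.llama.convert_llama_weights_to_hf --input_dir "C:\Users\Admin\.llama\checkpoints\Llama-2-7b" --model_size 7B --output_dir "C:\Users\Admin\.llama\checkpoints\Llama-2-7b-hf"
```

After conversion, point from_pretrained() at the output directory (here, the hypothetical Llama-2-7b-hf) rather than the original download.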