
.llama\checkpoints\Llama-3.2-3B-Instruct vs. .llama\checkpoints\Llama3.2-3B-Instruct #1327

Open
rholowczak opened this issue Feb 20, 2025 · 1 comment

@rholowczak


Describe the bug

I ran the following command to download Llama-3.2-3B-Instruct

llama model download --source meta --model-id meta-llama/Llama-3.2-3B-Instruct

After downloading, I see the files ended up in
.llama\checkpoints\Llama3.2-3B-Instruct

Notice that after the word "Llama" there is no dash before the 3.2.

Can this be repaired in some way?

Do all models have this inconsistency?

Minimal reproducible example

llama model download --source meta --model-id meta-llama/Llama-3.2-3B-Instruct

dir C:\Users\rholo\.llama\checkpoints\Llama-3.2-3B-Instruct
Volume in drive C is OS
Volume Serial Number is C0B6-68DC
Directory of C:\Users\rholo\.llama\checkpoints

File Not Found


dir C:\Users\rholo\.llama\checkpoints\Llama3.2-3B-Instruct
Volume in drive C is OS
Volume Serial Number is C0B6-68DC

Directory of C:\Users\rholo\.llama\checkpoints\Llama3.2-3B-Instruct

02/20/2025  12:43 PM    <DIR>          .
02/20/2025  12:43 PM    <DIR>          ..
02/20/2025  12:43 PM               209 checklist.chk
02/20/2025  12:44 PM     6,425,585,114 consolidated.00.pth
02/20/2025  12:43 PM               220 params.json
02/20/2025  12:43 PM         2,183,982 tokenizer.model
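Until the naming is fixed upstream, one possible workaround is to rename the dash-less directory to the expected name. The sketch below is a hypothetical helper, not part of the llama CLI; the directory names are taken from the listing above, and the renaming convention (inserting the dash after "Llama") is an assumption based on this one model:

```python
from pathlib import Path


def normalize_checkpoint_dir(checkpoints: Path, expected: str) -> Path:
    """Return the checkpoint directory named `expected`, renaming a
    dash-less variant (e.g. 'Llama3.2-3B-Instruct') to the expected
    name (e.g. 'Llama-3.2-3B-Instruct') if that is what exists."""
    target = checkpoints / expected
    if target.exists():
        return target
    # Hypothetical assumption: the only inconsistency is the missing
    # dash directly after "Llama".
    variant = checkpoints / expected.replace("Llama-", "Llama", 1)
    if variant.exists():
        variant.rename(target)
        return target
    raise FileNotFoundError(
        f"No checkpoint directory for {expected!r} under {checkpoints}"
    )
```

For example, `normalize_checkpoint_dir(Path.home() / ".llama" / "checkpoints", "Llama-3.2-3B-Instruct")` would rename the downloaded folder in place and return the dashed path.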


Runtime Environment

  • Model: Llama-3.2-3B-Instruct
  • Using via huggingface?: No
  • OS: Windows 11
  • GPU VRAM: 2GB
  • Number of GPUs: 1
  • GPU Make: Nvidia


@PrinceAlmeida

Hey Rholo,

Which model did you download? The quantized variants are listed as:

Llama3.2-3B-Instruct:int4-qlora-eo8 │ meta-llama/Llama-3.2-3B-Instruct-QLORA_INT4_EO8 │ 8K

Llama3.2-3B-Instruct:int4-spinquant-eo8 │ meta-llama/Llama-3.2-3B-Instruct-SpinQuant_INT4_EO8 │ 8K

The "File Not Found" error occurs because you are trying to load consolidated.00.pth with transformers. To use transformers, the weights need to be in its own checkpoint format (.bin), so the .pth file must be converted first.

Use the script below to convert from .pth to the Hugging Face format. The steps for using it are documented in the script itself, around line 39:

https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py
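A sketch of the conversion invocation, adapted from the script's documented usage. The paths below are assumptions based on this issue's directory listing, and the exact flags (`--model_size`, `--llama_version`) and their accepted values vary between transformers releases, so check the docstring in your installed version before running:

```shell
# Assumed paths and flag values -- adjust to your environment and to the
# options documented in your installed transformers version.
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
    --input_dir ~/.llama/checkpoints/Llama3.2-3B-Instruct \
    --model_size 3B \
    --llama_version 3.2 \
    --output_dir ~/.llama/hf/Llama-3.2-3B-Instruct
```

The output directory can then be loaded with `AutoModelForCausalLM.from_pretrained(...)`.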

(Screenshot attached to the original comment.)
