After using LoRA to fine-tune the GLM-4 model, the chat template format is wrong #5986
Closed
Labels
solved
This problem has already been solved
Comments
What data is `item['metadata']` in the original template?
Fixed in #6369
Reminder
System Info
llamafactory 0.9.0
Reproduction
glm-4-9b-chat original template:

Template after fine-tuning with LLaMA-Factory:
Expected behavior
After fine-tuning with LLaMA-Factory, the chat template in the saved model files is wrong, which leads to errors during subsequent inference. I would like to submit a contribution to LLaMA-Factory. Which part of the code constructs the chat template, and how can I modify it?
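For reference, a minimal sketch of what a correct GLM-4-style prompt should look like after template rendering. This is not LLaMA-Factory code; the role tags and the `[gMASK]<sop>` prefix are assumed from the glm-4-9b-chat convention, and should be verified against the `chat_template` field in the model's original `tokenizer_config.json` (e.g. by comparing with `tokenizer.apply_chat_template(messages, tokenize=False)` from Transformers).

```python
# Minimal sketch (not LLaMA-Factory code): build a GLM-4-style chat prompt
# by hand, to illustrate the format a correct chat template should produce.
# The role tags and "[gMASK]<sop>" prefix are assumed from the
# glm-4-9b-chat convention; verify against tokenizer_config.json.

def build_glm4_prompt(messages):
    """Render a list of {"role", "content"} dicts into a GLM-4-style prompt."""
    prompt = "[gMASK]<sop>"
    for msg in messages:
        if msg["role"] == "system":
            prompt += "<|system|>\n" + msg["content"]
        elif msg["role"] == "user":
            prompt += "<|user|>\n" + msg["content"]
        elif msg["role"] == "assistant":
            prompt += "<|assistant|>\n" + msg["content"]
    # A trailing assistant tag asks the model to generate the next reply.
    return prompt + "<|assistant|>\n"

if __name__ == "__main__":
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello"},
    ]
    print(build_glm4_prompt(messages))
```

Comparing this expected output against the `chat_template` actually written into the fine-tuned checkpoint's `tokenizer_config.json` should make the discrepancy concrete.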
Others
thanks!