Hi guys! I use py3langid==0.2.2 and I found that in some cases the Chinese language gets a higher probability than it probably should. For example:
```python
from py3langid.langid import LanguageIdentifier, MODEL_FILE

identifier = LanguageIdentifier.from_pickled_model(MODEL_FILE, norm_probs=True)
identifier.rank("Al furjan")
```
outputs:
```
[('zh', 0.24405981600284576), ('fi', 0.16715779900550842), ('mt', 0.1392195224761963), ('et', 0.10675894469022751), ('sl', 0.07787516713142395), ('en', 0.05285739526152611)......]
```
I understand that the text is quite short and it may return languages other than English, but Chinese?
The original model is error-prone on short texts, but as you say, this is clearly a bug.
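If you know in advance which languages can plausibly occur in your data, restricting the candidate set may work around the problem for short strings. This is only a sketch: it assumes py3langid keeps langid.py's `set_languages` method, and the language codes chosen here are purely illustrative.

```python
from py3langid.langid import LanguageIdentifier, MODEL_FILE

identifier = LanguageIdentifier.from_pickled_model(MODEL_FILE, norm_probs=True)

# Constrain scoring to the languages expected in the input data, so that
# implausible candidates such as 'zh' for a Latin-script string are
# excluded before ranking. (Assumes the langid.py set_languages API.)
identifier.set_languages(['en', 'fr', 'de', 'es'])

print(identifier.rank("Al furjan"))
```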