
SYCL. Fix multiclass for sycl iGPUs #11292


Draft: wants to merge 2 commits into base release_3.0.0

Conversation

razdoburdin (Contributor)

PR #11029 generalized the calculation of multiclass objectives for SYCL. However, some SYCL devices (integrated GPUs) do not support fp64 arithmetic. This fix adds fp64/fp32 dispatching to the gradient calculations, restoring basic XGBoost functionality on iGPUs.

@razdoburdin razdoburdin marked this pull request as draft February 27, 2025 11:03
trivialfis (Member)

I will look into refactoring the code after the 3.0 release. Dispatching f64 seems to be quite intrusive for the codebase. We use f64 a lot.

trivialfis (Member) commented Feb 27, 2025

Do SYCL iGPUs have a software-emulated f64 implementation? It seems quite restrictive not to have standard double-precision floating point. You are not going to get very far without f64 for traditional machine learning algorithms. (R uses f64 by default, Python's float is f64, and sklearn and numpy also default to f64.)

napetrov (Contributor)

This is less about iGPUs in general than about a particular generation of Arc systems; the newer Battlemage generation has fp64 support.

trivialfis (Member)

That's great news; we can drop the workarounds for the old generation.


3 participants