Use RMM's patched CCCL #11351
Conversation
Make sure to search for RMM if it will be used. This should pick up the patched CCCL from RMM. If RMM is not being used and this is a CUDA build, search for CCCL explicitly.
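The lookup order described above can be sketched in CMake roughly as follows. This is a hedged illustration, not the exact upstream diff; `PLUGIN_RMM` and `USE_CUDA` are the usual XGBoost build options, and the `find_package` package names are assumptions based on what RMM and CCCL export:

```cmake
# Sketch of the dependency lookup described in this PR (assumed option names).
if(PLUGIN_RMM)
  # RMM bundles a patched CCCL; finding rmm first lets its CCCL take precedence.
  find_package(rmm REQUIRED)
elseif(USE_CUDA)
  # No RMM in this CUDA build, so look for CCCL explicitly.
  find_package(CCCL CONFIG REQUIRED)
endif()
```

The key design point is ordering: searching for RMM before (or instead of) a standalone CCCL ensures the patched headers shipped with RMM are the ones picked up.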
Thanks Hyunsu! 🙏 Applied a similar change to 2.1.x with PR: #11353
Tested a backport of this change to XGBoost 2.1.x. Tried this in RAPIDS and confirmed RMM is now picked up: rapidsai/xgboost-feedstock#85 (comment). Also confirmed building without RMM (like in conda-forge) works unchanged: conda-forge/xgboost-feedstock#222 (comment). So this appears to be working as intended.
I find this behavior confusing when I work on it. I use everything from conda, including CMake, the compilers, and RMM. I have seen cases where CMake ignores the CCCL from the conda environment and jumps to the system CTK. I have also seen it use the one from RMM (`env/include/rapids/`) without any configuration. I didn't try to debug it at the time, but it does give me the feeling that the configuration is fragile.
Merging; we can continue the discussion if the issue ever comes up again.
@jakirkham Should we backport it to the 3.0 branch? |
Yes, let's do that. Have submitted PR: #11354