Support Intel-MPI for both ifort and gfortran (allow ESMF_COMM=intelmpi) #35
Conversation
Based on "Expanding MPI Options" in GCHP doc: http://wiki.seas.harvard.edu/geos-chem/index.php/Setting_Up_the_GCHP_Environment
Adapted from Linux.intel.default/build_rules.mk, but uses mpifort/mpicxx instead of mpiifort/mpiicpc. In Intel-MPI, mpifort wraps gfortran, while mpiifort wraps ifort. Ref: https://software.intel.com/en-us/mpi-developer-guide-linux-compilers-support. Can also force mpifort to wrap ifort by setting I_MPI_CC=ifort. Ref: https://software.intel.com/en-us/mpi-developer-reference-windows-compilation-environment-variables
Weirdly, code still compiles without this fix.
Also Intel MPI does not have
It would be much more ideal to have a consistent, top-level
Thanks for these updates. In the move from GNU Make to CMake in an upcoming version, all Makefiles will be removed. That release is planned for later this year. The update should make these problems a non-issue, but I will plan on bringing them into 12.7 anyway since I assume they would be helpful in the short term.

Regarding the ESMF build rules files, soon ESMF will be built as an external library, so any changes needed for ESMF compatibility with libraries should be raised with ESMF Support. In anticipation of the change I updated the GEOS-Chem documentation earlier today; see the ESMF library section of the GCHP software requirements page. If you want to start testing ESMF with your libraries on the Cloud, I suggest checking out the tag for the v8.0.0 beta snapshot 40.

As for GCHP's fragile build system, we are all really hoping it becomes much more robust with the move to CMake. The special handling for different MPI libraries and compilers is indeed historical.

Finally, if you are interested in following progress on the upcoming GCHP version with the move to CMake, see our new GCHP repository on GitHub: https://github.com/geoschem/gchp_ctm/. Let me know if you want to be a beta tester.
That's a great move and I can't wait to see it 👍
I took a look at . To avoid changing ESMF, you can only add the non-ESMF part of this PR (). The ESMF part is mostly fine, as it is calling MPI wrappers instead of hacking . As a side note, ESMF 8.0.0 contains a CMake option; see the ESMF 8.0.0 doc.
For deployment it is indeed much better to use this new one with a cleaner build system. For benchmarking I would first try the current version that I am most familiar with... Right now I just want to get some performance numbers quickly; let's talk about deploying the new version when you think it's ready for users.
GCHP 12.5.0 was just approved and 12.6.0 will be benchmarked shortly. I think these updates can go into 12.6 despite my earlier comment about 12.7. Could you update this pull request to be on dev/12.6.0? You will need to make at least one minor change for compatibility since there have been makefile updates since 12.3.2 (master at the time of this pull request). More specifically, MPI-related code is now in Shared/Config/mpi.mk rather than ESMA_base.mk.

I think it is fine for these updates to the ESMF currently in GCHP to go in. The switch to a separate ESMF is still at least a couple of months away, at which point, if Intel-MPI options are still not available in ESMF v8.0.0, we can add instructions on the wiki telling users what manual edits to make. The ESMF v8.0.0 public release is expected this month. It may not be too late to give this update to the ESMF developers to include. Have you done a pull request with them for this update?

I am surprised about a CMake option being added to v8.0.0. I asked if this was going to happen earlier this year and was told there were no plans for updating the build options. This is definitely worth looking into if using GNU Make poses problems for users and the CMake option solves them. If you play around with it, please do report back on what you find.
Wonderful, will test it shortly.
ESMF is on SourceForge, which has more centralized version control. I would love to see it being moved to GitHub...
Right, ESMF is not on GitHub. You may just need to email them. According to their website "The ESMF team happily accepts code contributions from the community." GCHP 12.5.0 may not be master immediately. Make sure you check your tags for both repos to be sure you have the right version.
@JiaweiZhuang What is the status of this PR? ESMF will be part of the GCHP repository for the remaining version 12 releases but will be removed beyond that. We can still bring this in now if you feel it is important for 12.7/12.8.
@lizziel The only really crucial thing is to make sure that IntelMPI + Gfortran work correctly in the CI build matrix (geoschem/GCHP#1). If the GCHP CI is going to start with 12.7/12.8, then this PR should be merged in soon. If the CI is going to start with 13.0.0, then the current fix is not important, but we need to figure out the corresponding fix for 13.0.0 (if it is still needed at all). @LiamBindle are you going to start with 12.x or 13.0?
@JiaweiZhuang my plan is for GCHP CI in 13.0 (i.e. the gchp_ctm repo) because that's where GCHP's CMake support is going to be added |
@LiamBindle OK great -- just make sure that Intel-MPI is included in the build matrix. Use this PR as a reference if you hit issues.
@JiaweiZhuang sounds good, thanks for letting me know. I'll make sure to include Intel-MPI |
This PR is no longer relevant to GCHP now with the retirement of GNU Make and the separation of ESMF in 13.0.0. I am therefore closing this PR without merging.
I have to use Intel MPI on AWS due to aws/aws-parallelcluster#1143 (comment). Just successfully built 12.3.2 with Intel MPI (with both ifort and gfortran). Need to allow ESMF_COMM=intelmpi, following "Expanding MPI Options" in the GCHP doc. Adding this option to the top-level GIGC.mk seems quite natural, because intelmpi is already defined in files like Linux.intel.default/build_rules.mk:
https://github.com/geoschem/gchp/blob/305a133c749c18b1aa96d80bff4431fa6e77e92e/ESMF/build_config/Linux.intel.default/build_rules.mk#L67-L72
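For reference, a minimal sketch of how this option would be selected when configuring a build, assuming the usual ESMF-style environment variables (the exact GCHP setup scripts may differ):

```bash
# Hedged sketch: select the Intel-MPI build rules via standard ESMF variables.
export ESMF_COMM=intelmpi       # use Intel-MPI instead of openmpi/mvapich2
export ESMF_COMPILER=gfortran   # or: intel; this PR aims to support both with Intel-MPI
```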
Notes
Compiler command
Unlike the mpiifort/mpiicpc in Linux.intel.default/build_rules.mk, I use mpifort/mpicxx in Linux.gfortran.default/build_rules.mk. In Intel-MPI, mpifort wraps gfortran, while mpiifort wraps ifort: https://software.intel.com/en-us/mpi-developer-guide-linux-compilers-support. (Can also force mpifort to wrap ifort by setting I_MPI_CC=ifort: https://software.intel.com/en-us/mpi-developer-reference-windows-compilation-environment-variables; but mpiifort always points to ifort regardless.)
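A quick, hedged way to confirm which backend a given wrapper actually invokes is shown below; Intel-MPI wrappers accept the MPICH-style -show flag, which prints the underlying command line without compiling anything.

```bash
# Print the underlying command each wrapper would run:
mpifort -show    # expected to expand to gfortran plus Intel-MPI include/link flags
mpiifort -show   # expected to always expand to ifort
```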
MPI path
Intel-MPI puts libmpi.so inside lib/release instead of lib. With Spack, the proper $MPI_ROOT should be
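As a rough sketch for locating the Intel-MPI libraries under Spack (the package name intel-mpi and the exact directory layout are assumptions and vary by Intel-MPI version):

```bash
# Find the Spack install prefix, then look for libmpi.so under lib/release
MPI_PREFIX=$(spack location -i intel-mpi)
find "$MPI_PREFIX" -name 'libmpi.so*' -path '*lib/release*'
```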
Question
I am curious why it is necessary to define build rules for every MPI instead of just calling the mpicc/mpifort wrappers. Just historical reasons? GCHP's build system is so fragile, largely due to those compiler/MPI-specific configs. Those high-level MPI wrappers were invented precisely to solve this problem...
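To illustrate the point, here is a hedged example of a wrapper-driven build (the source file name is hypothetical); the same command works whether the wrapper is backed by ifort or gfortran, with no MPI-specific flags spelled out:

```bash
# One command, any backend: the wrapper supplies the compiler- and MPI-specific flags.
mpifort -O2 -o hello_mpi hello_mpi.F90   # hello_mpi.F90 is a placeholder source file
mpirun -np 2 ./hello_mpi
```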