OpenMPI problem on Scientific Linux 5.2

Running even the simplest Fortran code that links against the MPI library provided by OpenMPI fails. The file is named test.F90 and it compiles successfully with mpif90:

mpif90 -o test test.F90

The code itself:

program main
  use mpi
  implicit none
  integer :: ierror
  call mpi_init(ierror)
  call mpi_finalize(ierror)
end program main

Trying to run the binary:

libibverbs: Fatal: couldn't read uverbs ABI version.
[0,0,0]: OpenIB on host localhost was unable to find any HCAs.
Another transport will be used instead, although this may result in
lower performance.
librdmacm: couldn't read ABI version.
librdmacm: assuming: 4
libibverbs: Fatal: couldn't read uverbs ABI version.
CMA: unable to open /dev/infiniband/rdma_cm

The warnings could be tolerated, but the program never completes: the expected output, which is nothing at all, never appears and the process just hangs. Checking with strace, it looked like there was a deadlock somewhere.
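Before rebuilding from source, Open MPI's MCA parameters offer a quicker check: the BTL (byte transfer layer) components can be selected on the mpirun command line, and the `^` prefix excludes a component. A sketch, assuming the hang really is in the openib BTL:

```shell
# Exclude the openib BTL at run time; Open MPI then falls back to
# shared memory / TCP. The ^ prefix means "everything except".
mpirun --mca btl ^openib -np 2 ./test
```

If the program completes with this flag, that confirms the OpenIB transport is the culprit without recompiling anything.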
To make it work I had to download the OpenMPI source code and compile it without OpenIB support:

./configure --prefix=/opt/local/openmpi --without-openib

I also had to point the shared-library search path, LD_LIBRARY_PATH, at the new installation:

export LD_LIBRARY_PATH=/opt/local/openmpi/lib

4 thoughts on “OpenMPI problem on Scientific Linux 5.2”

  1. FWIW, this should be better in the upcoming Open MPI v1.3 (not yet released). I assume you don’t have the OFED/IB kernel modules loaded, which is why you are getting this message. We made the detection of this kind of condition smarter, since the OFED libraries are now shipping in many Linuxes (they weren’t when we initially released OMPI v1.2, so it was safe to assume that if you compiled with OFED support, you wanted to use it).

    If you could try a nightly OMPI v1.3 tarball and confirm that your trivial test program works, that would be great (it works for me in a development environment, but having a successful test “in the wild” would be most helpful).


  2. panoskrt

    My SL5.2 setup was pretty much a full installation, so I’d expect the kernel modules to be loaded.
    I tried a nightly release (openmpi-1.3a1r19713) as you suggested and it worked like a charm 😀

  3. Emmanuel Lambert

    I experienced the same problem on Scientific Linux 5.4 with the pre-packaged OpenMPI 1.3.2 RPM. Building OpenMPI from the 1.3.3 source with the --without-openib flag solved the issue. Thanks a lot for this valuable tip!

  4. Actually, the credit goes to a colleague of mine who spotted the problem with OpenIB. Cheers anyway, and glad to hear that you found it helpful 😉
