[Pkg-openmpi-maintainers] Bug#502232: Bug#502232: libopenmpi-dev: No static libraries in the package

Jeff Squyres jsquyres at cisco.com
Wed Oct 15 22:18:25 UTC 2008


On Oct 15, 2008, at 4:40 PM, Manuel Prinz wrote:

>> However, keep in mind that compiling statically with OpenFabrics adds
>> an unfortunate new dimension of complexity in terms of creating
>> fully-static MPI libraries and applications:
>>
>>     http://www.open-mpi.org/faq/?category=openfabrics#ib-static-mpi-apps
>
> True. The libibverbs issue seems to only add complexity for the user,
> as only static versions of Open MPI and libibverbs are needed. We just
> need to provide those; the compiling and linking of the application is
> done by the MPI user. Did I get that right?

Yes and no.  Open MPI also provides "wrapper" compilers: mpicc and  
friends.  These executables supply all the compiler and linker flags  
necessary to compile an MPI application (sometimes there are many  
flags required -- so we have found it best to hide all this stuff  
behind the wrapper compilers: users can just s/gcc/mpicc/g in their  
Makefiles and the Right magic happens).  However, regardless of how  
you build OMPI, there's only *one* set of wrapper compilers created.
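
For example (a quick illustration; "myapp.c" is just a placeholder  
name):

shell$ mpicc myapp.c -o myapp   # wrapper adds all the MPI flags
shell$ mpicc --showme           # print the underlying command line

The --showme option is handy for seeing exactly which flags the  
wrapper would pass to the underlying compiler.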

Specifically: the wrapper compilers will not include the "whole  
archive" options by default, nor will they include the .a's for the  
libibverbs plugins for your specific hardware (libmthca.a was the  
example used in the FAQ; there are several other libibverbs plugins  
for other hardware as well).  However, OMPI's wrapper compilers,  
although they are binary executables, are completely driven by  
configuration information in a text file.  So it's possible to edit  
this text file and put in the "whole archive" and relevant libibverbs  
plugin libraries.  Hence, users who use "mpicc" and friends would  
magically get the Right stuff for static compilation.
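
To make that concrete, here's a sketch (the exact filename and field  
names vary by Open MPI version, so check your install; -lmthca is the  
Mellanox plugin from the FAQ example -- substitute the plugin for your  
hardware):

shell$ $EDITOR $prefix/share/openmpi/mpicc-wrapper-data.txt
# ...then append the static-link goodies to the existing fields, e.g.:
#   linker_flags=<existing flags> -static
#   libs=<existing libs> -Wl,--whole-archive -libverbs -lmthca -Wl,--no-whole-archive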

That being said, this kinda breaks the whole model of:

shell$ gcc myapp.c           # produces dynamic linked executable
shell$ gcc myapp.c -static   # produces static linked executable

Because in this case, it's not as simple as just listing -static --  
you have to add a bunch of other flags (mainly because of  
libibverbs).  But if you've modified the wrapper compiler config file  
to include these extra flags, then *all* apps built with that wrapper  
compiler will be static.  :-(
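
In other words, a fully-static link ends up looking something like  
this (a sketch based on the FAQ above; the exact set of libraries  
depends on your libibverbs install and your hardware):

shell$ mpicc myapp.c -o myapp -static \
    -Wl,--whole-archive -libverbs -lmthca -Wl,--no-whole-archive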

> We may think about some script or the like that does the necessary
> work for the user, though, and include that in the package. This
> would need some testing and time. Gary, would just providing static
> libraries be OK? I guess we can do that without too much effort.

Yes; just configuring OMPI with --enable-static will provide all the  
Right stuff in terms of the libraries.  OMPI defaults to  
"--enable-shared --disable-static", so you probably want to specify  
"--disable-shared --enable-static" to flip the defaults around.  I  
don't recommend --enable-shared *and* --enable-static, mainly because  
of the issues described above (i.e., you'll get both sets of  
libraries, but there are more configuration choices driven by these  
switches than just whether to build the libraries as .so or .a).

For example, note that OMPI interprets --enable-static as:

- build the libraries as static (libmpi.a and friends)
- slurp all plugins into the libraries
- keep dlopen enabled (i.e., to load any user-specific plugins)

SO: if you configure with --enable-static, then you get no standalone  
plugins.  They're all slurped into the libraries.  You'll get both  
libmpi.so and libmpi.a, but both will contain all the plugins.  That's  
an important consequence.
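
Concretely, a static-only build is just the usual Autoconf dance  
($prefix being wherever you want to install):

shell$ ./configure --prefix=$prefix --disable-shared --enable-static
shell$ make all install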

> As I see it, the options are:
>
>     1. Just build static libs. May need two build cycles, but not too
>        hard to do. (Already done that.) This has the nice effect that
>        no extra action on the user side is necessary.

Yay Libtool here; you don't need two build cycles (see above).   
However, as I stated above, I wouldn't recommend building both static  
and dynamic (unless a user specifically requests it).

Indeed, if you build twice, once with --enable-static +  
--disable-shared and then again with --disable-static +  
--enable-shared, you'll get a libmpi.a with all plugins slurped in,  
and a libmpi.so with a $pkglibdir full of all the plugins.  If both  
are installed under the same $prefix and you compile your MPI app  
against libmpi.a, libmpi.a will contain all the plugins *and* it'll  
dlopen all the plugins that it finds in $pkglibdir at run-time.  
Result: badness (because this will cause duplicate symbols).
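
That problematic sequence looks like this (a sketch -- i.e., what  
*not* to do into a single $prefix):

shell$ ./configure --prefix=$prefix --enable-static --disable-shared
shell$ make all install
shell$ make clean
shell$ ./configure --prefix=$prefix --disable-static --enable-shared
shell$ make all install
# libmpi.a now has the plugins slurped in *and* $pkglibdir is full of
# .so plugins that libmpi.a will dlopen at run-time: duplicate symbols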

Alternatively, you could --disable-dlopen in the --enable-static +  
--disable-shared build and prevent the above problem (because  
libmpi.a won't dlopen anything).  But then libmpi.a behaves  
differently than libmpi.so, and that could be problematic / violate  
the Law of Least Astonishment.
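
I.e., something like:

shell$ ./configure --prefix=$prefix --enable-static --disable-shared \
    --disable-dlopen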

Ain't this fun?  :-)

>     2. Define some value that can be passed to the build process, that
>        triggers the build of static libs. We can document it in
>        README.Debian and all users need to do is a dpkg-buildpackage
>        run. GROMACS did this for quite some time, for example, and I
>        always found that convenient enough.
>     3. Just provide dynamic libraries. Easiest to maintain, extra
>        effort for the user. IMHO not acceptable if there is no good
>        technical argument against it, as seems to be the case here.


IMHO, 2 or 3 would be fine.  Dynamic builds are probably the most  
common choice, but for those who want static, you might want to give  
them the option of building static.  But unless someone specifies  
multiple different $prefixes or specifically asks for it, I'd only  
install OMPI with shared *or* static libraries: not both.
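
FWIW, if you go with option 2, the usual Debian pattern would be  
something like this (a hypothetical sketch; the "static" value and its  
handling in debian/rules would be specific to your package):

shell$ DEB_BUILD_OPTIONS=static dpkg-buildpackage -us -uc
# debian/rules checks DEB_BUILD_OPTIONS and, if "static" is present,
# passes --disable-shared --enable-static to configure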

Hope that helps!  It certainly is a tangled situation; we put a lot of  
effort and discussion into deciding how OMPI would build itself  
because of these kinds of issues...

FWIW, I'm now leaving for the day -- it's after 6pm here (US Eastern  
time).  If it would help to jump on the phone tomorrow and explain  
this stuff further, let me know.  I'm out most of the morning, but  
have time available in the afternoon.

-- 
Jeff Squyres
Cisco Systems
