[Pkg-openmpi-maintainers] building package with different libs

Adam C Powell IV hazelsct at debian.org
Wed Nov 19 04:03:23 UTC 2008


On Tue, 2008-11-18 at 23:06 +0100, Manuel Prinz wrote:
> [ Sorry for the long email! I wanted to express my view, and as a
> non-native speaker it's not always easy to be precise. Hope you don't
> mind. ]

No problem at all!

> Am Dienstag, den 18.11.2008, 08:05 -0500 schrieb Adam C Powell IV:
> > On Sat, 2008-11-15 at 21:04 -0600, Dirk Eddelbuettel wrote:
> > > On 14 November 2008 at 23:12, Steve M. Robbins wrote:
> > > | While reading this thread, however, I had an idle thought.  Could we
> > > | prepare an "mpi-default-dev" or "sensible-mpi-dev" package for us to
> > > | build-depend on?  This would be something like the gcc-defaults
> > > | package and simply depend on the appropriate -dev packages (OpenMPI on
> > > | some architectures, LAM on the rest).
> > > | 
> > > | The idea is to put the messy details about which architectures support
> > > | OpenMPI and which use LAM in one place.
> > > 
> > > Sounds good to me, and I am cc'ing the pkg-openmpi list. I won't have spare
> > > cycles to work on it, but it strikes me as a fundamentally sound suggestion.
> > 
> > I'm all set to make this happen.
> > 
> > Manuel, you mentioned getting OpenMPI to work on all arches as your top
> > priority; what's your expected timeframe?  I know, "when it's ready" or
> > "real soon now".  But if this will happen in plenty of time for packages
> > to transition before squeeze, then there's no point in doing
> > mpi-defaults.
> 
> It's hard to say at the moment because I do not have all the details
> yet. Personally, I'd like to have support on all arches before the
> release of Squeeze, hopefully around summer next year, though this
> might be a bit optimistic. I'm currently in the process of finding out
> what has been done and what still needs to be done; not just getting
> OpenMPI to build on all arches, but also working out how to handle the
> other open issues with mixing different MPI implementations.
> 
> What we have so far: We have an untested patch to make OpenMPI build on
> MIPS. It did not apply to the current upstream version, so I recently
> tried to update it to that release. I asked for feedback but have not
> received any so far. (I guess I'll try to build OpenMPI on a MIPS
> machine as soon as I have some spare cycles.) We also have a patch that
> makes use of libatomic-ops on currently unsupported architectures. It
> is not well tested and may have some itches we still need to scratch,
> but it may be enough to get OpenMPI to run on all arches. Thanks to
> everyone involved in providing patches and solutions so far! It is very
> much appreciated.
> 
> So, the honest answer is: I do not have a clue. As I said, I'm working
> on it and it is one of the most important things in my Debian work at
> the moment. But we rely heavily on the porters, need testing, and need
> to get all MPI maintainers together to sort some other issues out. This
> takes time. Nevertheless, I'm optimistic that we can sort this out
> before Squeeze, including the transitions.

Okay.  I spent a good while this spring trying to get libatomic-ops to
work, and at this point it doesn't work *anywhere*; I don't know enough
assembly to fix it.  And that's on the best-case arches, i386 and amd64,
which are each missing just one or two primitives.  ARM, Sparc and
others are nowhere near there, and I don't recall seeing anything for
s390.
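
For anyone who hasn't dug into it: the idea of the patch Manuel
mentioned is to map OpenMPI's internal atomics onto libatomic-ops
primitives like the ones below.  This is just my own minimal sketch of
the library's API, not the actual patch; on arches where the underlying
instruction sequence is missing, calls like these simply fail to build,
which is exactly the problem I ran into:

    /* Minimal sketch of the libatomic-ops API -- my own example, not
     * the actual patch.  Build on Debian with libatomic-ops-dev:
     *     gcc cas.c -o cas -latomic_ops
     * On arches missing the primitives, this fails to compile. */
    #include <stdio.h>
    #include <atomic_ops.h>

    int main(void)
    {
        volatile AO_t counter = 0;

        /* atomically increment counter; returns the previous value */
        AO_t old = AO_fetch_and_add1(&counter);

        /* compare-and-swap: store 42 only if counter still equals old + 1 */
        int swapped = AO_compare_and_swap(&counter, old + 1, 42);

        printf("old=%lu swapped=%d counter=%lu\n",
               (unsigned long) old, swapped, (unsigned long) counter);
        return 0;
    }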

Congratulations on the MIPS patch!  I wish you well on the others.  But
I won't be chomping at the bit (sorry, English idiom: like a horse
biting the metal bar in its mouth, waiting to leap out and start the
race)...

> I do not oppose an mpi-default-dev package; I thought of that myself
> as well. Nevertheless, I also think we can sort that out in time and
> live with the situation as is until then. I will not stop anyone from
> implementing it, though. It might assist developers a lot and is surely
> a Good Thing. But it's just a part of the problem. I also think that a
> huge part of the problem is that MPI maintainers have not talked to
> each other so far; at least no such efforts are known to me. OpenMPI
> did not see much love in Debian for quite some time, and we have just
> started to get it back on track. We now have a well-working team
> (thanks, dudes!), and that's why I'm optimistic that we can now take
> the next steps and join the efforts of everyone involved in MPI
> maintenance in Debian.

Okay, I've prepared an ITP and will send it in, and upload, if we decide
to go that route.
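
To make the idea concrete: the whole package would boil down to a
per-architecture dependency choice, something like the sketch below.
The architecture list is just a placeholder until the port situation
settles, and the real package would feed the result into an
${mpi:Depends} substvar at build time rather than echoing it:

    # sketch of the arch -> default-MPI mapping mpi-default-dev would encode
    arch="$(dpkg-architecture -qDEB_HOST_ARCH)"
    case "$arch" in
        alpha|amd64|i386|ia64|powerpc|sparc) mpi_dev=libopenmpi-dev ;;
        *)                                   mpi_dev=lam4-dev ;;
    esac
    echo "mpi:Depends=$mpi_dev"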

> I would suggest that I collect some more information and write it up,
> and then we can discuss things and agree on which road is best to take.
> (I do not know where the appropriate place for that is, though.) I
> can't promise anything but hope to have that finished within the next
> two weeks. Does this sound acceptable?

Sounds good to me.

> > > And while we're at it, it may also make sense to try to come to a consensus
> > > on our MPI 'preferences' within Debian, i.e. which one should be the default
> > > and own the 'highest' alternatives level.
> > 
> > Good point.  I'm happy to change mpich to below the priority of OpenMPI,
> > maybe the same as lam?  And within mpich, ch_p4 is the "default" but
> > should be below ch_p4mpd (if you install mpd then you're probably
> > running the daemons); I'll put ch_shmem between them.
> 
> We should IMHO base that decision on which MPI implementation is the
> "best". We need to weigh supported arches as well as other factors, and
> I do not have the full picture yet. (If anyone else does, please get in
> touch with me!) I'd suggest not changing the priorities for now. Even
> with my OpenMPI maintainer hat on, I'm not yet convinced that OpenMPI
> is the most sensible default.

Okay.  OpenMPI has a lot of advantages, like efficient operation across
multiple levels of multi-process architectures (multithread, multicore,
cluster).  And it has notable disadvantages, like its inability to run
in a chroot, which upstream has ignored (bug 494046).

All things considered, OpenMPI has my vote as the most advanced
implementation right now...
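
And whenever we do settle on a default, the mechanics are cheap: each
implementation's postinst registers its alternative with a priority, so
a reshuffle is a one-number change per package.  Roughly like this --
the link group, paths and numbers are illustrative, not the actual
postinst contents:

    # illustrative priorities: OpenMPI on top, lam next, mpich below
    update-alternatives --install /usr/bin/mpicc mpi /usr/bin/mpicc.openmpi 50
    update-alternatives --install /usr/bin/mpicc mpi /usr/bin/mpicc.lam 40
    update-alternatives --install /usr/bin/mpicc mpi /usr/bin/mpicc.mpich 30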

-Adam
-- 
GPG fingerprint: D54D 1AEE B11C CE9B A02B  C5DD 526F 01E8 564E E4B6

Engineering consulting with open source tools
http://www.opennovation.com/