[Pkg-xen-devel] (re-titled) partitions and LVs

Ian Campbell ijc at hellion.org.uk
Tue Jan 3 11:46:33 UTC 2012


On Tue, 2012-01-03 at 12:28 +0100, Daniel Pocock wrote:
> >> Can anyone comment on whether both approaches are still generally
> >> considered to be valid, and/or, in which situations?
> > 
> > Both continue to be valid and supported. It's largely a case of personal
> > preference and what fits best with a particular deployment.
> > 
> >> My current understanding of the issue:
> > 
> > That looks about right.
> > 
> > I would just add that the "whole disk" approach is what is tested and
> > works well with the Debian Installer. The "individual partitions"
> > approach is a better fit for the debootstrap style of deployment (at
> > least IMHO) but is not tested with D-I.
> 
> All the installs I've done with D-I have been using a partition table
> and nested VG.
> 
> The same point is valid for any other installer too: e.g. the Windows
> installer creates a partition table (and I'm not aware of any
> winbootstrap that offers the option to do otherwise :)

Yes, that's true: for almost all HVM guests you will have to use the
full-disk option. TBH I'm not even sure the per-partition approach works
for HVM guests, even if you have PV drivers -- I expect that at least
the boot disk must be full-disk...

> > (I've forgotten all of the context of this mail, was it that something
> > was a bit kooky using the partitions approach with D-I?)
> 
> No, I was just hoping to get clarification of the issue, because I've
> seen it described in different ways.  Also, I imagine that more
> experienced people have comprehensive solutions for this (e.g. how they
> expand their LVs when partition tables are present), but those types of
> important details are not covered in any of the basic documents that
> most users are likely to come across online.  I was hoping to tease out
> some of those details.

Hmm, I'm not sure, but I expect they do it just as they would on a
physical system -- e.g. by adding a new disk, putting a new PV on it,
adding that to the VG, resizing the LV and finally resizing the
filesystem; at least that's what I would do...

Can you resize a PV with LVM? That would allow you to skip some of those
steps by just resizing the backend device.
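(For what it's worth, LVM2 does have pvresize, so the backend-resize
shortcut works. A rough sketch of that route -- every device, VG and LV
name below is invented for illustration, and it assumes the guest sees
the dom0 LV as a whole disk used directly as a PV, with no partition
table in between, since pvresize won't grow a partition for you:)

```shell
# dom0: grow the LV that backs the guest's xvdb by 10G
# (vg0/domU-data and all other names here are made up for this sketch)
lvextend -L +10G /dev/vg0/domU-data

# guest: the block device xvdb is itself the PV (no partition table),
# so pvresize picks up the extra space...
pvresize /dev/xvdb

# ...and the nested VG/LV/filesystem grow in the usual way
lvextend -L +10G /dev/domUvg/var
resize2fs /dev/domUvg/var    # ext3 can be grown online
```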

> One thing I would like to do is get a wiki page up and running about
> this issue, because it is fundamental for anyone running VMs, and the
> waters are about to be muddied a lot more with the coming of btrfs
> (where md and subvolume behaviour is implemented by the FS, for
> example), so documenting the status quo will help to provide firm ground
> for discussing the next generation of solutions.
> 
> Do you think this stuff belongs in the Debian Xen wiki:
> http://wiki.debian.org/Xen
> 
> or maybe somewhere else?
> http://wiki.debian.org/Linux%20volume%20management
> http://wiki.debian.org/LVM
> 
> or even outside the debian.org space (given it is not just a problem for
> Debian or Xen or LVM)?

wiki.xen.org could be another option? Depends on which angle you end up
approaching it from. 

> >> Benefits of having a partition table on the LV used by the domU:
> >> - some software seems to expect this
> >> - better for situations where each domU has a distinct sys admin (e.g. a
> >> virtual hosting provider where each domU is owned by a different customer)
> >> - better for SANs and iSCSI with many volumes and servers, as the
> >> partition table serves as a kind of label to help identify the filesystem
> >> - domU /etc/fstab can have meaningful device (LV) names if a nested VG
> >> is used
> > 
> > I'm not sure about this last one. Is it just that with the per-partition
> > scheme dom0 will also see the PV (LVM sense of the term) and hence the
> > VG and so you need a naming scheme to keep everything straight in the
> > admin's head and/or prevent naming conflicts?
> 
> In a domU with a nested VG, you might see this in /etc/fstab:
> 
> /dev/xvda1 /boot ext3 defaults 0 0
> /dev/mapper/domUvg-rootfs / ext3 defaults 0 0
> /dev/mapper/domUvg-var /var ext3 defaults 0 0
> 
> In a domU without nested VG (where each dom0 LV is presented as
> /dev/xvdXX), fstab becomes slightly harder to follow:
> 
> /dev/xvda1 /boot ext3 defaults 0 0
> /dev/xvda2 / ext3 defaults 0 0
> /dev/xvda3 /var ext3 defaults 0 0
> 
> and the user must look at the Xen cfg on dom0 to see which LVs are
> mapped to xvda[123].

I see.
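Right -- and for reference, the mapping the guest admin has to go
looking for is just the disk stanza in the domU cfg on dom0, something
like this (the LV names are invented for illustration):

```python
# Hypothetical domU cfg fragment on dom0: each dom0 LV is presented to
# the guest as one xvda partition, so the guest's fstab entries
# xvda1/2/3 only make sense alongside this file.
disk = [ 'phy:/dev/vg0/domU-boot,xvda1,w',
         'phy:/dev/vg0/domU-root,xvda2,w',
         'phy:/dev/vg0/domU-var,xvda3,w' ]
```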

> 
> >> Benefits of giving the domU a different dom0 LV for each of its
> >> filesystems:
> >> - easier to mount/administer from the dom0 (no need for kpartx and
> >> vgimport, lvchange, etc)
> >> - easier to move individual LVs between different domUs (whereas
> >> resizing a partition requires a domU reboot)
> >> - easier to resize (expand or shrink) individual LV allocations on the
> >> dom0's VG (consequently more space efficient)
> > 
> > Note that you can also combine the two schemes, e.g. present xvda as a
> > full disk with a partition table containing the root filesystem and
> > present xvdb1 as a partition, e.g. containing a data partition which you
> > might want to move between VMs.
> 
> In fact, that's what I do now
> 
> I've converted two of my domUs to use the second approach exclusively
> 
> I also have another domU where the main OS filesystems are on an LV with
> a partition table, but a database partition is on a dom0 LV without
> partition table
> 
> Another thing that comes to mind, almost a third solution: can LVM2
> (or perhaps another volume manager; EVMS might have done this) allow
> each domU to see the entire dom0 VG, but only access the individual
> LVs permitted by some administrative control?  This would be an
> alternative to declaring the LV mappings in the Xen cfgs.

I'm not aware of any way of doing that. LVM isn't generally safe
against multiple parties accessing it, so you would need to be very
careful not to corrupt the metadata, etc.
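On the kpartx/vgimport point from your list above, the dance for
getting at a nested VG from dom0 goes roughly like this (all names are
examples); the deactivation at the end matters precisely because of
that metadata-safety issue -- the nested VG must never be active in
dom0 and in the running guest at the same time:

```shell
# dom0: map the partitions inside the guest's LV
# (creates /dev/mapper/vg0-domU--disk1, ...2, etc.)
kpartx -av /dev/vg0/domU-disk

# pick up and activate the guest's nested VG, then mount an LV
vgscan
vgchange -ay domUvg
mount /dev/domUvg/rootfs /mnt

# ...and tear it all down again before booting the guest
umount /mnt
vgchange -an domUvg
kpartx -dv /dev/vg0/domU-disk
```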

Ian.

-- 
Ian Campbell

If he had only learnt a little less, how infinitely better he might have
taught much more!
