<div dir="ltr">Sorry for reporting an already-filed issue. </div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Feb 19, 2018 at 12:40 PM, Guido Günther <span dir="ltr"><<a href="mailto:agx@sigxcpu.org" target="_blank">agx@sigxcpu.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">control: forcemerge -1 890821<br>
<br>
Hi Jean-Christophe,<br>
you have filed several FTBFS bugs over the last few months, and these were<br>
either related to your build environment or already known. Please check<br>
more carefully before filing reports in the future. If you have trouble<br>
building and are unsure, check<br>
<a href="mailto:pkg-libvirt-discuss@lists.alioth.debian.org">pkg-libvirt-discuss@lists.alioth.debian.org</a>.<br>
Cheers,<br>
-- Guido<br>
<br>
<br>
On Mon, Feb 19, 2018 at 12:22:41PM +0100, jean-christophe manciot wrote:<br>
> Package: virt-manager<br>
> Version: 1:1.4.3-1<br>
> Building the sources in a sid chroot with:<br>
> debuild -i -I --no-sign --build=binary -j1 <br>
> leads to:<br>
> ...<br>
> ======================================================================<br>
> ERROR: testCloneGraphicsPassword (tests.clonetest.TestClone)<br>
> ----------------------------------------------------------------------<br>
> Traceback (most recent call last):<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/clonetest.py", line 193, in testCloneGraphicsPassword<br>
> self._clone_helper(base)<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/clonetest.py", line 72, in _clone_helper<br>
> cloneobj = self._default_clone_values(cloneobj, disks)<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/clonetest.py", line 94, in _default_clone_values<br>
> cloneobj.clone_paths = disks<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/virtinst/cloner.py", line 153, in set_clone_paths<br>
> (path, str(e)))<br>
> ValueError: Could not use path '/clone3' for cloning: Could not define storage pool: operation failed: pool 'tmp' is already defined with uuid 00000000-1111-2222-3333-444444444444<br>
> ======================================================================<br>
> ERROR: testCloneNvramAuto (tests.clonetest.TestClone)<br>
> ----------------------------------------------------------------------<br>
> Traceback (most recent call last):<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/clonetest.py", line 185, in testCloneNvramAuto<br>
> self._clone_helper(base)<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/clonetest.py", line 72, in _clone_helper<br>
> cloneobj = self._default_clone_values(cloneobj, disks)<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/clonetest.py", line 94, in _default_clone_values<br>
> cloneobj.clone_paths = disks<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/virtinst/cloner.py", line 153, in set_clone_paths<br>
> (path, str(e)))<br>
> ValueError: Could not use path '/clone3' for cloning: Could not define storage pool: operation failed: pool 'tmp' is already defined with uuid 00000000-1111-2222-3333-444444444444<br>
> ======================================================================<br>
> ERROR: testCloneNvramNewpool (tests.clonetest.TestClone)<br>
> ----------------------------------------------------------------------<br>
> Traceback (most recent call last):<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/clonetest.py", line 189, in testCloneNvramNewpool<br>
> self._clone_helper(base)<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/clonetest.py", line 72, in _clone_helper<br>
> cloneobj = self._default_clone_values(cloneobj, disks)<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/clonetest.py", line 94, in _default_clone_values<br>
> cloneobj.clone_paths = disks<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/virtinst/cloner.py", line 153, in set_clone_paths<br>
> (path, str(e)))<br>
> ValueError: Could not use path '/clone3' for cloning: Could not define storage pool: operation failed: pool 'tmp' is already defined with uuid 00000000-1111-2222-3333-444444444444<br>
> ======================================================================<br>
> ERROR: testCloneStorageForce (tests.clonetest.TestClone)<br>
> ----------------------------------------------------------------------<br>
> Traceback (most recent call last):<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/clonetest.py", line 164, in testCloneStorageForce<br>
> force_list=["hda", "fdb", "sdb"])<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/clonetest.py", line 76, in _clone_helper<br>
> clone_disks_file=clone_disks_file)<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/clonetest.py", line 101, in _clone_compare<br>
> cloneobj.setup_original()<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/virtinst/cloner.py", line 295, in setup_original<br>
> self._original_disks = self._get_original_disks_info()<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/virtinst/cloner.py", line 583, in _get_original_disks_info<br>
> "information: %s" % str(e)))<br>
> ValueError: Could not determine original disk information: Could not define storage pool: operation failed: pool 'dirpool' is already defined with uuid 00000000-1111-2222-3333-444444444444<br>
> ======================================================================<br>
> ERROR: testEnumerateLogical (tests.storage.TestStorage)<br>
> ----------------------------------------------------------------------<br>
> Traceback (most recent call last):<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/storage.py", line 228, in testEnumerateLogical<br>
> self._enumerateCompare(name, lst)<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/storage.py", line 222, in _enumerateCompare<br>
> poolCompare(pool)<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/storage.py", line 86, in poolCompare<br>
> return pool_inst.install(build=True, meter=None, create=True)<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/virtinst/storage.py", line 538, in install<br>
> raise RuntimeError(_("Could not define storage pool: %s") % str(e))<br>
> RuntimeError: Could not define storage pool: operation failed: pool 'pool-logical-list0' is already defined with uuid 10811110-3105-9997-1084-510810511511<br>
> ======================================================================<br>
> ERROR: testDiskConvert (tests.virtconvtest.TestVirtConv)<br>
> ----------------------------------------------------------------------<br>
> Traceback (most recent call last):<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/virtconvtest.py", line 92, in testDiskConvert<br>
> base_dir + "ovf_input/test1.ovf", "ovf", disk_format="qcow2")<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/virtconvtest.py", line 72, in _compare_single_file<br>
> self._convert_helper(in_path, out_path, in_type, disk_format)<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/virtconvtest.py", line 49, in _convert_helper<br>
> converter.convert_disks(disk_format, dry=True)<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/virtconv/formats.py", line 337, in convert_disks<br>
> disk.path = newpath<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/virtinst/devicedisk.py", line 510, in _set_path<br>
> (vol_object, parent_pool) = diskbackend.manage_path(self.conn, newpath)<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/virtinst/diskbackend.py", line 177, in manage_path<br>
> pool = poolxml.install(build=False, create=True, autostart=True)<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/virtinst/storage.py", line 538, in install<br>
> raise RuntimeError(_("Could not define storage pool: %s") % str(e))<br>
> RuntimeError: Could not define storage pool: operation failed: pool 'ovf_input' is already defined with uuid 00000000-1111-2222-3333-444444444444<br>
> ======================================================================<br>
> ERROR: testOVF2Libvirt (tests.virtconvtest.TestVirtConv)<br>
> ----------------------------------------------------------------------<br>
> Traceback (most recent call last):<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/virtconvtest.py", line 86, in testOVF2Libvirt<br>
> self._compare_files("ovf")<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/virtconvtest.py", line 83, in _compare_files<br>
> self._compare_single_file(in_path, in_type)<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/virtconvtest.py", line 72, in _compare_single_file<br>
> self._convert_helper(in_path, out_path, in_type, disk_format)<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/virtconvtest.py", line 49, in _convert_helper<br>
> converter.convert_disks(disk_format, dry=True)<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/virtconv/formats.py", line 337, in convert_disks<br>
> disk.path = newpath<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/virtinst/devicedisk.py", line 510, in _set_path<br>
> (vol_object, parent_pool) = diskbackend.manage_path(self.conn, newpath)<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/virtinst/diskbackend.py", line 177, in manage_path<br>
> pool = poolxml.install(build=False, create=True, autostart=True)<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/virtinst/storage.py", line 538, in install<br>
> raise RuntimeError(_("Could not define storage pool: %s") % str(e))<br>
> RuntimeError: Could not define storage pool: operation failed: pool 'ovf_directory' is already defined with uuid 00000000-1111-2222-3333-444444444444<br>
> ======================================================================<br>
> ERROR: testVMX2Libvirt (tests.virtconvtest.TestVirtConv)<br>
> ----------------------------------------------------------------------<br>
> Traceback (most recent call last):<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/virtconvtest.py", line 88, in testVMX2Libvirt<br>
> self._compare_files("vmx")<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/virtconvtest.py", line 83, in _compare_files<br>
> self._compare_single_file(in_path, in_type)<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/virtconvtest.py", line 72, in _compare_single_file<br>
> self._convert_helper(in_path, out_path, in_type, disk_format)<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/virtconvtest.py", line 49, in _convert_helper<br>
> converter.convert_disks(disk_format, dry=True)<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/virtconv/formats.py", line 337, in convert_disks<br>
> disk.path = newpath<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/virtinst/devicedisk.py", line 510, in _set_path<br>
> (vol_object, parent_pool) = diskbackend.manage_path(self.conn, newpath)<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/virtinst/diskbackend.py", line 177, in manage_path<br>
> pool = poolxml.install(build=False, create=True, autostart=True)<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/virtinst/storage.py", line 538, in install<br>
> raise RuntimeError(_("Could not define storage pool: %s") % str(e))<br>
> RuntimeError: Could not define storage pool: operation failed: pool 'vmx_input' is already defined with uuid 00000000-1111-2222-3333-444444444444<br>
> ======================================================================<br>
> ERROR: testRBDPool (tests.xmlparse.XMLParseTest)<br>
> ----------------------------------------------------------------------<br>
> Traceback (most recent call last):<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/xmlparse.py", line 1197, in testRBDPool<br>
> utils.test_create(conn, pool.get_xml_config(), "storagePoolDefineXML")<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/utils.py", line 149, in test_create<br>
> raise RuntimeError(str(e) + "\n" + xml)<br>
> RuntimeError: operation failed: pool 'rbd-ceph' is already defined with uuid 4bcd023e-990e-fcf6-d95c-52dd0cd938c8<br>
> <pool type="rbd"><br>
> <name>rbd-pool</name><br>
> <uuid>4bcd023e-990e-fcf6-d95c-52dd0cd938c8</uuid><br>
> <capacity unit="bytes">47256127143936</capacity><br>
> <allocation unit="bytes">5537792235090</allocation><br>
> <available unit="bytes">35978000121856</available><br>
> <source><br>
> <host name="ceph-mon-1.example.com" port="1234"/><br>
> <host name="foo.bar" port="6789"/><br>
> <host name="ceph-mon-3.example.com" port="1000"/><br>
> <name>rbd</name><br>
> <host name="frobber" port="5555"/><br>
> </source><br>
> </pool><br>
> ======================================================================<br>
> FAIL: testCLI0003virt_install_many_devices (tests.clitest.CLITests)<br>
> ----------------------------------------------------------------------<br>
> Traceback (most recent call last):<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/clitest.py", line 1093, in <lambda><br>
> return lambda s: cmdtemplate(s, cmd)<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/clitest.py", line 1092, in cmdtemplate<br>
> _cmdobj.run(self)<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/clitest.py", line 270, in run<br>
> tests.fail(err)<br>
> AssertionError: ./virt-install --name foobar --ram 64 --print-step all<br>
> --connect<br>
> __virtinst_test__test:////home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/testdriver.xml,predictable,qemu,domcaps=/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/capabilities-xml/kvm-x86_64-domcaps.xml,caps=/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/capabilities-xml/kvm-x86_64.xml<br>
> --noautoconsole --os-variant fedora20 --vcpus 4,cores=1,placement=static<br>
> --cpu none --disk<br>
> /dev/default-pool/UPPER,cache=writeback,io=threads,perms=sh,serial=WD-WMAP9A966149,boot_order=2<br>
> --disk<br>
> /dev/default-pool/new1.img,sparse=false,size=.001,perms=ro,error_policy=enospace,discard=unmap,detect_zeroes=yes<br>
> --disk<br>
> device=cdrom,bus=sata,read_bytes_sec=1,read_iops_sec=2,total_bytes_sec=10,total_iops_sec=20,write_bytes_sec=5,write_iops_sec=6<br>
> --disk size=1 --disk /iscsi-pool/diskvol1 --disk<br>
> /dev/default-pool/iso-vol,seclabel.model=dac,seclabel1.model=selinux,seclabel1.relabel=no,seclabel0.label=foo,bar,baz<br>
> --disk /dev/default-pool/iso-vol,format=qcow2 --disk<br>
> source_pool=rbd-ceph,source_volume=some-rbd-vol,size=.1 --disk<br>
> pool=rbd-ceph,size=.1 --disk<br>
> source_protocol=http,source_host_name=example.com,source_host_port=8000,source_name=/path/to/my/file<br>
> --disk<br>
> source_protocol=nbd,source_host_transport=unix,source_host_socket=/tmp/socket,bus=scsi,logical_block_size=512,physical_block_size=512<br>
> --disk gluster://192.168.1.100/test-volume/some/dir/test-gluster.qcow2<br>
> --disk qemu+nbd:///var/foo/bar/socket,bus=usb,removable=on --disk<br>
> path=http://[1:2:3:4:1:2:3:4]:5522/my/path?query=foo --disk<br>
> vol=gluster-pool/test-gluster.raw,startup_policy=optional --disk<br>
> /var,device=floppy,address.type=ccw,address.cssid=0xfe,address.ssid=0,address.devno=01<br>
> --disk<br>
> /dev/default-pool/new2.img,size=1,backing_store=/tmp/foo.img,backing_format=vmdk<br>
> --disk /tmp/brand-new.img,size=1,backing_store=/dev/default-pool/iso-vol<br>
> --network<br>
> user,mac=12:34:56:78:11:22,portgroup=foo,link_state=down,rom_bar=on,rom_file=/tmp/foo<br>
> --network bridge=foobar,model=virtio,driver_name=qemu,driver_queues=3<br>
> --network<br>
> bridge=ovsbr,virtualport_type=openvswitch,virtualport_profileid=demo,virtualport_interfaceid=09b11c53-8b5c-4eeb-8f00-d84eaa0aaa3b,link_state=yes<br>
> --network<br>
> type=direct,source=eth5,source_mode=vepa,target=mytap12,virtualport_type=802.1Qbg,virtualport_managerid=12,virtualport_typeid=1193046,virtualport_typeidversion=1,virtualport_instanceid=09b11c53-8b5c-4eeb-8f00-d84eaa0aaa3b,boot_order=1,trustGuestRxFilters=yes<br>
> --network user,model=virtio,address.type=spapr-vio,address.reg=0x500<br>
> --network<br>
> vhostuser,source_type=unix,source_path=/tmp/vhost1.sock,source_mode=server,model=virtio<br>
> --graphics sdl --graphics spice,keymap=none --graphics<br>
> vnc,port=5950,listen=1.2.3.4,keymap=ja,password=foo --graphics<br>
> spice,port=5950,tlsport=5950,listen=1.2.3.4,keymap=ja --graphics<br>
> spice,image_compression=foo,streaming_mode=bar,clipboard_copypaste=yes,mouse_mode=client,filetransfer_enable=on<br>
> --graphics spice,gl=yes,listen=socket --graphics spice,gl=yes,listen=none<br>
> --graphics spice,gl=yes,listen=none,rendernode=/dev/dri/foo --graphics<br>
> spice,listens0.type=address,listens0.address=1.2.3.4 --graphics<br>
> spice,listens0.type=network,listens0.network=default --graphics<br>
> spice,listens0.type=socket,listens0.socket=/tmp/foobar --controller<br>
> usb,model=ich9-ehci1,address=0:0:4.7,index=0 --controller<br>
> usb,model=ich9-uhci1,address=0:0:4.0,index=0,master=0 --controller<br>
> usb,model=ich9-uhci2,address=0:0:4.1,index=0,master=2 --controller<br>
> usb,model=ich9-uhci3,address=0:0:4.2,index=0,master=4 --input<br>
> type=keyboard,bus=usb --input tablet --serial<br>
> tcp,host=:2222,mode=bind,protocol=telnet,log_file=/tmp/foo.log,log_append=yes<br>
> --serial nmdm,source.master=/dev/foo1,source.slave=/dev/foo2 --parallel<br>
> udp,host=0.0.0.0:1234,bind_host=127.0.0.1:1234 --parallel<br>
> unix,path=/tmp/foo-socket --channel<br>
> pty,target_type=guestfwd,target_address=127.0.0.1:10000 --channel<br>
> pty,target_type=virtio,name=org.linux-kvm.port1 --console<br>
> pty,target_type=virtio --channel spicevmc --hostdev<br>
> net_00_1c_25_10_b1_e4,boot_order=4,rom_bar=off --host-device<br>
> usb_device_781_5151_2004453082054CA1BEEE --host-device 001.003 --hostdev<br>
> 15:0.1 --host-device 2:15:0.2 --hostdev<br>
> 0:15:0.3,address.type=isa,address.iobase=0x500,address.irq=5 --host-device<br>
> 0x0781:0x5151,driver_name=vfio --host-device 04b3:4485 --host-device<br>
> pci_8086_2829_scsi_host_scsi_device_lun0 --hostdev usb_5_20 --hostdev<br>
> usb_5_21<br>
> --filesystem /source,/target --filesystem<br>
> template_name,/,type=template,mode=passthrough --filesystem<br>
> type=file,source=/tmp/somefile.img,target=/mount/point,accessmode=squash<br>
> --soundhw default --sound ac97 --video cirrus --video<br>
> model=qxl,vgamem=1,ram=2,vram=3,heads=4,accel3d=yes,vram64=65 --smartcard<br>
> passthrough,type=spicevmc --smartcard type=host --redirdev<br>
> usb,type=spicevmc --redirdev usb,type=tcp,server=localhost:4000 --redirdev<br>
> usb,type=tcp,server=127.0.0.1:4002,boot_order=3 --rng<br>
> egd,backend_host=127.0.0.1,backend_service=8000,backend_type=tcp --panic<br>
> iobase=507 --qemu-commandline env=DISPLAY=:0.1<br>
> --qemu-commandline="-display gtk,gl=on" --qemu-commandline="-device<br>
> vfio-pci,addr=05.0,sysfsdev=/sys/class/mdev_bus/0000:00:02.0/f321853c-c584-4a6b-b99a-3eee22a3919c"<br>
> --qemu-commandline="-set device.video0.driver=virtio-vga"<br>
> Expected command to pass, but it didn't.<br>
> Error code : -1<br>
> Output was:<br>
> ERROR Error: --disk<br>
> /var,device=floppy,address.type=ccw,address.cssid=0xfe,address.ssid=0,address.devno=01:<br>
> Could not define storage pool: operation failed: pool 'default' is already<br>
> defined with uuid 00000000-1111-2222-3333-444444444444<br>
> ======================================================================<br>
> FAIL: testCLI0210./virt_clone (tests.clitest.CLITests)<br>
> ----------------------------------------------------------------------<br>
> Traceback (most recent call last):<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/clitest.py", line 1093, in <lambda><br>
> return lambda s: cmdtemplate(s, cmd)<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/clitest.py", line 1092, in cmdtemplate<br>
> _cmdobj.run(self)<br>
> File "/home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/clitest.py", line 270, in run<br>
> tests.fail(err)<br>
> AssertionError: ./virt-clone --debug --connect<br>
> __virtinst_test__test:////home/actionmystique/src/Virt-manager/virt-manager-build/virt-manager-1.4.3-1/tests/testdriver.xml,predictable<br>
> -n clonetest --original-xml tests/cli-test-xml/clone-disk.xml --file<br>
> virt-install --file /dev/default-pool/testvol1.img --preserve<br>
> Expected command to pass, but it didn't.<br>
> Command was: ./virt-clone --debug --connect<br>
> __virtinst_test__test:////<wbr>home/actionmystique/src/Virt-<wbr>manager/virt-manager-build/<wbr>virt-manager-1.4.3-1/tests/<wbr>testdriver.xml,predictable<br>
> -n clonetest --original-xml tests/cli-test-xml/clone-disk.<wbr>xml --file<br>
> virt-install --file /dev/default-pool/testvol1.img --preserve<br>
> Error code : -1<br>
> Output was:<br>
> [Mon, 19 Feb 2018 11:07:10 virt-clone 14011] DEBUG (cli:264) Launched with<br>
> command line:<br>
> /home/actionmystique/src/Virt-<wbr>manager/virt-manager-build/<wbr>virt-manager-1.4.3-1/virt-<wbr>clone<br>
> --debug --connect<br>
> __virtinst_test__test:////<wbr>home/actionmystique/src/Virt-<wbr>manager/virt-manager-build/<wbr>virt-manager-1.4.3-1/tests/<wbr>testdriver.xml,predictable<br>
> -n clonetest --original-xml tests/cli-test-xml/clone-disk.<wbr>xml --file<br>
> virt-install --file /dev/default-pool/testvol1.img --preserve<br>
> [Mon, 19 Feb 2018 11:07:10 virt-clone 14011] DEBUG (cloner:278) Validating<br>
> original guest parameters<br>
> [Mon, 19 Feb 2018 11:07:10 virt-clone 14011] DEBUG (cloner:288) Original<br>
> XML:<br>
> <domain type='test' id='1'><br>
> <name>origtest</name><br>
> <uuid>db69fa1f-eef0-e567-3c20-<wbr>3ef16f10376b</uuid><br>
> <memory>8388608</memory><br>
> <currentMemory>2097152</<wbr>currentMemory><br>
> <vcpu>2</vcpu><br>
> <os><br>
> <type arch='i686'>hvm</type><br>
> <boot dev='hd'/><br>
> </os><br>
> <clock offset='utc'/><br>
> <on_poweroff>destroy</on_<wbr>poweroff><br>
> <on_reboot>restart</on_reboot><br>
> <on_crash>destroy</on_crash><br>
> <devices><br>
> <disk type='file' device='disk'><br>
> <target dev='hda' bus='ide'/><br>
> <source file='/tmp/__virtinst_cli_<wbr>exist1.img'/><br>
> </disk><br>
> <disk type='file' device='disk'><br>
> <target dev='hdb' bus='ide'/><br>
> <source file='/tmp/__virtinst_cli_<wbr>exist2.img'/><br>
> </disk><br>
> <disk type='file' device='cdrom'><br>
> <target dev='hdc' bus='ide'/><br>
> <source file='/tmp/__virtinst_cli_<wbr>exist2.img'/><br>
> <readonly/><br>
> </disk><br>
> <disk type='file' device='floppy'><br>
> <target dev='fda' bus='fdc'/><br>
> <readonly/><br>
> </disk><br>
> </devices><br>
> </domain><br>
> [Mon, 19 Feb 2018 11:07:10 virt-clone 14011] DEBUG (diskbackend:171)<br>
> Attempting to build pool=tmp target=/tmp<br>
> [Mon, 19 Feb 2018 11:07:10 virt-clone 14011] DEBUG (storage:531) Creating<br>
> storage pool 'tmp' with xml:<br>
> <pool type="dir"><br>
> <name>tmp</name><br>
> <uuid>00000000-1111-2222-3333-<wbr>444444444444</uuid><br>
> <target><br>
> <path>/tmp</path><br>
> </target><br>
> </pool><br>
> [Mon, 19 Feb 2018 11:07:10 virt-clone 14011] DEBUG (cloner:298) Original<br>
> paths: ['/tmp/__virtinst_cli_exist1.<wbr>img',<br>
> '/tmp/__virtinst_cli_exist2.<wbr>img']<br>
> [Mon, 19 Feb 2018 11:07:10 virt-clone 14011] DEBUG (cloner:300) Original<br>
> sizes: [0.0, 0.0]<br>
> [Mon, 19 Feb 2018 11:07:10 virt-clone 14011] DEBUG (diskbackend:171)<br>
> Attempting to build pool=virt-manager-1.4.3-1<br>
> target=/home/actionmystique/<wbr>src/Virt-manager/virt-manager-<wbr>build/virt-manager-1.4.3-1<br>
> [Mon, 19 Feb 2018 11:07:10 virt-clone 14011] DEBUG (storage:531) Creating<br>
> storage pool 'virt-manager-1.4.3-1' with xml:<br>
> <pool type="dir"><br>
> <name>virt-manager-1.4.3-1</<wbr>name><br>
> <uuid>00000000-1111-2222-3333-<wbr>444444444444</uuid><br>
> <target><br>
> <path>/home/actionmystique/<wbr>src/Virt-manager/virt-manager-<wbr>build/virt-manager-1.4.3-1</<wbr>path><br>
> </target><br>
> </pool><br>
> [Mon, 19 Feb 2018 11:07:10 virt-clone 14011] DEBUG (cloner:151) Error<br>
> setting clone path.<br>
> Traceback (most recent call last):<br>
> File<br>
> "/home/actionmystique/src/<wbr>Virt-manager/virt-manager-<wbr>build/virt-manager-1.4.3-1/<wbr>virtinst/cloner.py",<br>
> line 139, in set_clone_paths<br>
> disk.path = path<br>
> File<br>
> "/home/actionmystique/src/<wbr>Virt-manager/virt-manager-<wbr>build/virt-manager-1.4.3-1/<wbr>virtinst/devicedisk.py",<br>
> line 510, in _set_path<br>
> (vol_object, parent_pool) = diskbackend.manage_path(self.<wbr>conn,<br>
> newpath)<br>
> File<br>
> "/home/actionmystique/src/<wbr>Virt-manager/virt-manager-<wbr>build/virt-manager-1.4.3-1/<wbr>virtinst/diskbackend.py",<br>
> line 177, in manage_path<br>
> pool = poolxml.install(build=False, create=True, autostart=True)<br>
> File<br>
> "/home/actionmystique/src/<wbr>Virt-manager/virt-manager-<wbr>build/virt-manager-1.4.3-1/<wbr>virtinst/storage.py",<br>
> line 538, in install<br>
> raise RuntimeError(_("Could not define storage pool: %s") % str(e))<br>
> RuntimeError: Could not define storage pool: operation failed: pool 'tmp'<br>
> is already defined with uuid 00000000-1111-2222-3333-<wbr>444444444444<br>
> Traceback (most recent call last):<br>
> File<br>
> "/home/actionmystique/src/<wbr>Virt-manager/virt-manager-<wbr>build/virt-manager-1.4.3-1/<wbr>tests/clitest.py",<br>
> line 159, in _launch_command<br>
> ret = virtclone.main(conn=conn)<br>
> File "virt-clone", line 204, in main<br>
> not options.preserve, options.auto_clone)<br>
> File "virt-clone", line 92, in get_clone_diskfile<br>
> design.clone_paths = clonepaths<br>
> File<br>
> "/home/actionmystique/src/<wbr>Virt-manager/virt-manager-<wbr>build/virt-manager-1.4.3-1/<wbr>virtinst/cloner.py",<br>
> line 153, in set_clone_paths<br>
> (path, str(e)))<br>
> ValueError: Could not use path 'virt-install' for cloning: Could not<br>
> define storage pool: operation failed: pool 'tmp' is already defined with<br>
> uuid 00000000-1111-2222-3333-<wbr>444444444444<br>
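Every one of the pool failures above has the same shape: the test suite stamps each pool it creates with the fixed placeholder UUID 00000000-1111-2222-3333-444444444444, and newer libvirt refuses to define a second pool whose UUID is already held by another pool, even under a different name. A minimal sketch of that uniqueness rule (illustrative Python only, not libvirt's actual implementation):

```python
# Sketch of the uniqueness check applied when a storage pool is defined:
# a UUID may belong to at most one pool. Names and logic here are
# illustrative, not libvirt internals.
pools = {}  # pool name -> uuid

def define_pool(name, uuid):
    """Refuse to define a pool whose UUID another pool already owns."""
    for existing_name, existing_uuid in pools.items():
        if existing_uuid == uuid and existing_name != name:
            raise RuntimeError(
                "operation failed: pool '%s' is already defined with uuid %s"
                % (existing_name, uuid))
    pools[name] = uuid

define_pool("tmp", "00000000-1111-2222-3333-444444444444")
try:
    # Same placeholder UUID under a different name is rejected.
    define_pool("virt-manager-1.4.3-1",
                "00000000-1111-2222-3333-444444444444")
except RuntimeError as e:
    print(e)
```

This also explains why the error message names pool 'tmp' while the code is defining 'virt-manager-1.4.3-1': the collision is reported against the pool that already owns the UUID.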
> ==============================<wbr>==============================<wbr>==========<br>
> FAIL: testCLI0227virt_convert_vmx_<wbr>compare (tests.clitest.CLITests)<br>
> ------------------------------<wbr>------------------------------<wbr>----------<br>
> Traceback (most recent call last):<br>
> File<br>
> "/home/actionmystique/src/<wbr>Virt-manager/virt-manager-<wbr>build/virt-manager-1.4.3-1/<wbr>tests/clitest.py",<br>
> line 1093, in <lambda><br>
> return lambda s: cmdtemplate(s, cmd)<br>
> File<br>
> "/home/actionmystique/src/<wbr>Virt-manager/virt-manager-<wbr>build/virt-manager-1.4.3-1/<wbr>tests/clitest.py",<br>
> line 1092, in cmdtemplate<br>
> _cmdobj.run(self)<br>
> File<br>
> "/home/actionmystique/src/<wbr>Virt-manager/virt-manager-<wbr>build/virt-manager-1.4.3-1/<wbr>tests/clitest.py",<br>
> line 270, in run<br>
> tests.fail(err)<br>
> AssertionError: ./virt-convert --connect<br>
> __virtinst_test__test:////<wbr>home/actionmystique/src/Virt-<wbr>manager/virt-manager-build/<wbr>virt-manager-1.4.3-1/tests/<wbr>testdriver.xml,predictable,<wbr>qemu,domcaps=/home/<wbr>actionmystique/src/Virt-<wbr>manager/virt-manager-build/<wbr>virt-manager-1.4.3-1/tests/<wbr>capabilities-xml/kvm-x86_64-<wbr>domcaps.xml,caps=/home/<wbr>actionmystique/src/Virt-<wbr>manager/virt-manager-build/<wbr>virt-manager-1.4.3-1/tests/<wbr>capabilities-xml/kvm-x86_64.<wbr>xml<br>
> --dry<br>
> /home/actionmystique/src/Virt-<wbr>manager/virt-manager-build/<wbr>virt-manager-1.4.3-1/tests/<wbr>virtconv-files/vmx_input/<wbr>test1.vmx<br>
> --disk-format qcow2 --print-xml<br>
> Expected command to pass, but it didn't.<br>
> Command was: ./virt-convert --connect<br>
> __virtinst_test__test:////<wbr>home/actionmystique/src/Virt-<wbr>manager/virt-manager-build/<wbr>virt-manager-1.4.3-1/tests/<wbr>testdriver.xml,predictable,<wbr>qemu,domcaps=/home/<wbr>actionmystique/src/Virt-<wbr>manager/virt-manager-build/<wbr>virt-manager-1.4.3-1/tests/<wbr>capabilities-xml/kvm-x86_64-<wbr>domcaps.xml,caps=/home/<wbr>actionmystique/src/Virt-<wbr>manager/virt-manager-build/<wbr>virt-manager-1.4.3-1/tests/<wbr>capabilities-xml/kvm-x86_64.<wbr>xml<br>
> --dry<br>
> /home/actionmystique/src/Virt-<wbr>manager/virt-manager-build/<wbr>virt-manager-1.4.3-1/tests/<wbr>virtconv-files/vmx_input/<wbr>test1.vmx<br>
> --disk-format qcow2 --print-xml<br>
> Error code : -1<br>
> Output was:<br>
> Traceback (most recent call last):<br>
> File<br>
> "/home/actionmystique/src/<wbr>Virt-manager/virt-manager-<wbr>build/virt-manager-1.4.3-1/<wbr>tests/clitest.py",<br>
> line 161, in _launch_command<br>
> ret = virtconvert.main(conn=conn)<br>
> File "virt-convert", line 111, in main<br>
> destdir=options.destination, dry=options.dry)<br>
> File<br>
> "/home/actionmystique/src/<wbr>Virt-manager/virt-manager-<wbr>build/virt-manager-1.4.3-1/<wbr>virtconv/formats.py",<br>
> line 337, in convert_disks<br>
> disk.path = newpath<br>
> File<br>
> "/home/actionmystique/src/<wbr>Virt-manager/virt-manager-<wbr>build/virt-manager-1.4.3-1/<wbr>virtinst/devicedisk.py",<br>
> line 510, in _set_path<br>
> (vol_object, parent_pool) = diskbackend.manage_path(self.<wbr>conn,<br>
> newpath)<br>
> File<br>
> "/home/actionmystique/src/<wbr>Virt-manager/virt-manager-<wbr>build/virt-manager-1.4.3-1/<wbr>virtinst/diskbackend.py",<br>
> line 177, in manage_path<br>
> pool = poolxml.install(build=False, create=True, autostart=True)<br>
> File<br>
> "/home/actionmystique/src/<wbr>Virt-manager/virt-manager-<wbr>build/virt-manager-1.4.3-1/<wbr>virtinst/storage.py",<br>
> line 538, in install<br>
> raise RuntimeError(_("Could not define storage pool: %s") % str(e))<br>
> RuntimeError: Could not define storage pool: operation failed: pool<br>
> 'vmx_input' is already defined with uuid<br>
> 00000000-1111-2222-3333-<wbr>444444444444<br>
> ==============================<wbr>==============================<wbr>==========<br>
> FAIL: testCLI0228virt_convert_ovf_<wbr>compare (tests.clitest.CLITests)<br>
> ------------------------------<wbr>------------------------------<wbr>----------<br>
> Traceback (most recent call last):<br>
> File<br>
> "/home/actionmystique/src/<wbr>Virt-manager/virt-manager-<wbr>build/virt-manager-1.4.3-1/<wbr>tests/clitest.py",<br>
> line 1093, in <lambda><br>
> return lambda s: cmdtemplate(s, cmd)<br>
> File<br>
> "/home/actionmystique/src/<wbr>Virt-manager/virt-manager-<wbr>build/virt-manager-1.4.3-1/<wbr>tests/clitest.py",<br>
> line 1092, in cmdtemplate<br>
> _cmdobj.run(self)<br>
> File<br>
> "/home/actionmystique/src/<wbr>Virt-manager/virt-manager-<wbr>build/virt-manager-1.4.3-1/<wbr>tests/clitest.py",<br>
> line 270, in run<br>
> tests.fail(err)<br>
> AssertionError: ./virt-convert --connect<br>
> __virtinst_test__test:////<wbr>home/actionmystique/src/Virt-<wbr>manager/virt-manager-build/<wbr>virt-manager-1.4.3-1/tests/<wbr>testdriver.xml,predictable,<wbr>qemu,domcaps=/home/<wbr>actionmystique/src/Virt-<wbr>manager/virt-manager-build/<wbr>virt-manager-1.4.3-1/tests/<wbr>capabilities-xml/kvm-x86_64-<wbr>domcaps.xml,caps=/home/<wbr>actionmystique/src/Virt-<wbr>manager/virt-manager-build/<wbr>virt-manager-1.4.3-1/tests/<wbr>capabilities-xml/kvm-x86_64.<wbr>xml<br>
> --dry<br>
> /home/actionmystique/src/Virt-<wbr>manager/virt-manager-build/<wbr>virt-manager-1.4.3-1/tests/<wbr>virtconv-files/ovf_input/<wbr>test1.ovf<br>
> --disk-format none --destination /tmp --print-xml<br>
> Expected command to pass, but it didn't.<br>
> Command was: ./virt-convert --connect<br>
> __virtinst_test__test:////<wbr>home/actionmystique/src/Virt-<wbr>manager/virt-manager-build/<wbr>virt-manager-1.4.3-1/tests/<wbr>testdriver.xml,predictable,<wbr>qemu,domcaps=/home/<wbr>actionmystique/src/Virt-<wbr>manager/virt-manager-build/<wbr>virt-manager-1.4.3-1/tests/<wbr>capabilities-xml/kvm-x86_64-<wbr>domcaps.xml,caps=/home/<wbr>actionmystique/src/Virt-<wbr>manager/virt-manager-build/<wbr>virt-manager-1.4.3-1/tests/<wbr>capabilities-xml/kvm-x86_64.<wbr>xml<br>
> --dry<br>
> /home/actionmystique/src/Virt-<wbr>manager/virt-manager-build/<wbr>virt-manager-1.4.3-1/tests/<wbr>virtconv-files/ovf_input/<wbr>test1.ovf<br>
> --disk-format none --destination /tmp --print-xml<br>
> Error code : -1<br>
> Output was:<br>
> Traceback (most recent call last):<br>
> File<br>
> "/home/actionmystique/src/<wbr>Virt-manager/virt-manager-<wbr>build/virt-manager-1.4.3-1/<wbr>tests/clitest.py",<br>
> line 161, in _launch_command<br>
> ret = virtconvert.main(conn=conn)<br>
> File "virt-convert", line 111, in main<br>
> destdir=options.destination, dry=options.dry)<br>
> File<br>
> "/home/actionmystique/src/<wbr>Virt-manager/virt-manager-<wbr>build/virt-manager-1.4.3-1/<wbr>virtconv/formats.py",<br>
> line 337, in convert_disks<br>
> disk.path = newpath<br>
> File<br>
> "/home/actionmystique/src/<wbr>Virt-manager/virt-manager-<wbr>build/virt-manager-1.4.3-1/<wbr>virtinst/devicedisk.py",<br>
> line 510, in _set_path<br>
> (vol_object, parent_pool) = diskbackend.manage_path(self.<wbr>conn,<br>
> newpath)<br>
> File<br>
> "/home/actionmystique/src/<wbr>Virt-manager/virt-manager-<wbr>build/virt-manager-1.4.3-1/<wbr>virtinst/diskbackend.py",<br>
> line 177, in manage_path<br>
> pool = poolxml.install(build=False, create=True, autostart=True)<br>
> File<br>
> "/home/actionmystique/src/<wbr>Virt-manager/virt-manager-<wbr>build/virt-manager-1.4.3-1/<wbr>virtinst/storage.py",<br>
> line 538, in install<br>
> raise RuntimeError(_("Could not define storage pool: %s") % str(e))<br>
> RuntimeError: Could not define storage pool: operation failed: pool<br>
> 'ovf_input' is already defined with uuid<br>
> 00000000-1111-2222-3333-<wbr>444444444444<br>
> ==============================<wbr>==============================<wbr>==========<br>
> FAIL: testCheckProps (tests.checkprops.<wbr>CheckPropsTest)<br>
> ------------------------------<wbr>------------------------------<wbr>----------<br>
> Traceback (most recent call last):<br>
> File<br>
> "/home/actionmystique/src/<wbr>Virt-manager/virt-manager-<wbr>build/virt-manager-1.4.3-1/<wbr>tests/checkprops.py",<br>
> line 33, in testCheckProps<br>
> self.fail(msg)<br>
> AssertionError: Traceback (most recent call last):<br>
> File<br>
> "/home/actionmystique/src/<wbr>Virt-manager/virt-manager-<wbr>build/virt-manager-1.4.3-1/<wbr>tests/checkprops.py",<br>
> line 25, in testCheckProps<br>
> self.assertEqual([], fail)<br>
> File "/usr/lib/python2.7/unittest/<wbr>case.py", line 513, in assertEqual<br>
> assertion_func(first, second, msg=msg)<br>
> File "/usr/lib/python2.7/unittest/<wbr>case.py", line 743, in assertListEqual<br>
> self.assertSequenceEqual(<wbr>list1, list2, msg, seq_type=list)<br>
> File "/usr/lib/python2.7/unittest/<wbr>case.py", line 725, in<br>
> assertSequenceEqual<br>
> self.fail(msg)<br>
> File "/usr/lib/python2.7/unittest/<wbr>case.py", line 410, in fail<br>
> raise self.failureException(msg)<br>
> AssertionError: Lists differ: [] != [<XMLProperty ./source/@master...<br>
> Second list contains 13 additional elements.<br>
> First extra element 0:<br>
> <XMLProperty ./source/@master 140505752329448><br>
> - []<br>
> + [<XMLProperty ./source/@master 140505752329448>,<br>
> + <XMLProperty ./source/@slave 140505752329552>,<br>
> + <XMLProperty ./log/@file 140505752372008>,<br>
> + <XMLProperty ./log/@append 140505752372112>,<br>
> + <XMLProperty ./@socket 140505750518912>,<br>
> + <XMLProperty ./parameters/@profileid 140505750662688>,<br>
> + <XMLProperty ./parameters/@interfaceid 140505750662792>,<br>
> + <XMLProperty ./@trustGuestRxFilters 140505750692232>,<br>
> + <XMLProperty ./source/@type 140505750692440>,<br>
> + <XMLProperty ./source/@path 140505750692544>,<br>
> + <XMLProperty ./link/@state 140505750693168>,<br>
> + <XMLProperty ./rom/@bar 140505750693480>,<br>
> + <XMLProperty ./rom/@file 140505750693584>]<br>
> This means that there are XML properties that are<br>
> untested in the test suite. This could be caused<br>
> by a previous test suite failure, or if you added<br>
> a new property and didn't extend the test suite.<br>
> Look into extending clitest.py and/or xmlparse.py.<br>
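The testCheckProps failure enforces a simple invariant: every XMLProperty registered by virtinst must be exercised at least once by the suite. A schematic of that bookkeeping (invented names, not virtinst's real code):

```python
# Illustrative sketch of the invariant behind testCheckProps: every
# registered XMLProperty must be touched somewhere in the test suite.
registered = {"./source/@master", "./link/@state", "./rom/@bar"}
exercised = {"./link/@state"}  # properties the suite actually accessed

# Any leftover entries produce the "Lists differ: [] != [...]" failure
# shown above; the real test asserts the equivalent of
# self.assertEqual([], untested).
untested = sorted(registered - exercised)
```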
> ------------------------------<wbr>------------------------------<wbr>----------<br>
> Ran 437 tests in 34.657s<br>
> FAILED (failures=5, errors=9)<br>
> make[1]: *** [debian/rules:11: override_dh_auto_test] Error 1<br>
> make[1]: Leaving directory<br>
> '/home/actionmystique/src/<wbr>Virt-manager/virt-manager-<wbr>build/virt-manager-1.4.3-1'<br>
> make: *** [debian/rules:4: build] Error 2<br>
> dpkg-buildpackage: error: debian/rules build subprocess returned exit<br>
> status 2<br>
> debuild: fatal error at line 1152:<br>
> dpkg-buildpackage -rfakeroot -us -uc -ui -i -I --build=binary<br>
> --build=binary -j1 -mJean-Christophe Manciot<br>
> <<a href="mailto:manciot.jeanchristophe@gmail.com">manciot.jeanchristophe@<wbr>gmail.com</a>> failed<br>
> --<br>
> Jean-Christophe<br>
><br>
<br>
> ______________________________<wbr>_________________<br>
> Pkg-libvirt-maintainers mailing list<br>
> <a href="mailto:Pkg-libvirt-maintainers@lists.alioth.debian.org">Pkg-libvirt-maintainers@lists.<wbr>alioth.debian.org</a><br>
> <a href="http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-libvirt-maintainers" rel="noreferrer" target="_blank">http://lists.alioth.debian.<wbr>org/cgi-bin/mailman/listinfo/<wbr>pkg-libvirt-maintainers</a><br>
<br>
<br>
</blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature" data-smartmail="gmail_signature">Jean-Christophe</div>
</div>