* [Qemu-devel] Help: Does Qemu support virtio-pci for net-device and disk device?
@ 2016-08-17 12:08 Kevin Zhao
  2016-08-17 16:13 ` Andrew Jones
  0 siblings, 1 reply; 18+ messages in thread
From: Kevin Zhao @ 2016-08-17 12:08 UTC (permalink / raw)
  To: QEMU Developers, qemu-arm; +Cc: Peter Maydell, Thomas Hanson, Gema Gomez-Solano

Hi all,
     Now I'm investigating net-device hotplug and disk hotplug for
AArch64. For virtio, the default address type is virtio-mmio. Since libvirt
1.3.5, users can explicitly set the address type to pci, and libvirt
will then pass the virtio-pci parameters to QEMU.
     Both my host and guest OS are Debian 8; the QEMU version is 2.6.0 and
the libvirt version is 1.3.5.
     For the net device, I changed the address type to pci, and libvirt
passes the command below:
     -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:0d:25:25,bus=pci.2,addr=0x1

     After booting, the eth0 device disappears (eth0 appears when the
address type is virtio-mmio), but I can find another net device, enp2s1,
which also cannot get an address via DHCP. Running lspci shows:
02:01.0 Ethernet controller: Red Hat, Inc Virtio network device
I'm not sure whether it worked.

     For the disk device, when I change the address type to pci, the whole
QEMU command is: https://paste.fedoraproject.org/409553/, but the VM cannot
boot successfully. Does QEMU not support virtio-pci disk devices on AArch64
the way it does on x86_64?
     Thanks! Since I am not very familiar with QEMU, I'm really looking
forward to your response.

Best Regards,
Kevin Zhao
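For what it's worth, the eth0 -> enp2s1 rename is expected rather than a failure: with the device on a PCI bus, systemd/udev predictable interface naming derives the name from the PCI address that lspci reports (02:01.0 becomes enp2s1). A simplified sketch of that mapping, assuming a single PCI domain and no firmware-provided names:

```shell
# Derive the systemd "predictable" name for a PCI NIC from its
# bus:slot.function address, e.g. "02:01.0" -> "enp2s1".
# Simplified sketch: real udev also handles PCI domains, hotplug
# slot numbers, and firmware-provided (onboard) names.
pci_to_ifname() {
    addr=$1                              # e.g. 02:01.0
    bus=$((16#${addr%%:*}))              # hex bus  -> decimal
    slotfn=${addr#*:}
    slot=$((16#${slotfn%%.*}))           # hex slot -> decimal
    fn=${slotfn#*.}
    if [ "$fn" = 0 ]; then
        echo "enp${bus}s${slot}"
    else
        echo "enp${bus}s${slot}f${fn}"   # non-zero functions get fN
    fi
}

pci_to_ifname 02:01.0   # prints enp2s1, matching the device above
```

So the renaming alone doesn't indicate a problem; the DHCP failure is the part worth debugging.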

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [Qemu-devel] Help: Does Qemu support virtio-pci for net-device and disk device?
  2016-08-17 12:08 [Qemu-devel] Help: Does Qemu support virtio-pci for net-device and disk device? Kevin Zhao
@ 2016-08-17 16:13 ` Andrew Jones
  2016-08-17 16:41   ` Andrea Bolognani
                     ` (2 more replies)
  0 siblings, 3 replies; 18+ messages in thread
From: Andrew Jones @ 2016-08-17 16:13 UTC (permalink / raw)
  To: Kevin Zhao
  Cc: QEMU Developers, qemu-arm, Thomas Hanson, Peter Maydell,
	Gema Gomez-Solano, Marcel Apfelbaum, Andrea Bolognani,
	Laine Stump

On Wed, Aug 17, 2016 at 08:08:11PM +0800, Kevin Zhao wrote:
> Hi all,
>      Now I'm investigating net-device hotplug and disk hotplug for
> AArch64. For virtio, the default address type is virtio-mmio. Since libvirt
> 1.3.5, users can explicitly set the address type to pci, and libvirt
> will then pass the virtio-pci parameters to QEMU.
>      Both my host and guest OS are Debian 8; the QEMU version is 2.6.0 and
> the libvirt version is 1.3.5.
>      For the net device, I changed the address type to pci, and libvirt
> passes the command below:
>      -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:0d:25:25,bus=pci.2,addr=0x1
> 
>      After booting, the eth0 device disappears (eth0 appears when the
> address type is virtio-mmio), but I can find another net device, enp2s1,
> which also cannot get an address via DHCP. Running lspci shows:
> 02:01.0 Ethernet controller: Red Hat, Inc Virtio network device
> I'm not sure whether it worked.
> 
>      For the disk device, when I change the address type to pci, the whole
> QEMU command is: https://paste.fedoraproject.org/409553/, but the VM cannot
> boot successfully. Does QEMU not support virtio-pci disk devices on AArch64
> the way it does on x86_64?
>      Thanks! Since I am not very familiar with QEMU, I'm really looking
> forward to your response.
> 
> Best Regards,
> Kevin Zhao

libvirt 1.3.5 is a bit old. Later versions no longer unconditionally add
the i82801b11 bridge, which was necessary to use PCI devices with the PCIe
host bridge mach-virt has. IMO, libvirt and qemu still have a long way to
go in order to configure a base/standard mach-virt PCIe machine.

1) If we want to support both PCIe devices and PCI, then things are messy.
   Currently we propose dropping PCI support. mach-virt pretty much
   exclusively uses virtio, which can be set to PCIe mode (virtio-1.0)
2) root complex ports, switches (upstream/downstream ports) are currently
   based on Intel parts. Marcel is thinking about creating generic models.
3) libvirt needs to learn how to plug everything together, in proper PCIe
   fashion, leaving holes for hotplug.
4) Probably more... I forget all the different issues we discovered when
   we started playing with this a few months ago.

The good news is that x86 folk want all the same things for the q35 model.
mach-virt enthusiasts like us get to ride along pretty much for free.

So, using virtio-pci with mach-virt and libvirt isn't possible right now,
not without manual changes to the XML. It might be nice to document how to
manually convert a guest, so developers who want to use virtio-pci don't
have to abandon libvirt. I'd have to look into that, or ask one of our
libvirt friends to help. Certainly the instructions would be for latest
libvirt though.

Finally, FWIW, with a guest kernel of 4.6.4-301.fc24.aarch64 the following
QEMU command line works for me (notice the use of PCIe), and my network
interface gets labeled enp0s1.

$QEMU -machine virt-2.6,accel=kvm -cpu host \
 -m 1024 -smp 1 -nographic \
 -bios /usr/share/AAVMF/AAVMF_CODE.fd \
 -device ioh3420,bus=pcie.0,id=pcie.1,port=1,chassis=1 \
 -device ioh3420,bus=pcie.0,id=pcie.2,port=2,chassis=2 \
 -device virtio-scsi-pci,disable-modern=off,disable-legacy=on,bus=pcie.1,addr=00.0,id=scsi0 \
 -drive file=/home/drjones/.local/libvirt/images/fedora.qcow2,format=qcow2,if=none,id=drive-scsi0-0-0-0 \
 -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 \
 -netdev user,id=hostnet0 \
 -device virtio-net-pci,disable-modern=off,disable-legacy=on,bus=pcie.2,addr=00.0,netdev=hostnet0,id=net0

I prefer always using virtio-scsi for the disk, but a similar command
line can be used for a virtio-blk-pci disk.
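As an aside, one way to confirm from inside the guest which mode a virtio device ended up in is its PCI device ID: per the virtio 1.0 specification, transitional (legacy-capable) devices use IDs 0x1000-0x103f, while modern-only devices use 0x1040 plus the virtio device type. A small sketch of that check (the lspci hint in the comment is illustrative):

```shell
# Classify a virtio PCI device ID as transitional (legacy-capable)
# vs virtio-1.0 modern-only. Per the virtio spec, transitional
# devices use IDs 0x1000-0x103f; modern-only devices use
# 0x1040 + <virtio device type>. Inside the guest you would feed
# this the device-ID column of `lspci -n -d 1af4:`.
virtio_mode() {
    id=$((16#$1))
    if [ "$id" -ge $((16#1040)) ]; then
        echo modern
    else
        echo transitional
    fi
}

virtio_mode 1041   # virtio-net with disable-legacy=on -> modern
virtio_mode 1000   # transitional virtio-net -> transitional
```

With disable-legacy=on as in the command line above, the network device should show up as 1af4:1041 rather than the transitional 1af4:1000.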

Thanks,
drew


* Re: [Qemu-devel] Help: Does Qemu support virtio-pci for net-device and disk device?
  2016-08-17 16:13 ` Andrew Jones
@ 2016-08-17 16:41   ` Andrea Bolognani
  2016-08-18  6:38     ` Andrew Jones
  2016-08-17 17:00   ` Laine Stump
  2016-08-18 12:30   ` Kevin Zhao
  2 siblings, 1 reply; 18+ messages in thread
From: Andrea Bolognani @ 2016-08-17 16:41 UTC (permalink / raw)
  To: Andrew Jones, Kevin Zhao
  Cc: QEMU Developers, qemu-arm, Thomas Hanson, Peter Maydell,
	Gema Gomez-Solano, Marcel Apfelbaum, Laine Stump

On Wed, 2016-08-17 at 18:13 +0200, Andrew Jones wrote:
> On Wed, Aug 17, 2016 at 08:08:11PM +0800, Kevin Zhao wrote:
> > 
> > Hi all,
> >      Now I'm investigating net-device hotplug and disk hotplug for
> > AArch64. For virtio, the default address type is virtio-mmio. Since libvirt
> > 1.3.5, users can explicitly set the address type to pci, and libvirt
> > will then pass the virtio-pci parameters to QEMU.
> >      Both my host and guest OS are Debian 8; the QEMU version is 2.6.0 and
> > the libvirt version is 1.3.5.
> >      For the net device, I changed the address type to pci, and libvirt
> > passes the command below:
> >      -device
> > virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:0d:25:25,bus=pci.2,addr=0x1
> > 
> >      After booting, the eth0 device disappears (eth0 appears when the
> > address type is virtio-mmio), but I can find another net device, enp2s1,
> > which also cannot get an address via DHCP. Running lspci shows:
> > 02:01.0 Ethernet controller: Red Hat, Inc Virtio network device
> > I'm not sure whether it worked.
> > 
> >      For the disk device, when I change the address type to pci, the whole
> > QEMU command is: https://paste.fedoraproject.org/409553/, but the VM cannot
> > boot successfully. Does QEMU not support virtio-pci disk devices on AArch64
> > the way it does on x86_64?
> >      Thanks! Since I am not very familiar with QEMU, I'm really looking
> > forward to your response.
> > 
> > Best Regards,
> > Kevin Zhao
> 
> libvirt 1.3.5 is a bit old. Later versions no longer unconditionally add
> the i82801b11 bridge, which was necessary to use PCI devices with the PCIe
> host bridge mach-virt has. IMO, libvirt and qemu still have a long way to
> go in order to configure a base/standard mach-virt PCIe machine.

Debian 8, the guest OS Kevin is trying to boot, is even older,
and in particular it doesn't have any virtio-pci support.

By the way, the same issue was raised on the libvirt list as
well

  https://www.redhat.com/archives/libvir-list/2016-August/msg00854.html

and there's some more information there.

> 1) If we want to support both PCIe devices and PCI, then things are messy.
>    Currently we propose dropping PCI support. mach-virt pretty much
>    exclusively uses virtio, which can be set to PCIe mode (virtio-1.0)
> 2) root complex ports, switches (upstream/downstream ports) are currently
>    based on Intel parts. Marcel is thinking about creating generic models.

Huge +1 from me! Way to go, Marcel! :)

> 3) libvirt needs to learn how to plug everything together, in proper PCIe
>    fashion, leaving holes for hotplug.

Work on this front is ongoing in libvirt as we speak.

> 4) Probably more... I forget all the different issues we discovered when
>    we started playing with this a few months ago.
> 
> The good news is that x86 folk want all the same things for the q35 model.
> mach-virt enthusiasts like us get to ride along pretty much for free.
> 
> So, using virtio-pci with mach-virt and libvirt isn't possible right now,
> not without manual changes to the XML. It might be nice to document how to
> manually convert a guest, so developers who want to use virtio-pci don't
> have to abandon libvirt. I'd have to look into that, or ask one of our
> libvirt friends to help. Certainly the instructions would be for latest
> libvirt though.

Things are very much in flux, though, so I'm not entirely sure
putting out any sort of official document would be a good idea
right now. We'll definitely help, e.g. through the mailing lists
and similar channels, but committing any configuration to a
more static medium seems premature.

> Finally, FWIW, with a guest kernel of 4.6.4-301.fc24.aarch64 the following
> QEMU command line works for me (notice the use of PCIe), and my network
> interface gets labeled enp0s1.
> 
> $QEMU -machine virt-2.6,accel=kvm -cpu host \
>  -m 1024 -smp 1 -nographic \
>  -bios /usr/share/AAVMF/AAVMF_CODE.fd \
>  -device ioh3420,bus=pcie.0,id=pcie.1,port=1,chassis=1 \
>  -device ioh3420,bus=pcie.0,id=pcie.2,port=2,chassis=2 \
>  -device virtio-scsi-pci,disable-modern=off,disable-legacy=on,bus=pcie.1,addr=00.0,id=scsi0 \
>  -drive file=/home/drjones/.local/libvirt/images/fedora.qcow2,format=qcow2,if=none,id=drive-scsi0-0-0-0 \
>  -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 \
>  -netdev user,id=hostnet0 \
>  -device virtio-net-pci,disable-modern=off,disable-legacy=on,bus=pcie.2,addr=00.0,netdev=hostnet0,id=net0
> 
> I prefer always using virtio-scsi for the disk, but a similar command
> line can be used for a virtio-blk-pci disk.

Does the same command line work if you don't specify any of
the disable-* options?

I'm asking because I tried running a Fedora 24 guest through
libvirt, which doesn't support those options yet, and I get

  virtio_blk virtio2: virtio: device uses modern interface but
                              does not have VIRTIO_F_VERSION_1
  virtio_blk: probe of virtio2 failed with error -22

Isn't the default for 2.6 disable-modern=off,
disable-legacy=off? Or was that 2.7? I tried both anyway ;)
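One way to answer that without guessing is to ask the QEMU binary itself: `-device <model>,help` prints the device's properties, and newer QEMU releases include the current default values in that listing. A command-line sketch, assuming qemu-system-aarch64 is on the PATH (output format varies by version):

```shell
# List the properties of virtio-net-pci as this binary knows them;
# recent QEMU versions also print defaults, e.g. for disable-modern
# and disable-legacy. Older versions only list names and types.
qemu-system-aarch64 -device virtio-net-pci,help

# The same introspection works for any other device model:
qemu-system-aarch64 -device virtio-scsi-pci,help
```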

-- 
Andrea Bolognani / Red Hat / Virtualization


* Re: [Qemu-devel] Help: Does Qemu support virtio-pci for net-device and disk device?
  2016-08-17 16:13 ` Andrew Jones
  2016-08-17 16:41   ` Andrea Bolognani
@ 2016-08-17 17:00   ` Laine Stump
  2016-08-18  7:41     ` Andrew Jones
                       ` (2 more replies)
  2016-08-18 12:30   ` Kevin Zhao
  2 siblings, 3 replies; 18+ messages in thread
From: Laine Stump @ 2016-08-17 17:00 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Andrew Jones, Kevin Zhao, qemu-arm, Thomas Hanson, Peter Maydell,
	Gema Gomez-Solano, Marcel Apfelbaum, Andrea Bolognani

On 08/17/2016 12:13 PM, Andrew Jones wrote:
> On Wed, Aug 17, 2016 at 08:08:11PM +0800, Kevin Zhao wrote:
>> Hi all,
>>      Now I'm investigating net-device hotplug and disk hotplug for
>> AArch64. For virtio, the default address type is virtio-mmio. Since libvirt
>> 1.3.5, users can explicitly set the address type to pci, and libvirt
>> will then pass the virtio-pci parameters to QEMU.
>>      Both my host and guest OS are Debian 8; the QEMU version is 2.6.0 and
>> the libvirt version is 1.3.5.
>>      For the net device, I changed the address type to pci, and libvirt
>> passes the command below:
>>      -device
>> virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:0d:25:25,bus=pci.2,addr=0x1
>>
>>      After booting, the eth0 device disappears (eth0 appears when the
>> address type is virtio-mmio), but I can find another net device, enp2s1,
>> which also cannot get an address via DHCP. Running lspci shows:
>> 02:01.0 Ethernet controller: Red Hat, Inc Virtio network device
>> I'm not sure whether it worked.
>>
>>      For the disk device, when I change the address type to pci, the whole
>> QEMU command is: https://paste.fedoraproject.org/409553/, but the VM cannot
>> boot successfully. Does QEMU not support virtio-pci disk devices on AArch64
>> the way it does on x86_64?
>>      Thanks! Since I am not very familiar with QEMU, I'm really looking
>> forward to your response.
>>
>> Best Regards,
>> Kevin Zhao
> libvirt 1.3.5 is a bit old. Later versions no longer unconditionally add
> the i82801b11 bridge, which was necessary to use PCI devices with the PCIe
> host bridge mach-virt has. IMO, libvirt and qemu still have a long way to
> go in order to configure a base/standard mach-virt PCIe machine.

Well, you can do it now, but you have to manually assign the PCI 
addresses of devices (and if you want hotplug you need to live with 
Intel/TI-specific PCIe controllers).


>
> 1) If we want to support both PCIe devices and PCI, then things are messy.
>     Currently we propose dropping PCI support. mach-virt pretty much
>     exclusively uses virtio, which can be set to PCIe mode (virtio-1.0)

I have a libvirt patch just about ACKed for pushing upstream that will 
automatically assign virtio-pci devices to a PCIe slot (if the qemu 
binary supports virtio-1.0):

https://www.redhat.com/archives/libvir-list/2016-August/msg00852.html

Separate patches do the same for the e1000e emulated network device 
(which you probably don't care about) and the nec-usb-xhci (USB3) 
controller (more useful):

https://www.redhat.com/archives/libvir-list/2016-August/msg00732.html

Once these are in place, the only type of device of any consequence that
I can see still having no PCIe alternative is audio. (Even though only
the virgl video device is PCIe, libvirt has always assigned the primary
video to slot 1 on pcie-root anyway; although you shouldn't put a legacy
PCI device on a pcie-root-port or pcie-switch-downstream-port, it is
acceptable to plug it directly into pcie-root, as long as you know you
won't need to hotplug it.)
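For anyone who wants to try the manual route in the meantime, the step looks roughly like this: add empty pcie-root-port controllers to the domain XML so there are PCIe slots for the virtio-1.0 devices to land in. This is a hypothetical fragment; the controller model name assumes a libvirt new enough to know it, and the indexes are illustrative:

```shell
# Hypothetical fragment of a mach-virt domain XML: manually add a
# couple of pcie-root-port controllers so libvirt (with the patches
# mentioned above) has PCIe slots to auto-assign virtio-1.0 devices
# into. Write it to a scratch file for illustration.
cat > pcie-ports.xml <<'EOF'
<controller type='pci' index='1' model='pcie-root-port'/>
<controller type='pci' index='2' model='pcie-root-port'/>
EOF
# These lines would go inside the <devices> element, e.g. via
# `virsh edit <domain>`; libvirt then fills in the PCI addresses.
```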

> 2) root complex ports, switches (upstream/downstream ports) are currently
>     based on Intel parts. Marcel is thinking about creating generic models.

I say this every time it comes up, so just to be consistent: +1 :-)

> 3) libvirt needs to learn how to plug everything together, in proper PCIe
>     fashion, leaving holes for hotplug.

See above about virtio, although that doesn't cover the whole story. The 
other part (which I'm working on right now) is that libvirt needs to 
automatically add pcie-root-port, pcie-switch-upstream-port, and 
pcie-switch-downstream-port devices as necessary. With the patches I 
mentioned above, you still have to manually add enough pcie-*-port 
controllers to the config, and then libvirt will plug the PCIe devices 
into the right place. This is simple enough to do, but it does require 
intervention.

As far as leaving holes for hotplug, there's actually still a bit of an 
open question there - with machinetypes that use only legacy-PCI, *all* 
slots are hotpluggable, and they're added 31 at a time, so there was 
never any question about which slots were hotpluggable, and it would be 
very rare to end up with a configuration that had 0 free slots available 
for hotplug (actually libvirt would always make sure there was at least 
one, but in practice there would be many more open slots). With 
PCIe-capable machinetypes that is changed, since the root complex 
(pcie-root) doesn't support hotplug, and new slots are added 1 at a time 
(pcie-*-port) rather than 31 at a time. This means you have to really go 
out of your way if you want open slots for hotplug (and even if you want 
devices in the machine at boot time to be hot-unpluggable).

I'm still not sure just how far we need to go in this regard.  We've 
already decided that, unless manually set to an address on pcie-root by 
the user/management application, all PCI devices will be auto-assigned 
to a slot that supports hotplug. What I'm not sure about is whether we 
should always auto-add an extra pcie-*-port to be sure a device can be
hotplugged, or if we should admit that 1 available slot isn't good 
enough for all situations, so we should instead just leave it up to the 
user/management to manually add extra ports if they think they'll want 
to hotplug something later.

> 4) Probably more... I forget all the different issues we discovered when
>     we started playing with this a few months ago.
>
> The good news is that x86 folk want all the same things for the q35 model.
> mach-virt enthusiasts like us get to ride along pretty much for free.
>
> So, using virtio-pci with mach-virt and libvirt isn't possible right now,
> not without manual changes to the XML. It might be nice to document how to
> manually convert a guest, so developers who want to use virtio-pci don't
> have to abandon libvirt. I'd have to look into that, or ask one of our
> libvirt friends to help. Certainly the instructions would be for latest
> libvirt though.
>
> Finally, FWIW, with a guest kernel of 4.6.4-301.fc24.aarch64 the following
> QEMU command line works for me (notice the use of PCIe), and my network
> interface gets labeled enp0s1.
>
> $QEMU -machine virt-2.6,accel=kvm -cpu host \
>   -m 1024 -smp 1 -nographic \
>   -bios /usr/share/AAVMF/AAVMF_CODE.fd \
>   -device ioh3420,bus=pcie.0,id=pcie.1,port=1,chassis=1 \
>   -device ioh3420,bus=pcie.0,id=pcie.2,port=2,chassis=2 \
>   -device virtio-scsi-pci,disable-modern=off,disable-legacy=on,bus=pcie.1,addr=00.0,id=scsi0 \
>   -drive file=/home/drjones/.local/libvirt/images/fedora.qcow2,format=qcow2,if=none,id=drive-scsi0-0-0-0 \
>   -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 \
>   -netdev user,id=hostnet0 \
>   -device virtio-net-pci,disable-modern=off,disable-legacy=on,bus=pcie.2,addr=00.0,netdev=hostnet0,id=net0
>
> I prefer always using virtio-scsi for the disk, but a similar command
> line can be used for a virtio-blk-pci disk.
>
> Thanks,
> drew


* Re: [Qemu-devel] Help: Does Qemu support virtio-pci for net-device and disk device?
  2016-08-17 16:41   ` Andrea Bolognani
@ 2016-08-18  6:38     ` Andrew Jones
  2016-08-19 15:43       ` Andrea Bolognani
  0 siblings, 1 reply; 18+ messages in thread
From: Andrew Jones @ 2016-08-18  6:38 UTC (permalink / raw)
  To: Andrea Bolognani
  Cc: Kevin Zhao, Peter Maydell, Marcel Apfelbaum, Gema Gomez-Solano,
	QEMU Developers, Thomas Hanson, qemu-arm, Laine Stump

On Wed, Aug 17, 2016 at 06:41:33PM +0200, Andrea Bolognani wrote:
> On Wed, 2016-08-17 at 18:13 +0200, Andrew Jones wrote:
> > On Wed, Aug 17, 2016 at 08:08:11PM +0800, Kevin Zhao wrote:
> > > 
> > > Hi all,
> > >      Now I'm investigating net-device hotplug and disk hotplug for
> > > AArch64. For virtio, the default address type is virtio-mmio. Since libvirt
> > > 1.3.5, users can explicitly set the address type to pci, and libvirt
> > > will then pass the virtio-pci parameters to QEMU.
> > >      Both my host and guest OS are Debian 8; the QEMU version is 2.6.0 and
> > > the libvirt version is 1.3.5.
> > >      For the net device, I changed the address type to pci, and libvirt
> > > passes the command below:
> > >      -device
> > > virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:0d:25:25,bus=pci.2,addr=0x1
> > > 
> > >      After booting, the eth0 device disappears (eth0 appears when the
> > > address type is virtio-mmio), but I can find another net device, enp2s1,
> > > which also cannot get an address via DHCP. Running lspci shows:
> > > 02:01.0 Ethernet controller: Red Hat, Inc Virtio network device
> > > I'm not sure whether it worked.
> > > 
> > >      For the disk device, when I change the address type to pci, the whole
> > > QEMU command is: https://paste.fedoraproject.org/409553/, but the VM cannot
> > > boot successfully. Does QEMU not support virtio-pci disk devices on AArch64
> > > the way it does on x86_64?
> > >      Thanks! Since I am not very familiar with QEMU, I'm really looking
> > > forward to your response.
> > > 
> > > Best Regards,
> > > Kevin Zhao
> > 
> > libvirt 1.3.5 is a bit old. Later versions no longer unconditionally add
> > the i82801b11 bridge, which was necessary to use PCI devices with the PCIe
> > host bridge mach-virt has. IMO, libvirt and qemu still have a long way to
> > go in order to configure a base/standard mach-virt PCIe machine.
> 
> Debian 8, the guest OS Kevin is trying to boot, is even older,
> and in particular it doesn't have any virtio-pci support.
> 
> By the way, the same issue was raised on the libvirt list as
> well
> 
>   https://www.redhat.com/archives/libvir-list/2016-August/msg00854.html
> 
> and there's some more information there.
> 
> > 1) If we want to support both PCIe devices and PCI, then things are messy.
> >    Currently we propose dropping PCI support. mach-virt pretty much
> >    exclusively uses virtio, which can be set to PCIe mode (virtio-1.0)
> > 2) root complex ports, switches (upstream/downstream ports) are currently
> >    based on Intel parts. Marcel is thinking about creating generic models.
> 
> Huge +1 from me! Way to go, Marcel! :)
> 
> > 3) libvirt needs to learn how to plug everything together, in proper PCIe
> >    fashion, leaving holes for hotplug.
> 
> Work on this front is ongoing in libvirt as we speak.
> 
> > 4) Probably more... I forget all the different issues we discovered when
> >    we started playing with this a few months ago.
> > 
> > The good news is that x86 folk want all the same things for the q35 model.
> > mach-virt enthusiasts like us get to ride along pretty much for free.
> > 
> > So, using virtio-pci with mach-virt and libvirt isn't possible right now,
> > not without manual changes to the XML. It might be nice to document how to
> > manually convert a guest, so developers who want to use virtio-pci don't
> > have to abandon libvirt. I'd have to look into that, or ask one of our
> > libvirt friends to help. Certainly the instructions would be for latest
> > libvirt though.
> 
> Things are very much in flux, though, so I'm not entirely sure
> putting out any sort of official document would be a good idea
> right now. We'll definitely help, e.g. through the mailing lists
> and similar channels, but committing any configuration to a
> more static medium seems premature.

Understood, and based on Laine's response, things are looking just
around the corner now anyway, so forget I asked :-)

> 
> > Finally, FWIW, with a guest kernel of 4.6.4-301.fc24.aarch64 the following
> > QEMU command line works for me (notice the use of PCIe), and my network
> > interface gets labeled enp0s1.
> > 
> > $QEMU -machine virt-2.6,accel=kvm -cpu host \
> >  -m 1024 -smp 1 -nographic \
> >  -bios /usr/share/AAVMF/AAVMF_CODE.fd \
> >  -device ioh3420,bus=pcie.0,id=pcie.1,port=1,chassis=1 \
> >  -device ioh3420,bus=pcie.0,id=pcie.2,port=2,chassis=2 \
> >  -device virtio-scsi-pci,disable-modern=off,disable-legacy=on,bus=pcie.1,addr=00.0,id=scsi0 \
> >  -drive file=/home/drjones/.local/libvirt/images/fedora.qcow2,format=qcow2,if=none,id=drive-scsi0-0-0-0 \
> >  -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 \
> >  -netdev user,id=hostnet0 \
> >  -device virtio-net-pci,disable-modern=off,disable-legacy=on,bus=pcie.2,addr=00.0,netdev=hostnet0,id=net0
> > 
> > I prefer always using virtio-scsi for the disk, but a similar command
> > line can be used for a virtio-blk-pci disk.
> 
> Does the same command line work if you don't specify any of
> the disable-* options?
> 
> I'm asking because I tried running a Fedora 24 guest through
> libvirt, which doesn't support those options yet, and I get
> 
>   virtio_blk virtio2: virtio: device uses modern interface but
>                               does not have VIRTIO_F_VERSION_1
>   virtio_blk: probe of virtio2 failed with error -22

Doesn't work for me either. I can only boot with disable-modern=off,
disable-legacy=on (at least when building my config the way I try to
build it...) I presume that's a guest kernel issue.

> 
> Isn't the default for 2.6 disable-modern=off,
> disable-legacy=off? Or was that 2.7? I tried both anyway ;)

Dunno. With the command line getting longer all the time, I just
have a script that generates one that works for me, and haven't
worried much about the defaults...

Thanks,
drew


* Re: [Qemu-devel] Help: Does Qemu support virtio-pci for net-device and disk device?
  2016-08-17 17:00   ` Laine Stump
@ 2016-08-18  7:41     ` Andrew Jones
  2016-08-18 21:11       ` Laine Stump
  2016-08-18 12:10     ` Marcel Apfelbaum
  2016-08-18 12:43     ` Kevin Zhao
  2 siblings, 1 reply; 18+ messages in thread
From: Andrew Jones @ 2016-08-18  7:41 UTC (permalink / raw)
  To: Laine Stump
  Cc: QEMU Developers, Peter Maydell, Kevin Zhao, Gema Gomez-Solano,
	Marcel Apfelbaum, Andrea Bolognani, Thomas Hanson, qemu-arm

On Wed, Aug 17, 2016 at 01:00:05PM -0400, Laine Stump wrote:
> On 08/17/2016 12:13 PM, Andrew Jones wrote:
> > On Wed, Aug 17, 2016 at 08:08:11PM +0800, Kevin Zhao wrote:
> > > Hi all,
> > >      Now I'm investigating net-device hotplug and disk hotplug for
> > > AArch64. For virtio, the default address type is virtio-mmio. Since libvirt
> > > 1.3.5, users can explicitly set the address type to pci, and libvirt
> > > will then pass the virtio-pci parameters to QEMU.
> > >      Both my host and guest OS are Debian 8; the QEMU version is 2.6.0 and
> > > the libvirt version is 1.3.5.
> > >      For the net device, I changed the address type to pci, and libvirt
> > > passes the command below:
> > >      -device
> > > virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:0d:25:25,bus=pci.2,addr=0x1
> > > 
> > >      After booting, the eth0 device disappears (eth0 appears when the
> > > address type is virtio-mmio), but I can find another net device, enp2s1,
> > > which also cannot get an address via DHCP. Running lspci shows:
> > > 02:01.0 Ethernet controller: Red Hat, Inc Virtio network device
> > > I'm not sure whether it worked.
> > > 
> > >      For the disk device, when I change the address type to pci, the whole
> > > QEMU command is: https://paste.fedoraproject.org/409553/, but the VM cannot
> > > boot successfully. Does QEMU not support virtio-pci disk devices on AArch64
> > > the way it does on x86_64?
> > >      Thanks! Since I am not very familiar with QEMU, I'm really looking
> > > forward to your response.
> > > 
> > > Best Regards,
> > > Kevin Zhao
> > libvirt 1.3.5 is a bit old. Later versions no longer unconditionally add
> > the i82801b11 bridge, which was necessary to use PCI devices with the PCIe
> > host bridge mach-virt has. IMO, libvirt and qemu still have a long way to
> > go in order to configure a base/standard mach-virt PCIe machine.
> 
> Well, you can do it now, but you have to manually assign the PCI addresses
> of devices (and if you want hotplug you need to live with Intel/TI-specific
> PCIe controllers).
> 
> 
> > 
> > 1) If we want to support both PCIe devices and PCI, then things are messy.
> >     Currently we propose dropping PCI support. mach-virt pretty much
> >     exclusively uses virtio, which can be set to PCIe mode (virtio-1.0)
> 
> I have a libvirt patch just about ACKed for pushing upstream that will
> automatically assign virtio-pci devices to a PCIe slot (if the qemu binary
> supports virtio-1.0):
> 
> https://www.redhat.com/archives/libvir-list/2016-August/msg00852.html
> 
> Separate patches do the same for the e1000e emulated network device (which
> you probably don't care about) and the nec-usb-xhci (USB3) controller (more
> useful):
> 
> https://www.redhat.com/archives/libvir-list/2016-August/msg00732.html
> 

Thanks for the update Laine. This sounds great to me. With those patches
we can switch from virtio-mmio to virtio-pci easily, even if we're still
missing hotplug a bit longer. What limit do we have for the number of
devices, when we don't have any switches? I think I experimented once and
found it to be 7.

> Once these are in place, the only type of device of any consequence that
> I can see still having no PCIe alternative is audio. (Even though only
> the virgl video device is PCIe, libvirt has always assigned the primary
> video to slot 1 on pcie-root anyway; although you shouldn't put a legacy
> PCI device on a pcie-root-port or pcie-switch-downstream-port, it is
> acceptable to plug it directly into pcie-root, as long as you know you
> won't need to hotplug it.)
> 
> > 2) root complex ports, switches (upstream/downstream ports) are currently
> >     based on Intel parts. Marcel is thinking about creating generic models.
> 
> I say this every time it comes up, so just to be consistent: +1 :-)
> 
> > 3) libvirt needs to learn how to plug everything together, in proper PCIe
> >     fashion, leaving holes for hotplug.
> 
> See above about virtio, although that doesn't cover the whole story. The
> other part (which I'm working on right now) is that libvirt needs to
> automatically add pcie-root-port, pcie-switch-upstream-port, and
> pcie-switch-downstream-port devices as necessary. With the patches I
> mentioned above, you still have to manually add enough pcie-*-port
> controllers to the config, and then libvirt will plug the PCIe devices into
> the right place. This is simple enough to do, but it does require
> intervention.

OK, so we want this to support hotplug and eventually chain switches,
bumping our device limit up higher and higher. To what? I'm not sure,
I guess we're still limited by address space.

> 
> As far as leaving holes for hotplug, there's actually still a bit of an open
> question there - with machinetypes that use only legacy-PCI, *all* slots are
> hotpluggable, and they're added 31 at a time, so there was never any
> question about which slots were hotpluggable, and it would be very rare to
> end up with a configuration that had 0 free slots available for hotplug
> (actually libvirt would always make sure there was at least one, but in
> practice there would be many more open slots). With PCIe-capable
> machinetypes that is changed, since the root complex (pcie-root) doesn't
> support hotplug, and new slots are added 1 at a time (pcie-*-port) rather
> than 31 at a time. This means you have to really go out of your way if you
> want open slots for hotplug (and even if you want devices in the machine at
> boot time to be hot-unpluggable).
> 
> I'm still not sure just how far we need to go in this regard.  We've already
> decided that, unless manually set to an address on pcie-root by the
> user/management application, all PCI devices will be auto-assigned to a slot
> that supports hotplug. What I'm not sure about is whether we should always
> auto-add an extra pcie-*-port to be sure a device can be hotplugged, or if
> we should admit that 1 available slot isn't good enough for all situations,
> so we should instead just leave it up to the user/management to manually add
> extra ports if they think they'll want to hotplug something later.

Hmm... Maybe the tools can make this easier by offering an option to
provide N extra ports.

Hmm2... I think I agree that we don't need to worry too much about
providing free ports for hotplug (maybe just one for fun). With
virtio-scsi we can plug disks already. If we want to provide multiple
virtio-net devices for the price of one port, we can enable multifunction.
And, IIRC, there's work to get ARI functioning, allowing multifunction to
go nuts (assuming I understand its purpose correctly).

So maybe the default config just needs 3 ports?
 1 virtio-scsi with as many disks as requested
 1 virtio-net with as many functions as nics are requested
 1 extra port
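That 3-port default could be sketched as a qemu command-line fragment. This is only an illustration of the idea, not a tested configuration: it assumes the Intel ioh3420 root port (since the generic models discussed above don't exist yet), and all IDs, chassis numbers, and addresses are made up.

```shell
# Sketch of the proposed default: one port for virtio-scsi, one for a
# multifunction virtio-net, and one left free for hotplug. Illustrative
# only; ioh3420 stands in until generic root ports exist.
qemu-system-aarch64 -machine virt,accel=kvm -cpu host -m 1024 -nographic \
  -device ioh3420,bus=pcie.0,id=rp1,port=1,chassis=1 \
  -device ioh3420,bus=pcie.0,id=rp2,port=2,chassis=2 \
  -device ioh3420,bus=pcie.0,id=rp3,port=3,chassis=3 \
  -device virtio-scsi-pci,disable-legacy=on,bus=rp1,addr=00.0,id=scsi0 \
  -netdev user,id=hostnet0 \
  -netdev user,id=hostnet1 \
  -device virtio-net-pci,disable-legacy=on,bus=rp2,addr=00.0,multifunction=on,netdev=hostnet0 \
  -device virtio-net-pci,disable-legacy=on,bus=rp2,addr=00.1,netdev=hostnet1
# rp3 intentionally left empty as the hotplug slot
```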

Thanks,
drew

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [Qemu-devel] Help: Does Qemu support virtio-pci for net-device and disk device?
  2016-08-17 17:00   ` Laine Stump
  2016-08-18  7:41     ` Andrew Jones
@ 2016-08-18 12:10     ` Marcel Apfelbaum
  2016-08-18 21:20       ` Laine Stump
  2016-08-18 12:43     ` Kevin Zhao
  2 siblings, 1 reply; 18+ messages in thread
From: Marcel Apfelbaum @ 2016-08-18 12:10 UTC (permalink / raw)
  To: Laine Stump, QEMU Developers
  Cc: Andrew Jones, Kevin Zhao, qemu-arm, Thomas Hanson, Peter Maydell,
	Gema Gomez-Solano, Marcel Apfelbaum, Andrea Bolognani

On 08/17/2016 08:00 PM, Laine Stump wrote:
> On 08/17/2016 12:13 PM, Andrew Jones wrote:
>> On Wed, Aug 17, 2016 at 08:08:11PM +0800, Kevin Zhao wrote:
>>> Hi all,
[...]

Hi,

>>
>> 1) If we want to support both PCIe devices and PCI, then things are messy.
>>     Currently we propose dropping PCI support. mach-virt pretty much
>>     exclusively uses virtio, which can be set to PCIe mode (virtio-1.0)
>
> I have a libvirt patch just about ACKed for pushing upstream that will automatically assign virtio-pci devices to a PCIe slot (if the qemu binary supports virtio-1.0):
>
> https://www.redhat.com/archives/libvir-list/2016-August/msg00852.html
>
> Separate patches do the same for the e1000e emulated network device (which you probably don't care about) and the nec-usb-xhci (USB3) controller (more useful):
>
> https://www.redhat.com/archives/libvir-list/2016-August/msg00732.html
>
> Once these are in place, the only type of device of any consequence that I can see still having no PCIe alternative is audio (even though only the virgl video device is PCIe, libvirt has always
> assigned the primary video to slot 1 on pcie-root anyway (although you shouldn't put a legacy PCI device on a pcie-root-port or pcie-switch-downstream-port, it is acceptable to plug it directly into
> pcie-root (as long as you know you won't need to hotplug it).
>

I agree, please don't allow plugging PCI devices into PCIe ports (root ports/downstream ports). Use the pcie.0 bus as you mentioned, or start a PCI "hierarchy" with a DMI-to-PCI bridge and a pci-pci bridge.
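As an illustration of the hierarchy Marcel describes (a sketch with made-up IDs, not a recommended config): legacy PCI devices hang off pcie.0 through a DMI-to-PCI bridge plus a pci-bridge, rather than off a root port or downstream port.

```shell
# Hypothetical fragment: a legacy-PCI "island" below the PCIe root.
# i82801b11-bridge is the DMI-to-PCI bridge QEMU provides; intel-hda is
# just an example of a legacy PCI device (audio) with no PCIe variant.
  -device i82801b11-bridge,bus=pcie.0,id=dmi2pci \
  -device pci-bridge,bus=dmi2pci,chassis_nr=1,id=pci.1 \
  -device intel-hda,bus=pci.1,addr=0x1
```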

>> 2) root complex ports, switches (upstream/downstream ports) are currently
>>     based on Intel parts. Marcel is thinking about creating generic models.
>
> I say this every time it comes up, so just to be consistent: +1 :-)
>

I plan to do that for my "own selfish" reasons, one of them being the ability to specify
the IO/MEM range the guest firmware should assign to a PCIe port. But, of course,
I'll keep them neutral enough; I might need help with testing the mach-virt machines.

>> 3) libvirt needs to learn how to plug everything together, in proper PCIe
>>     fashion, leaving holes for hotplug.
>
> See above about virtio, although that doesn't cover the whole story. The other part (which I'm working on right now) is that libvirt needs to automatically add pcie-root-port,
> pcie-switch-upstream-port, and pcie-switch-downstream-port devices as necessary. With the patches I mentioned above, you still have to manually add enough pcie-*-port controllers to the config, and
> then libvirt will plug the PCIe devices into the right place. This is simple enough to do, but it does require intervention.
>
> As far as leaving holes for hotplug, there's actually still a bit of an open question there - with machinetypes that use only legacy-PCI, *all* slots are hotpluggable, and they're added 31 at a time,
> so there was never any question about which slots were hotpluggable, and it would be very rare to end up with a configuration that had 0 free slots available for hotplug (actually libvirt would always
> make sure there was at least one, but in practice there would be many more open slots). With PCIe-capable machinetypes that is changed, since the root complex (pcie-root) doesn't support hotplug, and
> new slots are added 1 at a time (pcie-*-port) rather than 31 at a time. This means you have to really go out of your way if you want open slots for hotplug (and even if you want devices in the machine
> at boot time to be hot-unpluggable).
>

I agree you need to take into account hotplug for Q35, by leaving some PCIe root ports available, but it shouldn't be too much trouble.

> I'm still not sure just how far we need to go in this regard.  We've already decided that, unless manually set to an address on pcie-root by the user/management application, all PCI devices will be
> auto-assigned to a slot that supports hotplug.

Good idea.

> What I'm not sure about is whether we should always auto-add an extra pcie-*-port to be sure a device can be hotplugged, or if we should admit that 1
> available slot isn't good enough for all situations, so we should instead just leave it up to the user/management to manually add extra ports if they think they'll want to hotplug something later.

Why not? Leaving 1 or 2 PCIe ports should be enough. On each port you can hotplug a switch with several downstream ports. You can continue nesting switches up to a depth of 6-7, I think.
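A sketch of what such nesting looks like on the qemu command line, using the TI-based switch ports mentioned earlier in the thread (IDs and chassis numbers are made up):

```shell
# Hypothetical fragment: a switch behind one root port, fanning a single
# slot out into two hotpluggable downstream ports. Further switches can
# be nested below dp1/dp2 in the same way.
  -device ioh3420,bus=pcie.0,id=rp1,port=1,chassis=1 \
  -device x3130-upstream,bus=rp1,id=up1 \
  -device xio3130-downstream,bus=up1,id=dp1,chassis=10,port=1 \
  -device xio3130-downstream,bus=up1,id=dp2,chassis=11,port=2
```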

Thanks,
Marcel

>
[...]
>>
>> Thanks,
>> drew
>
>


* Re: [Qemu-devel] Help: Does Qemu support virtio-pci for net-device and disk device?
  2016-08-17 16:13 ` Andrew Jones
  2016-08-17 16:41   ` Andrea Bolognani
  2016-08-17 17:00   ` Laine Stump
@ 2016-08-18 12:30   ` Kevin Zhao
  2016-08-18 12:51     ` Kevin Zhao
  2 siblings, 1 reply; 18+ messages in thread
From: Kevin Zhao @ 2016-08-18 12:30 UTC (permalink / raw)
  To: Andrew Jones
  Cc: QEMU Developers, qemu-arm, Thomas Hanson, Peter Maydell,
	Gema Gomez-Solano, Marcel Apfelbaum, Andrea Bolognani,
	Laine Stump

Hi Andrew,
   Thanks~ It is great that QEMU has been working on that :-)

On 18 August 2016 at 00:13, Andrew Jones <drjones@redhat.com> wrote:

> On Wed, Aug 17, 2016 at 08:08:11PM +0800, Kevin Zhao wrote:
> > Hi all,
> >      Now I'm investigating net device hot plug and disk hotplug for
> > AArch64. For virtio , the default address is virtio-mmio. After Libvirt
> > 1.3.5, user can explicitly specify the address-type to pci and so libvirt
> > will pass the virtio-pci parameters to the Qemu.
> >      Both my host and guest OS is Debian8, and Qemu version is 2.6.0.
> > Libvirt version is 1.3.5.
> >      For net-device, I change the address-type to pci, and libvirt pass
> the
> > command below:
> >      -device
> > virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:
> 0d:25:25,bus=pci.2,addr=0x1
> >
> >      After booting, the eth0 device disappear(eth0 occur when the address
> > is virtio-mmio),
> > but I can find another net-device enp2s1, also it can't work for dhcp:
> > Running lspci: 02:01.0 Ethernet controller: Red Hat, Inc Virtio network
> > device
> > I'm not sure whether it worked.
> >
> >      For disk device,* when I change the address-type to pci, the whole
> > qemu command is :*
> > https://paste.fedoraproject.org/409553/,  but the VM can not boot
> > successfully. Does Qemu not support device disk of virtio-pci in AArch64
> > just as it in X86_64?
> >      Thanks~Since I am not very familiar with Qemu, really looking
> forward
> > to your response.
> >
> > Best Regards,
> > Kevin Zhao
>
> libvirt 1.3.5 is a bit old. Later versions no longer unconditionally add
> the i82801b11 bridge, which was necessary to use PCI devices with the PCIe
> host bridge mach-virt has. IMO, libvirt and qemu still have a long way to
> go in order to configure a base/standard mach-virt PCIe machine.
>
>
Yeah, I am changing to libvirt 2.1.0. I find that I should use PCI by
manually adding the slots and buses to it.

1) If we want to support both PCIe devices and PCI, then things are messy.
>    Currently we propose dropping PCI support. mach-virt pretty much
>    exclusively uses virtio, which can be set to PCIe mode (virtio-1.0)
> 2) root complex ports, switches (upstream/downstream ports) are currently
>    based on Intel parts. Marcel is thinking about creating generic models.
> 3) libvirt needs to learn how to plug everything together, in proper PCIe
>    fashion, leaving holes for hotplug.
> 4) Probably more... I forget all the different issues we discovered when
>    we started playing with this a few months ago.
>
> The good news is that x86 folk want all the same things for the q35 model.
> mach-virt enthusiasts like us get to ride along pretty much for free.
>
> So, using virtio-pci with mach-virt and libvirt isn't possible right now,
> not without manual changes to the XML. It might be nice to document how to
> manually convert a guest, so developers who want to use virtio-pci don't
> have to abandon libvirt. I'd have to look into that, or ask one of our
> libvirt friends to help. Certainly the instructions would be for latest
> libvirt though.
>
>
As you said, that means I can use PCIe as the bus for disk and
net-device. I will try using PCIe with the newest version of libvirt.
Do I need to change <address type='pcie'> to enable it on AArch64?
Will the PCIe bus be automatically assigned by libvirt?


> Finally, FWIW, with a guest kernel of 4.6.4-301.fc24.aarch64. The
> following qemu command line works for me.
> (notice the use of PCIe), and my network interface gets labeled enp0s1.
>
> $QEMU -machine virt-2.6,accel=kvm -cpu host \
>  -m 1024 -smp 1 -nographic \
>  -bios /usr/share/AAVMF/AAVMF_CODE.fd \
>  -device ioh3420,bus=pcie.0,id=pcie.1,port=1,chassis=1 \
>  -device ioh3420,bus=pcie.0,id=pcie.2,port=2,chassis=2 \
>  -device virtio-scsi-pci,disable-modern=off,disable-legacy=on,bus=pcie.1,addr=00.0,id=scsi0
> \
>  -drive file=/home/drjones/.local/libvirt/images/fedora.qcow2,
> format=qcow2,if=none,id=drive-scsi0-0-0-0 \
>  -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-
> scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 \
>  -netdev user,id=hostnet0 \
>  -device virtio-net-pci,disable-modern=off,disable-legacy=on,bus=
> pcie.2,addr=00.0,netdev=hostnet0,id=net0
>
> I prefer always using virtio-scsi for the disk, but a similar command
> line can be used for a virtio-blk-pci disk.
>
OK, great! Because in OpenStack Nova, AArch64 needs to realize hotplug
only with the virtio bus, I am investigating virtio-pci. :-)
Thanks again!


> Thanks,
> drew
>


* Re: [Qemu-devel] Help: Does Qemu support virtio-pci for net-device and disk device?
  2016-08-17 17:00   ` Laine Stump
  2016-08-18  7:41     ` Andrew Jones
  2016-08-18 12:10     ` Marcel Apfelbaum
@ 2016-08-18 12:43     ` Kevin Zhao
  2016-08-18 13:51       ` Andrea Bolognani
  2016-08-18 21:26       ` Laine Stump
  2 siblings, 2 replies; 18+ messages in thread
From: Kevin Zhao @ 2016-08-18 12:43 UTC (permalink / raw)
  To: Laine Stump
  Cc: QEMU Developers, Andrew Jones, qemu-arm, Thomas Hanson,
	Peter Maydell, Gema Gomez-Solano, Marcel Apfelbaum,
	Andrea Bolognani

Hi Laine,
    Thanks :-) I also have a few questions below.

On 18 August 2016 at 01:00, Laine Stump <laine@redhat.com> wrote:

> On 08/17/2016 12:13 PM, Andrew Jones wrote:
>
>> On Wed, Aug 17, 2016 at 08:08:11PM +0800, Kevin Zhao wrote:
>>
>>> Hi all,
>>>       Now I'm investigating net device hot plug and disk hotplug for
>>> AArch64. For virtio , the default address is virtio-mmio. After Libvirt
>>> 1.3.5, user can explicitly specify the address-type to pci and so libvirt
>>> will pass the virtio-pci parameters to the Qemu.
>>>       Both my host and guest OS is Debian8, and Qemu version is 2.6.0.
>>> Libvirt version is 1.3.5.
>>>       For net-device, I change the address-type to pci, and libvirt pass
>>> the
>>> command below:
>>>       -device
>>> virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:0d:25:
>>> 25,bus=pci.2,addr=0x1
>>>
>>>       After booting, the eth0 device disappear(eth0 occur when the
>>> address
>>> is virtio-mmio),
>>> but I can find another net-device enp2s1, also it can't work for dhcp:
>>> Running lspci: 02:01.0 Ethernet controller: Red Hat, Inc Virtio network
>>> device
>>> I'm not sure whether it worked.
>>>
>>>       For disk device,* when I change the address-type to pci, the whole
>>> qemu command is :*
>>> https://paste.fedoraproject.org/409553/,  but the VM can not boot
>>> successfully. Does Qemu not support device disk of virtio-pci in AArch64
>>> just as it in X86_64?
>>>       Thanks~Since I am not very familiar with Qemu, really looking
>>> forward
>>> to your response.
>>>
>>> Best Regards,
>>> Kevin Zhao
>>>
>> libvirt 1.3.5 is a bit old. Later versions no longer unconditionally add
>> the i82801b11 bridge, which was necessary to use PCI devices with the PCIe
>> host bridge mach-virt has. IMO, libvirt and qemu still have a long way to
>> go in order to configure a base/standard mach-virt PCIe machine.
>>
>
> Well, you can do it now, but you have to manually assign the PCI addresses
> of devices (and if you want hotplug you need to live with Intel/TI-specific
> PCIe controllers).

OK. It seems that QEMU will drop PCI for mach-virt and turn to PCIe
in the future.
Do I need to do anything more for the Intel/TI-specific PCIe controllers? What do
I need to add in the guest XML, or anything else?

>
>
>> 1) If we want to support both PCIe devices and PCI, then things are messy.
>>     Currently we propose dropping PCI support. mach-virt pretty much
>>     exclusively uses virtio, which can be set to PCIe mode (virtio-1.0)
>>
>
> I have a libvirt patch just about ACKed for pushing upstream that will
> automatically assign virtio-pci devices to a PCIe slot (if the qemu binary
> supports virtio-1.0):
>
> https://www.redhat.com/archives/libvir-list/2016-August/msg00852.html
>

What's the minimum version of QEMU that supports virtio-1.0? Does QEMU 2.6
work?
Also, I saw your patch for automatically assigning virtio-pci to a PCIe
slot; after it is merged I think things will go much more easily.
For now I will manually add the slots and buses for PCIe. Because I am not
familiar with it, if it is convenient, could you give me a working XML
file in which a PCIe disk and a PCIe
net device work for machine virt?

Thanks~

> Separate patches do the same for the e1000e emulated network device (which
> you probably don't care about) and the nec-usb-xhci (USB3) controller (more
> useful):
>
> https://www.redhat.com/archives/libvir-list/2016-August/msg00732.html
>
> Once these are in place, the only type of device of any consequence that I
> can see still having no PCIe alternative is audio (even though only the
> virgl video device is PCIe, libvirt has always assigned the primary video
> to slot 1 on pcie-root anyway (although you shouldn't put a legacy PCI
> device on a pcie-root-port or pcie-switch-downstream-port, it is acceptable
> to plug it directly into pcie-root (as long as you know you won't need to
> hotplug it).
>
> 2) root complex ports, switches (upstream/downstream ports) are currently
>>     based on Intel parts. Marcel is thinking about creating generic
>> models.
>>
>
> I say this every time it comes up, so just to be consistent: +1 :-)
>
> 3) libvirt needs to learn how to plug everything together, in proper PCIe
>>     fashion, leaving holes for hotplug.
>>
>
> See above about virtio, although that doesn't cover the whole story. The
> other part (which I'm working on right now) is that libvirt needs to
> automatically add pcie-root-port, pcie-switch-upstream-port, and
> pcie-switch-downstream-port devices as necessary. With the patches I
> mentioned above, you still have to manually add enough pcie-*-port
> controllers to the config, and then libvirt will plug the PCIe devices into
> the right place. This is simple enough to do, but it does require
> intervention.
>
> As far as leaving holes for hotplug, there's actually still a bit of an
> open question there - with machinetypes that use only legacy-PCI, *all*
> slots are hotpluggable, and they're added 31 at a time, so there was never
> any question about which slots were hotpluggable, and it would be very rare
> to end up with a configuration that had 0 free slots available for hotplug
> (actually libvirt would always make sure there was at least one, but in
> practice there would be many more open slots). With PCIe-capable
> machinetypes that is changed, since the root complex (pcie-root) doesn't
> support hotplug, and new slots are added 1 at a time (pcie-*-port) rather
> than 31 at a time. This means you have to really go out of your way if you
> want open slots for hotplug (and even if you want devices in the machine at
> boot time to be hot-unpluggable).
>
> I'm still not sure just how far we need to go in this regard.  We've
> already decided that, unless manually set to an address on pcie-root by the
> user/management application, all PCI devices will be auto-assigned to a
> slot that supports hotplug. What I'm not sure about is whether we should
> always auto-add an extra pcie-*-port to be sure a device can be hotplugged,
> or if we should admit that 1 available slot isn't good enough for all
> situations, so we should instead just leave it up to the user/management to
> manually add extra ports if they think they'll want to hotplug something
> later.
>
>
> 4) Probably more... I forget all the different issues we discovered when
>>     we started playing with this a few months ago.
>>
>> The good news is that x86 folk want all the same things for the q35 model.
>> mach-virt enthusiasts like us get to ride along pretty much for free.
>>
>> So, using virtio-pci with mach-virt and libvirt isn't possible right now,
>> not without manual changes to the XML. It might be nice to document how to
>> manually convert a guest, so developers who want to use virtio-pci don't
>> have to abandon libvirt. I'd have to look into that, or ask one of our
>> libvirt friends to help. Certainly the instructions would be for latest
>> libvirt though.
>>
>> Finally, FWIW, with a guest kernel of 4.6.4-301.fc24.aarch64. The
>> following qemu command line works for me.
>> (notice the use of PCIe), and my network interface gets labeled enp0s1.
>>
>> $QEMU -machine virt-2.6,accel=kvm -cpu host \
>>   -m 1024 -smp 1 -nographic \
>>   -bios /usr/share/AAVMF/AAVMF_CODE.fd \
>>   -device ioh3420,bus=pcie.0,id=pcie.1,port=1,chassis=1 \
>>   -device ioh3420,bus=pcie.0,id=pcie.2,port=2,chassis=2 \
>>   -device virtio-scsi-pci,disable-modern=off,disable-legacy=on,bus=pcie.1,addr=00.0,id=scsi0
>> \
>>   -drive file=/home/drjones/.local/libvirt/images/fedora.qcow2,format
>> =qcow2,if=none,id=drive-scsi0-0-0-0 \
>>   -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-sc
>> si0-0-0-0,id=scsi0-0-0-0,bootindex=1 \
>>   -netdev user,id=hostnet0 \
>>   -device virtio-net-pci,disable-modern=off,disable-legacy=on,bus=pcie
>> .2,addr=00.0,netdev=hostnet0,id=net0
>>
>> I prefer always using virtio-scsi for the disk, but a similar command
>> line can be used for a virtio-blk-pci disk.
>>
>> Thanks,
>> drew
>>
>
>
>


* Re: [Qemu-devel] Help: Does Qemu support virtio-pci for net-device and disk device?
  2016-08-18 12:30   ` Kevin Zhao
@ 2016-08-18 12:51     ` Kevin Zhao
  0 siblings, 0 replies; 18+ messages in thread
From: Kevin Zhao @ 2016-08-18 12:51 UTC (permalink / raw)
  To: Andrew Jones
  Cc: QEMU Developers, qemu-arm, Thomas Hanson, Peter Maydell,
	Gema Gomez-Solano, Marcel Apfelbaum, Andrea Bolognani,
	Laine Stump

Hi All,
    Thanks for all your kind responses. Really great and helpful :-)

On 18 August 2016 at 20:30, Kevin Zhao <kevin.zhao@linaro.org> wrote:

> Hi Jones:
>    Thanks~It is great that Qemu has been working on that :-)
>
> On 18 August 2016 at 00:13, Andrew Jones <drjones@redhat.com> wrote:
>
>> On Wed, Aug 17, 2016 at 08:08:11PM +0800, Kevin Zhao wrote:
>> > Hi all,
>> >      Now I'm investigating net device hot plug and disk hotplug for
>> > AArch64. For virtio , the default address is virtio-mmio. After Libvirt
>> > 1.3.5, user can explicitly specify the address-type to pci and so
>> libvirt
>> > will pass the virtio-pci parameters to the Qemu.
>> >      Both my host and guest OS is Debian8, and Qemu version is 2.6.0.
>> > Libvirt version is 1.3.5.
>> >      For net-device, I change the address-type to pci, and libvirt pass
>> the
>> > command below:
>> >      -device
>> > virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:0d:25:
>> 25,bus=pci.2,addr=0x1
>> >
>> >      After booting, the eth0 device disappear(eth0 occur when the
>> address
>> > is virtio-mmio),
>> > but I can find another net-device enp2s1, also it can't work for dhcp:
>> > Running lspci: 02:01.0 Ethernet controller: Red Hat, Inc Virtio network
>> > device
>> > I'm not sure whether it worked.
>> >
>> >      For disk device,* when I change the address-type to pci, the whole
>> > qemu command is :*
>> > https://paste.fedoraproject.org/409553/,  but the VM can not boot
>> > successfully. Does Qemu not support device disk of virtio-pci in AArch64
>> > just as it in X86_64?
>> >      Thanks~Since I am not very familiar with Qemu, really looking
>> forward
>> > to your response.
>> >
>> > Best Regards,
>> > Kevin Zhao
>>
>> libvirt 1.3.5 is a bit old. Later versions no longer unconditionally add
>> the i82801b11 bridge, which was necessary to use PCI devices with the PCIe
>> host bridge mach-virt has. IMO, libvirt and qemu still have a long way to
>> go in order to configure a base/standard mach-virt PCIe machine.
>>
>>
> Yeah, I am changing to libvirt 2.1.0, I find that I should use PCI by
> manually add
> the slots and bus to it.
>
> 1) If we want to support both PCIe devices and PCI, then things are messy.
>>    Currently we propose dropping PCI support. mach-virt pretty much
>>    exclusively uses virtio, which can be set to PCIe mode (virtio-1.0)
>> 2) root complex ports, switches (upstream/downstream ports) are currently
>>    based on Intel parts. Marcel is thinking about creating generic models.
>> 3) libvirt needs to learn how to plug everything together, in proper PCIe
>>    fashion, leaving holes for hotplug.
>> 4) Probably more... I forget all the different issues we discovered when
>>    we started playing with this a few months ago.
>>
>> The good news is that x86 folk want all the same things for the q35 model.
>> mach-virt enthusiasts like us get to ride along pretty much for free.
>>
>> So, using virtio-pci with mach-virt and libvirt isn't possible right now,
>> not without manual changes to the XML. It might be nice to document how to
>> manually convert a guest, so developers who want to use virtio-pci don't
>> have to abandon libvirt. I'd have to look into that, or ask one of our
>> libvirt friends to help. Certainly the instructions would be for latest
>> libvirt though.
>>
>>
> As you said,  that means that I can use PCIe as the bus for disk and
> net-device.
> I will try using pcie in libvirt. I will try the newest version of
> libvirt.
> Do I need to change <address type = 'pcie'> to enable it in AArch64 ?
> The pcie bus will  be automatically assigned in libvirt ?
>

Just to note that I made a mistake here: as Laine said, for now just
manually adding PCIe slots can work. He is going to push a patch to
libvirt for automatic assignment.

>
>
>> Finally, FWIW, with a guest kernel of 4.6.4-301.fc24.aarch64. The
>> following qemu command line works for me.
>> (notice the use of PCIe), and my network interface gets labeled enp0s1.
>>
>> $QEMU -machine virt-2.6,accel=kvm -cpu host \
>>  -m 1024 -smp 1 -nographic \
>>  -bios /usr/share/AAVMF/AAVMF_CODE.fd \
>>  -device ioh3420,bus=pcie.0,id=pcie.1,port=1,chassis=1 \
>>  -device ioh3420,bus=pcie.0,id=pcie.2,port=2,chassis=2 \
>>  -device virtio-scsi-pci,disable-modern=off,disable-legacy=on,bus=pcie.1,addr=00.0,id=scsi0
>> \
>>  -drive file=/home/drjones/.local/libvirt/images/fedora.qcow2,format
>> =qcow2,if=none,id=drive-scsi0-0-0-0 \
>>  -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-sc
>> si0-0-0-0,id=scsi0-0-0-0,bootindex=1 \
>>  -netdev user,id=hostnet0 \
>>  -device virtio-net-pci,disable-modern=off,disable-legacy=on,bus=pcie
>> .2,addr=00.0,netdev=hostnet0,id=net0
>>
>> I prefer always using virtio-scsi for the disk, but a similar command
>> line can be used for a virtio-blk-pci disk.
>>
>> OK great! Because in Openstack Nova ,AArch64 need to realize the hotplug
> only with
> the virtio bus, so investigate the virtio-pci. :-)
> Thanks again!
>
>
>> Thanks,
>> drew
>>
>
>


* Re: [Qemu-devel] Help: Does Qemu support virtio-pci for net-device and disk device?
  2016-08-18 12:43     ` Kevin Zhao
@ 2016-08-18 13:51       ` Andrea Bolognani
  2016-08-24  1:52         ` Kevin Zhao
  2016-08-18 21:26       ` Laine Stump
  1 sibling, 1 reply; 18+ messages in thread
From: Andrea Bolognani @ 2016-08-18 13:51 UTC (permalink / raw)
  To: Kevin Zhao, Laine Stump
  Cc: QEMU Developers, Andrew Jones, qemu-arm, Thomas Hanson,
	Peter Maydell, Gema Gomez-Solano, Marcel Apfelbaum

On Thu, 2016-08-18 at 20:43 +0800, Kevin Zhao wrote:
> What's the minimum version of  Qemu that support virito-1.0?
> Does Qemu 2.6 works? 

2.6 definitely has virtio 1.0 support; however, libvirt does
not yet allow you to control whether a device uses 0.9, 1.0,
or both. The default for 2.6 should be both, IIRC.

> Now I will manually add the slots and bus to pcie. Because
> I am not familiar with it,  if it convenient, could you give
> me an available xml file which PCIE disk and PCIE
> net device can work for machine virt ?

The XML you're looking for is at the end of this message.

Note that a Fedora 24 guest configured this way will not
boot at all if the machine type is virt-2.6; on the other
hand, an identically-configured RHEL 7.3 guest will boot
even with virt-2.6, but both the disk and the network
adapter will be legacy PCI instead of PCIe.


<domain type='kvm'>
  <name>abologna-f24</name>
  <uuid>f6d0428b-a034-4c4e-8ef2-f12f6aa9cab0</uuid>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
  <vcpu placement='static'>4</vcpu>
  <os>
    <type arch='aarch64' machine='virt-2.7'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/AAVMF/AAVMF_CODE.fd</loader>
    <nvram>/var/lib/libvirt/qemu/nvram/abologna-f24_VARS.fd</nvram>
    <boot dev='hd'/>
  </os>
  <features>
    <gic version='2'/>
  </features>
  <cpu mode='host-passthrough'/>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/libexec/abologna-qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/abologna-f24.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </disk>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='ioh3420'/>
      <target chassis='1' port='0x8'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='ioh3420'/>
      <target chassis='2' port='0x10'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='virtio-mmio'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:10:07:41'/>
      <source network='default'/>
      <model type='virtio'/>
      <rom bar='off'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <source mode='bind'/>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
  </devices>
</domain>

-- 
Andrea Bolognani / Red Hat / Virtualization


* Re: [Qemu-devel] Help: Does Qemu support virtio-pci for net-device and disk device?
  2016-08-18  7:41     ` Andrew Jones
@ 2016-08-18 21:11       ` Laine Stump
  0 siblings, 0 replies; 18+ messages in thread
From: Laine Stump @ 2016-08-18 21:11 UTC (permalink / raw)
  To: QEMU Developers, qemu-arm
  Cc: Andrew Jones, Peter Maydell, Marcel Apfelbaum, Kevin Zhao,
	Gema Gomez-Solano, Andrea Bolognani, Thomas Hanson

On 08/18/2016 03:41 AM, Andrew Jones wrote:
> On Wed, Aug 17, 2016 at 01:00:05PM -0400, Laine Stump wrote:
>> On 08/17/2016 12:13 PM, Andrew Jones wrote:
>>> On Wed, Aug 17, 2016 at 08:08:11PM +0800, Kevin Zhao wrote:
>>>> Hi all,
>>>>       Now I'm investigating net device hot plug and disk hotplug for
>>>> AArch64. For virtio , the default address is virtio-mmio. After Libvirt
>>>> 1.3.5, user can explicitly specify the address-type to pci and so libvirt
>>>> will pass the virtio-pci parameters to the Qemu.
>>>>       Both my host and guest OS is Debian8, and Qemu version is 2.6.0.
>>>> Libvirt version is 1.3.5.
>>>>       For net-device, I change the address-type to pci, and libvirt pass the
>>>> command below:
>>>>       -device
>>>> virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:0d:25:25,bus=pci.2,addr=0x1
>>>>
>>>>       After booting, the eth0 device disappear(eth0 occur when the address
>>>> is virtio-mmio),
>>>> but I can find another net-device enp2s1, also it can't work for dhcp:
>>>> Running lspci: 02:01.0 Ethernet controller: Red Hat, Inc Virtio network
>>>> device
>>>> I'm not sure whether it worked.
>>>>
>>>>       For disk device,* when I change the address-type to pci, the whole
>>>> qemu command is :*
>>>> https://paste.fedoraproject.org/409553/,  but the VM can not boot
>>>> successfully. Does Qemu not support device disk of virtio-pci in AArch64
>>>> just as it in X86_64?
>>>>       Thanks~Since I am not very familiar with Qemu, really looking forward
>>>> to your response.
>>>>
>>>> Best Regards,
>>>> Kevin Zhao
>>> libvirt 1.3.5 is a bit old. Later versions no longer unconditionally add
>>> the i82801b11 bridge, which was necessary to use PCI devices with the PCIe
>>> host bridge mach-virt has. IMO, libvirt and qemu still have a long way to
>>> go in order to configure a base/standard mach-virt PCIe machine.
>>
>> Well, you can do it now, but you have to manually assign the PCI addresses
>> of devices (and if you want hotplug you need to live with Intel/TI-specific
>> PCIe controllers).
>>
>>
>>>
>>> 1) If we want to support both PCIe devices and PCI, then things are messy.
>>>     Currently we propose dropping PCI support. mach-virt pretty much
>>>     exclusively uses virtio, which can be set to PCIe mode (virtio-1.0)
>>
>> I have a libvirt patch just about ACKed for pushing upstream that will
>> automatically assign virtio-pci devices to a PCIe slot (if the qemu binary
>> supports virtio-1.0):
>>
>> https://www.redhat.com/archives/libvir-list/2016-August/msg00852.html
>>
>> Separate patches do the same for the e1000e emulated network device (which
>> you probably don't care about) and the nec-usb-xhci (USB3) controller (more
>> useful):
>>
>> https://www.redhat.com/archives/libvir-list/2016-August/msg00732.html
>>
>
> Thanks for the update Laine. This sounds great to me. With those patches
> we can switch from virtio-mmio to virtio-pci easily, even if we're still
> missing hotplug a bit longer. What limit do we have for the number of
> devices, when we don't have any switches? I think I experimented once and
> found it to be 7.

Theoretically you should be able to put something in each slot of 
pcie-root, and there are 31 slots (but slot 0x1f is always used by 
integrated devices, slot 1 is usually used by video, and slot 0x1d is 
usually used by a USB controller). Anyway, you should be able to get a 
lot more than 7 devices. One potential problem is that if you add a PCI 
controller with a device plugged into it that requests I/O port space, 
that space will get chewed up very quickly; that's not an issue if you 
are connecting endpoints directly to pcie-root, though.


>
>> Once these are in place, the only type of device of any consequence that I
>> can see still having no PCIe alternative is audio. Even though only the
>> virgl video device is PCIe, libvirt has always assigned the primary video to
>> slot 1 on pcie-root anyway; although you shouldn't put a legacy PCI device
>> on a pcie-root-port or pcie-switch-downstream-port, it is acceptable to plug
>> it directly into pcie-root (as long as you know you won't need to hotplug
>> it).
>>
>>> 2) root complex ports, switches (upstream/downstream ports) are currently
>>>     based on Intel parts. Marcel is thinking about creating generic models.
>>
>> I say this every time it comes up, so just to be consistent: +1 :-)
>>
>>> 3) libvirt needs to learn how to plug everything together, in proper PCIe
>>>     fashion, leaving holes for hotplug.
>>
>> See above about virtio, although that doesn't cover the whole story. The
>> other part (which I'm working on right now) is that libvirt needs to
>> automatically add pcie-root-port, pcie-switch-upstream-port, and
>> pcie-switch-downstream-port devices as necessary. With the patches I
>> mentioned above, you still have to manually add enough pcie-*-port
>> controllers to the config, and then libvirt will plug the PCIe devices into
>> the right place. This is simple enough to do, but it does require
>> intervention.
>
> OK, so we want this to support hotplug and eventually chain switches,
> bumping our device limit up higher and higher. To what? I'm not sure,
> I guess we're still limited by address space.

As long as the endpoint devices don't require IO port space, the 
BIOS/firmware shouldn't try to reserve it, and the limit will be "more 
than you would ever need".

>
>>
>> As far as leaving holes for hotplug, there's actually still a bit of an open
>> question there - with machinetypes that use only legacy-PCI, *all* slots are
>> hotpluggable, and they're added 31 at a time, so there was never any
>> question about which slots were hotpluggable, and it would be very rare to
>> end up with a configuration that had 0 free slots available for hotplug
>> (actually libvirt would always make sure there was at least one, but in
>> practice there would be many more open slots). With PCIe-capable
>> machinetypes that is changed, since the root complex (pcie-root) doesn't
>> support hotplug, and new slots are added 1 at a time (pcie-*-port) rather
>> than 31 at a time. This means you have to really go out of your way if you
>> want open slots for hotplug (and even if you want devices in the machine at
>> boot time to be hot-unpluggable).
>>
>> I'm still not sure just how far we need to go in this regard.  We've already
>> decided that, unless manually set to an address on pcie-root by the
>> user/management application, all PCI devices will be auto-assigned to a slot
>> that supports hotplug. What I'm not sure about is whether we should always
>> auto-add an extra pcie-*-port to be sure a device can be hotplugged, or if
>> we should admit that 1 available slot isn't good enough for all situations,
>> so we should instead just leave it up to the user/management to manually add
>> extra ports if they think they'll want to hotplug something later.
>
> Hmm... Maybe the tools can make this easier by offering an option to
> provide N extra ports.

I've thought about this, and there have even been emails about it, but I 
don't know if it could ever be accepted into libvirt. My idea was to 
have an attribute that says "always maintain X hotpluggable slots when 
coldplugging new devices". I have a feeling that will get as warm of a 
reception as my proposed "hotpluggable='no'" attribute for devices (i.e. 
not warm at all :-))

The problem is that if you just tell the user to put in extra 
pcie-*-port controllers to allow for hotplug, those will eventually be 
eaten up, and then the next time you start the guest you'll again have 
no open hotpluggable slots (even though restarting would have given you 
the opportunity to automatically add new *-port controllers).
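A hedged sketch of what that manual pre-provisioning can look like in libvirt domain XML: the user coldplugs spare pcie-root-port controllers, each providing exactly one hotpluggable slot. All index, chassis, and port values below are illustrative placeholders, not taken from the thread.

```xml
<!-- Hypothetical spare hotplug ports; index/chassis/port values are
     placeholders, and each pcie-root-port has a single slot. -->
<controller type='pci' index='3' model='pcie-root-port'>
  <target chassis='3' port='0x18'/>
</controller>
<controller type='pci' index='4' model='pcie-root-port'>
  <target chassis='4' port='0x20'/>
</controller>
```

As the discussion notes, once devices are hotplugged into these ports they are consumed, so the guest would need to be restarted (or more ports coldplugged) to regain free slots.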


>
> Hmm2... I think I agree that we don't need to worry too much about
> providing free ports for hotplug (maybe just one for fun). With
> virtio-scsi we can plug disks already.


until you run out of open units/buses/whatever they're called in 
virtio-scsi (I'm too lazy to look it up right now). And then you have 
the same problem.

We can never really solve the problem, we can only *delay* it, and the 
more we try to delay it the more convoluted the code gets. And of course 
the same number that is "not enough" for some users is "too much" for 
others. That's why I've been wondering if maybe we should just throw up 
our hands and punt (i.e. unabashedly document "if you want slots 
available for hotplug, you'll need to add them"). Of course *even that* 
doesn't work unless you add the hotpluggable slots in a separate change 
from any endpoint devices; otherwise the new endpoint devices would just 
be immediately assigned to the new slots you added "for future hotplug" 
:-/


> If we want to provide multiple
> virtio-net devices for the price of one port, we can enable multifunction.


Except that you can't hotplug/unplug the functions independent of each 
other - all the functions in one slot must be plugged/unplugged together.
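In libvirt XML terms, a hedged sketch of what such multifunction placement looks like (all device details and addresses below are illustrative placeholders; function 0 carries `multifunction='on'`, and, per the point above, every function in the slot plugs and unplugs as one unit):

```xml
<!-- Two NICs sharing one slot as functions 0x0 and 0x1; values are
     illustrative, not from the thread. -->
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x01' slot='0x00'
           function='0x0' multifunction='on'/>
</interface>
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x01' slot='0x00'
           function='0x1'/>
</interface>
```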

> And, iirc, there's work to get ARI functioning, allowing multifunction to
> go nuts (assuming I understand its purpose correctly)

I keep forgetting all the acronyms. Is ARI the one that allows you (on 
controllers that only have a single slot, e.g. pcie-root-port) to 
interpret the byte that's normally a 5 bit slot# + 3 bit function# as a 
single 8 bit function#? (The idea being that you still have only a 
single slot, but that slot has 256 functions). There again, you still 
have the "hotplug as a single unit" problem, only the effects are 
multiplied because there are so many devices.
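The reinterpretation described above can be sketched numerically (a hypothetical illustration of the devfn byte layout, not QEMU code):

```python
def decode_devfn(devfn: int, ari: bool = False):
    """Decode a PCI devfn byte.

    Conventionally the byte is a 5-bit slot number plus a 3-bit
    function number; with ARI the whole byte is read as a single
    8-bit function number on an implied slot 0.
    """
    if ari:
        return 0, devfn             # one slot, up to 256 functions
    return devfn >> 3, devfn & 0x7  # slot 0-31, function 0-7

# Conventional: devfn 0x0A -> slot 1, function 2
assert decode_devfn(0x0A) == (1, 2)
# ARI: the same byte is just function 10 on slot 0
assert decode_devfn(0x0A, ari=True) == (0, 10)
```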


>
> So maybe the default config just needs 3 ports?
>  1 virtio-scsi with as many disks as requested
>  1 virtio-net with as many functions as nics are requested
>  1 extra port

None of those are in any default config created by libvirt. In a libvirt 
config with no specified devices (and no specified "anti-devices" such 
as "<controller type='usb' model='none'/>"), there is just the 
integrated AHCI (SATA) controller, a USB controller (usually), and a 
<balloon> device (not sure why that's considered so important that it's 
in there by default, but it is).

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [Qemu-devel] Help: Does Qemu support virtio-pci for net-device and disk device?
  2016-08-18 12:10     ` Marcel Apfelbaum
@ 2016-08-18 21:20       ` Laine Stump
  0 siblings, 0 replies; 18+ messages in thread
From: Laine Stump @ 2016-08-18 21:20 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Marcel Apfelbaum, Peter Maydell, Andrew Jones, Kevin Zhao,
	Gema Gomez-Solano, Marcel Apfelbaum, Andrea Bolognani,
	Thomas Hanson, qemu-arm

On 08/18/2016 08:10 AM, Marcel Apfelbaum wrote:
> On 08/17/2016 08:00 PM, Laine Stump wrote:

>
>> What I'm not sure about is whether we should always auto-add an extra
>> pcie-*-port to be sure a device can be hotplugged, or if we should admit
>> that 1 available slot isn't good enough for all situations, so we should
>> instead just leave it up to the user/management to manually add extra
>> ports if they think they'll want to hotplug something later.
>
> Why not? Leaving 1 or 2 PCIe ports should be enough. On each port you
> can hotplug a switch with several downstream ports. You can continue
> nesting the switches up to a depth of 6-7, I think.

When did qemu start supporting hotplug of pcie switch ports? My 
understanding is that in real hardware the only way this is supported is 
to plug in the entire "upstream+downstream+downstream+..." set as a 
single unit, since there is no mechanism for the guest kernel to notify the 
upstream port that a new downstream port has been attached to it (or 
something like that; I'm vague on the details). From the other end, qemu 
can only hotplug a single PCI device at a time, so by the time we get to 
the point of plugging in the downstream ports, the upstream port is 
already cemented in place by the guest kernel.

I think that would be a really desirable feature though. Maybe qemu 
could queue up any downstream-ports which are pointing to an 
as-yet-nonexistent upstream-port id, then when the upstream-port with 
the proper id is finally attached, it could send the right magic to the 
guest (similar to the way it allows hotplugging all non-0 functions 
first, then takes action when function 0 is hotplugged).

If that was available, then yes, it would make perfect sense for libvirt 
to simply always make sure at least one empty pcie-*-port was available. 
If you have plans of doing something to support hotplugging a 
pcie-switch-* collection, then that's what I'll do.


* Re: [Qemu-devel] Help: Does Qemu support virtio-pci for net-device and disk device?
  2016-08-18 12:43     ` Kevin Zhao
  2016-08-18 13:51       ` Andrea Bolognani
@ 2016-08-18 21:26       ` Laine Stump
  1 sibling, 0 replies; 18+ messages in thread
From: Laine Stump @ 2016-08-18 21:26 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Kevin Zhao, Marcel Apfelbaum, Peter Maydell, Andrew Jones,
	Gema Gomez-Solano, Andrea Bolognani, Thomas Hanson, qemu-arm

On 08/18/2016 08:43 AM, Kevin Zhao wrote:
> Hi Laine,
>     Thanks :-) I also have a few questions below.
>
> On 18 August 2016 at 01:00, Laine Stump <laine@redhat.com> wrote:
>
>> On 08/17/2016 12:13 PM, Andrew Jones wrote:
>>
>>> On Wed, Aug 17, 2016 at 08:08:11PM +0800, Kevin Zhao wrote:
>>>
>>>> Hi all,
>>>>       Now I'm investigating net device hotplug and disk hotplug for
>>>> AArch64. For virtio, the default address type is virtio-mmio. Since
>>>> libvirt 1.3.5, users can explicitly set the address type to pci, and
>>>> libvirt will then pass the virtio-pci parameters to QEMU.
>>>>       Both my host and guest OS are Debian 8; the QEMU version is 2.6.0
>>>> and the libvirt version is 1.3.5.
>>>>       For the net device, I change the address type to pci, and libvirt
>>>> passes the command below:
>>>>       -device
>>>> virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:0d:25:25,bus=pci.2,addr=0x1
>>>>
>>>>       After booting, the eth0 device disappears (eth0 appears when the
>>>> address is virtio-mmio),
>>>> but I can find another net device, enp2s1, which cannot get a DHCP lease.
>>>> Running lspci shows: 02:01.0 Ethernet controller: Red Hat, Inc Virtio
>>>> network device
>>>> I'm not sure whether it worked.
>>>>
>>>>       For the disk device, when I change the address type to pci, the
>>>> whole qemu command is:
>>>> https://paste.fedoraproject.org/409553/, but the VM cannot boot
>>>> successfully. Does QEMU not support virtio-pci disk devices on AArch64
>>>> as it does on x86_64?
>>>>       Thanks! Since I am not very familiar with QEMU, I am really looking
>>>> forward to your response.
>>>>
>>>> Best Regards,
>>>> Kevin Zhao
>>>>
>>> libvirt 1.3.5 is a bit old. Later versions no longer unconditionally add
>>> the i82801b11 bridge, which was necessary to use PCI devices with the PCIe
>>> host bridge mach-virt has. IMO, libvirt and qemu still have a long way to
>>> go in order to configure a base/standard mach-virt PCIe machine.
>>>
>>
>> Well, you can do it now, but you have to manually assign the PCI addresses
>> of devices (and if you want hotplug you need to live with Intel/TI-specific
>> PCIe controllers).
>
> OK. It seems that QEMU will drop PCI for mach-virt and turn to PCIe
> in the future.

This isn't a qemu issue, but a libvirt issue.

> Do I need to do more for the Intel/TI-specific PCIe controllers? What do I
> need to add in the guest XML, or anything more?

As soon as my first set of patches is pushed, what you'll need to do is 
to add:

     <controller type='pci' model='pcie-root-port'/>

for each virtio device.

As soon as I've figured out a useful algorithm for adding the PCIe 
controllers automatically, you won't need to do anything at all. Keep 
in mind that any existing configuration will maintain its original 
config. If you want an existing configuration to be switched to the 
new recommended (PCIe) bus topology, you will need to replace every 
instance of:

     <address type='pci' domain='0x0000' ......../>

with a plain:

     <address type='pci'/>

This will force libvirt to reassign PCI addresses for the devices.
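One way to script that replacement over a dumped domain XML (a hedged sketch; the regex assumes each address element is self-closing on one line, and the sample string is only a placeholder for real `virsh dumpxml` output):

```python
import re

# Placeholder for a fragment of `virsh dumpxml` output.
xml = "<address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>"

# Drop the fixed attributes so libvirt reassigns PCI addresses on redefine.
stripped = re.sub(r"<address type='pci'[^/>]*/>", "<address type='pci'/>", xml)
print(stripped)  # -> <address type='pci'/>
```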

>
>>
>>
>>> 1) If we want to support both PCIe devices and PCI, then things are messy.
>>>     Currently we propose dropping PCI support. mach-virt pretty much
>>>     exclusively uses virtio, which can be set to PCIe mode (virtio-1.0)
>>>
>>
>> I have a libvirt patch just about ACKed for pushing upstream that will
>> automatically assign virtio-pci devices to a PCIe slot (if the qemu binary
>> supports virtio-1.0):
>>
>> https://www.redhat.com/archives/libvir-list/2016-August/msg00852.html
>>
>
> What's the minimum version of QEMU that supports virtio-1.0? Does QEMU 2.6
> work?

qemu as old as 2.4.0 has the "--disable-legacy" option for virtio 
devices. I don't know if it was actually working properly back that far.


> Also, as I see your patch for automatically assigning virtio-pci devices
> to PCIe slots, I think things will go much more easily once it is merged.
> For now I will manually add the slots and buses for PCIe. Because I am not
> familiar with it, could you, if convenient, give me a working XML
> file in which a PCIe disk and a PCIe
> net device work for machine virt?

I think Andrea sent an example.


* Re: [Qemu-devel] Help: Does Qemu support virtio-pci for net-device and disk device?
  2016-08-18  6:38     ` Andrew Jones
@ 2016-08-19 15:43       ` Andrea Bolognani
  2016-08-19 17:51         ` Laine Stump
  0 siblings, 1 reply; 18+ messages in thread
From: Andrea Bolognani @ 2016-08-19 15:43 UTC (permalink / raw)
  To: Andrew Jones
  Cc: Kevin Zhao, Peter Maydell, Marcel Apfelbaum, Gema Gomez-Solano,
	QEMU Developers, Thomas Hanson, qemu-arm, Laine Stump

On Thu, 2016-08-18 at 08:38 +0200, Andrew Jones wrote:
> > > Finally, FWIW, with a guest kernel of 4.6.4-301.fc24.aarch64, the
> > > following qemu command line works for me
> > > (notice the use of PCIe), and my network interface gets labeled enp0s1.
> > >  
> > > $QEMU -machine virt-2.6,accel=kvm -cpu host \
> > >   -m 1024 -smp 1 -nographic \
> > >   -bios /usr/share/AAVMF/AAVMF_CODE.fd \
> > >   -device ioh3420,bus=pcie.0,id=pcie.1,port=1,chassis=1 \
> > >   -device ioh3420,bus=pcie.0,id=pcie.2,port=2,chassis=2 \
> > >   -device virtio-scsi-pci,disable-modern=off,disable-legacy=on,bus=pcie.1,addr=00.0,id=scsi0 \
> > >   -drive file=/home/drjones/.local/libvirt/images/fedora.qcow2,format=qcow2,if=none,id=drive-scsi0-0-0-0 \
> > >   -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 \
> > >   -netdev user,id=hostnet0 \
> > >   -device virtio-net-pci,disable-modern=off,disable-legacy=on,bus=pcie.2,addr=00.0,netdev=hostnet0,id=net0
> > >  
> > > I prefer always using virtio-scsi for the disk, but a similar command
> > > line can be used for a virtio-blk-pci disk.
> > 
> > Does the same command line work if you don't specify any of
> > the disable-* options?
> > 
> > I'm asking because I tried running a Fedora 24 guest through
> > libvirt, which doesn't support those options yet, and I get
> > 
> >   virtio_blk virtio2: virtio: device uses modern interface but
> >                               does not have VIRTIO_F_VERSION_1
> >   virtio_blk: probe of virtio2 failed with error -22
> 
> Doesn't work for me either. I can only boot with disable-modern=off,
> disable-legacy=on (at least when building my config the way I try to
> build it...) I presume that's a guest kernel issue.

I tried Fedora 24 and Debian testing, and for both of them
the result is the same: I can only boot the guest if I'm
setting up a legacy-free PCIe topology and use virt-2.7 to
obtain virtio-1.0 devices (see below); for every other
permutation of

  { PCI topology, PCIe topology } x { virt-2.6, virt-2.7 }

the guest doesn't boot at all.

On the other hand, a RHEL 7.3 guest was able to boot *every
single time*, even though the result was in some cases
quite questionable (eg. legacy PCI devices plugged into
ioh3420 ports).

> > Isn't the default for 2.6 disable-modern=off,
> > disable-legacy=off? Or was that 2.7? I tried both anyway ;)
> 
> Dunno. With the command line getting longer all the time, I just
> have a script that generates one that works for me, and haven't
> worried much about the defaults...

So I thought the default for 2.6 was supposed to be

  disable-modern=off,disable-legacy=off  [0.9+1.0]

but it turns out it's actually

  disable-modern=on,disable-legacy=off       [0.9]

whereas the default for 2.7 is

  disable-modern=off,disable-legacy=on       [1.0]

Is the idea that there would be a QEMU release with both
0.9 and 1.0 enabled by default something that I just
imagined? Or did the plan just change at some point?

-- 
Andrea Bolognani / Red Hat / Virtualization


* Re: [Qemu-devel] Help: Does Qemu support virtio-pci for net-device and disk device?
  2016-08-19 15:43       ` Andrea Bolognani
@ 2016-08-19 17:51         ` Laine Stump
  0 siblings, 0 replies; 18+ messages in thread
From: Laine Stump @ 2016-08-19 17:51 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Andrea Bolognani, Andrew Jones, Kevin Zhao, Peter Maydell,
	Marcel Apfelbaum, Gema Gomez-Solano, Thomas Hanson, qemu-arm

On 08/19/2016 11:43 AM, Andrea Bolognani wrote:
> On Thu, 2016-08-18 at 08:38 +0200, Andrew Jones wrote:
>>>> Finally, FWIW, with a guest kernel of 4.6.4-301.fc24.aarch64, the
>>>> following qemu command line works for me
>>>> (notice the use of PCIe), and my network interface gets labeled enp0s1.
>>>>    
>>>> $QEMU -machine virt-2.6,accel=kvm -cpu host \
>>>>     -m 1024 -smp 1 -nographic \
>>>>     -bios /usr/share/AAVMF/AAVMF_CODE.fd \
>>>>     -device ioh3420,bus=pcie.0,id=pcie.1,port=1,chassis=1 \
>>>>     -device ioh3420,bus=pcie.0,id=pcie.2,port=2,chassis=2 \
>>>>     -device virtio-scsi-pci,disable-modern=off,disable-legacy=on,bus=pcie.1,addr=00.0,id=scsi0 \
>>>>     -drive file=/home/drjones/.local/libvirt/images/fedora.qcow2,format=qcow2,if=none,id=drive-scsi0-0-0-0 \
>>>>     -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 \
>>>>     -netdev user,id=hostnet0 \
>>>>     -device virtio-net-pci,disable-modern=off,disable-legacy=on,bus=pcie.2,addr=00.0,netdev=hostnet0,id=net0
>>>>    
>>>> I prefer always using virtio-scsi for the disk, but a similar command
>>>> line can be used for a virtio-blk-pci disk.
>>>   
>>> Does the same command line work if you don't specify any of
>>> the disable-* options?
>>>   
>>> I'm asking because I tried running a Fedora 24 guest through
>>> libvirt, which doesn't support those options yet, and I get
>>>   
>>>     virtio_blk virtio2: virtio: device uses modern interface but
>>>                                 does not have VIRTIO_F_VERSION_1
>>>     virtio_blk: probe of virtio2 failed with error -22
>>   
>> Doesn't work for me either. I can only boot with disable-modern=off,
>> disable-legacy=on (at least when building my config the way I try to
>> build it...) I presume that's a guest kernel issue.
> I tried Fedora 24 and Debian testing, and for both of them
> the result is the same: I can only boot the guest if I'm
> setting up a legacy-free PCIe topology and use virt-2.7 to
> obtain virtio-1.0 devices (see below); for every other
> permutation of
>
>    { PCI topology, PCIe topology } x { virt-2.6, virt-2.7 }
>
> the guest doesn't boot at all.
>
> On the other hand, a RHEL 7.3 guest was able to boot *every
> single time*, even though the result was in some cases
> quite questionable (eg. legacy PCI devices plugged into
> ioh3420 ports).
>
>>> Isn't the default for 2.6 disable-modern=off,
>>> disable-legacy=off? Or was that 2.7? I tried both anyway ;)
>>   
>> Dunno. With the command line getting longer all the time, I just
>> have a script that generates one that works for me, and haven't
>> worried much about the defaults...
> So I thought the default for 2.6 was supposed to be
>
>    disable-modern=off,disable-legacy=off  [0.9+1.0]
>
> but it turns out it's actually
>
>    disable-modern=on,disable-legacy=off       [0.9]

Huh. I had thought it switched to disable-modern=off with 2.6 as well.

>
> whereas the default for 2.7 is
>
>    disable-modern=off,disable-legacy=on       [1.0]
>
> Is the idea that there would be a QEMU release with both
> 0.9 and 1.0 enabled by default something that I just
> imagined? Or did the plan just change at some point?

If I understand the patches correctly, the default for 2.7 will be 
different depending on the type of slot:

pcie-root -> disable-modern=off,disable-legacy=off
legacy PCI -> disable-modern=off,disable-legacy=off
pcie-*-port -> disable-modern=off,disable-legacy=on

The idea is to eliminate the need for the pcie-*-port to reserve IO port 
space (this isn't an issue on pcie-root, so the default is different).


* Re: [Qemu-devel] Help: Does Qemu support virtio-pci for net-device and disk device?
  2016-08-18 13:51       ` Andrea Bolognani
@ 2016-08-24  1:52         ` Kevin Zhao
  2016-09-08  6:50           ` Kevin Zhao
  0 siblings, 1 reply; 18+ messages in thread
From: Kevin Zhao @ 2016-08-24  1:52 UTC (permalink / raw)
  To: Andrea Bolognani
  Cc: Laine Stump, QEMU Developers, Andrew Jones, qemu-arm,
	Thomas Hanson, Peter Maydell, Gema Gomez-Solano,
	Marcel Apfelbaum

Great, thanks for your valuable information!
I will try with the XML, and I will post any updates here.

On 18 August 2016 at 21:51, Andrea Bolognani <abologna@redhat.com> wrote:

> On Thu, 2016-08-18 at 20:43 +0800, Kevin Zhao wrote:
> > What's the minimum version of  Qemu that support virito-1.0?
> > Does Qemu 2.6 works?
>
> 2.6 definitely has virtio 1.0 support, however libvirt does
> not yet allow you to control whether a device uses 0.9, 1.0
> or both. The default for 2.6 should be both IIRC.
>
> > Now I will manually add the slots and bus to pcie. Because
> > I am not familiar with it,  if it convenient, could you give
> > me an available xml file which PCIE disk and PCIE
> > net device can work for machine virt ?
>
> The XML you're looking for is at the end of this message.
>
> Note that a Fedora 24 guest configured this way will not
> boot at all if the machine type is virt-2.6; on the other
> hand, an identically-configured RHEL 7.3 guest will boot
> even with virt-2.6, but both the disk and the network
> adapter will be legacy PCI instead of PCIe.
>
>
> <domain type='kvm'>
>   <name>abologna-f24</name>
>   <uuid>f6d0428b-a034-4c4e-8ef2-f12f6aa9cab0</uuid>
>   <memory unit='KiB'>2097152</memory>
>   <currentMemory unit='KiB'>2097152</currentMemory>
>   <vcpu placement='static'>4</vcpu>
>   <os>
>     <type arch='aarch64' machine='virt-2.7'>hvm</type>
>     <loader readonly='yes' type='pflash'>/usr/share/AAVMF/AAVMF_CODE.fd</loader>
>     <nvram>/var/lib/libvirt/qemu/nvram/abologna-f24_VARS.fd</nvram>
>     <boot dev='hd'/>
>   </os>
>   <features>
>     <gic version='2'/>
>   </features>
>   <cpu mode='host-passthrough'/>
>   <clock offset='utc'/>
>   <on_poweroff>destroy</on_poweroff>
>   <on_reboot>restart</on_reboot>
>   <on_crash>restart</on_crash>
>   <devices>
>     <emulator>/usr/libexec/abologna-qemu-kvm</emulator>
>     <disk type='file' device='disk'>
>       <driver name='qemu' type='qcow2'/>
>       <source file='/var/lib/libvirt/images/abologna-f24.qcow2'/>
>       <target dev='vda' bus='virtio'/>
>       <address type='pci' domain='0x0000' bus='0x01' slot='0x00'
> function='0x0'/>
>     </disk>
>     <controller type='pci' index='0' model='pcie-root'/>
>     <controller type='pci' index='1' model='pcie-root-port'>
>       <model name='ioh3420'/>
>       <target chassis='1' port='0x8'/>
>       <address type='pci' domain='0x0000' bus='0x00' slot='0x01'
> function='0x0'/>
>     </controller>
>     <controller type='pci' index='2' model='pcie-root-port'>
>       <model name='ioh3420'/>
>       <target chassis='2' port='0x10'/>
>       <address type='pci' domain='0x0000' bus='0x00' slot='0x02'
> function='0x0'/>
>     </controller>
>     <controller type='virtio-serial' index='0'>
>       <address type='virtio-mmio'/>
>     </controller>
>     <interface type='network'>
>       <mac address='52:54:00:10:07:41'/>
>       <source network='default'/>
>       <model type='virtio'/>
>       <rom bar='off'/>
>       <address type='pci' domain='0x0000' bus='0x02' slot='0x00'
> function='0x0'/>
>     </interface>
>     <serial type='pty'>
>       <target port='0'/>
>     </serial>
>     <console type='pty'>
>       <target type='serial' port='0'/>
>     </console>
>     <channel type='unix'>
>       <source mode='bind'/>
>       <target type='virtio' name='org.qemu.guest_agent.0'/>
>       <address type='virtio-serial' controller='0' bus='0' port='1'/>
>     </channel>
>   </devices>
> </domain>
>
> --
> Andrea Bolognani / Red Hat / Virtualization
>


* Re: [Qemu-devel] Help: Does Qemu support virtio-pci for net-device and disk device?
  2016-08-24  1:52         ` Kevin Zhao
@ 2016-09-08  6:50           ` Kevin Zhao
  0 siblings, 0 replies; 18+ messages in thread
From: Kevin Zhao @ 2016-09-08  6:50 UTC (permalink / raw)
  To: Andrea Bolognani
  Cc: Laine Stump, QEMU Developers, Andrew Jones, qemu-arm,
	Thomas Hanson, Peter Maydell, Gema Gomez-Solano,
	Marcel Apfelbaum

Hi all,
   As discussed before, upstream is working on PCIe instead of PCI on
AArch64.
   Thanks for your efforts on this for AArch64 :-)
   If convenient, could you tell me when this task is planned to be
finished, and which QEMU version will support these functions in the future?
   Many thanks!

Best Regards,
Kevin

On 24 August 2016 at 09:52, Kevin Zhao <kevin.zhao@linaro.org> wrote:

> Great ~ Thanks for your valuable information~
> I will try with the xml and any update I will post here.
>
> On 18 August 2016 at 21:51, Andrea Bolognani <abologna@redhat.com> wrote:
>
>> On Thu, 2016-08-18 at 20:43 +0800, Kevin Zhao wrote:
>> > What's the minimum version of  Qemu that support virito-1.0?
>> > Does Qemu 2.6 works?
>>
>> 2.6 definitely has virtio 1.0 support, however libvirt does
>> not yet allow you to control whether a device uses 0.9, 1.0
>> or both. The default for 2.6 should be both IIRC.
>>
>> > Now I will manually add the slots and bus to pcie. Because
>> > I am not familiar with it,  if it convenient, could you give
>> > me an available xml file which PCIE disk and PCIE
>> > net device can work for machine virt ?
>>
>> The XML you're looking for is at the end of this message.
>>
>> Note that a Fedora 24 guest configured this way will not
>> boot at all if the machine type is virt-2.6; on the other
>> hand, an identically-configured RHEL 7.3 guest will boot
>> even with virt-2.6, but both the disk and the network
>> adapter will be legacy PCI instead of PCIe.
>>
>>
>> <domain type='kvm'>
>>   <name>abologna-f24</name>
>>   <uuid>f6d0428b-a034-4c4e-8ef2-f12f6aa9cab0</uuid>
>>   <memory unit='KiB'>2097152</memory>
>>   <currentMemory unit='KiB'>2097152</currentMemory>
>>   <vcpu placement='static'>4</vcpu>
>>   <os>
>>     <type arch='aarch64' machine='virt-2.7'>hvm</type>
>>     <loader readonly='yes' type='pflash'>/usr/share/AAVMF/AAVMF_CODE.fd</loader>
>>     <nvram>/var/lib/libvirt/qemu/nvram/abologna-f24_VARS.fd</nvram>
>>     <boot dev='hd'/>
>>   </os>
>>   <features>
>>     <gic version='2'/>
>>   </features>
>>   <cpu mode='host-passthrough'/>
>>   <clock offset='utc'/>
>>   <on_poweroff>destroy</on_poweroff>
>>   <on_reboot>restart</on_reboot>
>>   <on_crash>restart</on_crash>
>>   <devices>
>>     <emulator>/usr/libexec/abologna-qemu-kvm</emulator>
>>     <disk type='file' device='disk'>
>>       <driver name='qemu' type='qcow2'/>
>>       <source file='/var/lib/libvirt/images/abologna-f24.qcow2'/>
>>       <target dev='vda' bus='virtio'/>
>>       <address type='pci' domain='0x0000' bus='0x01' slot='0x00'
>> function='0x0'/>
>>     </disk>
>>     <controller type='pci' index='0' model='pcie-root'/>
>>     <controller type='pci' index='1' model='pcie-root-port'>
>>       <model name='ioh3420'/>
>>       <target chassis='1' port='0x8'/>
>>       <address type='pci' domain='0x0000' bus='0x00' slot='0x01'
>> function='0x0'/>
>>     </controller>
>>     <controller type='pci' index='2' model='pcie-root-port'>
>>       <model name='ioh3420'/>
>>       <target chassis='2' port='0x10'/>
>>       <address type='pci' domain='0x0000' bus='0x00' slot='0x02'
>> function='0x0'/>
>>     </controller>
>>     <controller type='virtio-serial' index='0'>
>>       <address type='virtio-mmio'/>
>>     </controller>
>>     <interface type='network'>
>>       <mac address='52:54:00:10:07:41'/>
>>       <source network='default'/>
>>       <model type='virtio'/>
>>       <rom bar='off'/>
>>       <address type='pci' domain='0x0000' bus='0x02' slot='0x00'
>> function='0x0'/>
>>     </interface>
>>     <serial type='pty'>
>>       <target port='0'/>
>>     </serial>
>>     <console type='pty'>
>>       <target type='serial' port='0'/>
>>     </console>
>>     <channel type='unix'>
>>       <source mode='bind'/>
>>       <target type='virtio' name='org.qemu.guest_agent.0'/>
>>       <address type='virtio-serial' controller='0' bus='0' port='1'/>
>>     </channel>
>>   </devices>
>> </domain>
>>
>> --
>> Andrea Bolognani / Red Hat / Virtualization
>>
>
>


end of thread, other threads:[~2016-09-08  6:50 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-08-17 12:08 [Qemu-devel] Help: Does Qemu support virtio-pci for net-device and disk device? Kevin Zhao
2016-08-17 16:13 ` Andrew Jones
2016-08-17 16:41   ` Andrea Bolognani
2016-08-18  6:38     ` Andrew Jones
2016-08-19 15:43       ` Andrea Bolognani
2016-08-19 17:51         ` Laine Stump
2016-08-17 17:00   ` Laine Stump
2016-08-18  7:41     ` Andrew Jones
2016-08-18 21:11       ` Laine Stump
2016-08-18 12:10     ` Marcel Apfelbaum
2016-08-18 21:20       ` Laine Stump
2016-08-18 12:43     ` Kevin Zhao
2016-08-18 13:51       ` Andrea Bolognani
2016-08-24  1:52         ` Kevin Zhao
2016-09-08  6:50           ` Kevin Zhao
2016-08-18 21:26       ` Laine Stump
2016-08-18 12:30   ` Kevin Zhao
2016-08-18 12:51     ` Kevin Zhao
