* Failure of hot plugging secondary virtio_blk into q35 Windows 2019
From: Annie.li @ 2021-11-01 14:06 UTC (permalink / raw)
To: qemu-devel; +Cc: ani, imammedo, jusual
Hello,
I've run into an issue when hot-plugging a secondary virtio_blk device
into a q35 Windows guest (2019) with upstream QEMU 6.1.0 (+1 patch). The
first disk can be hot-plugged successfully.
The QEMU options for the PCIe root ports are:
-device pcie-root-port,port=2,chassis=2,id=pciroot2,bus=pcie.0,addr=0x2,multifunction=on \
-device pcie-root-port,port=3,chassis=3,id=pciroot3,bus=pcie.0,addr=0x3,multifunction=on \
-device pcie-root-port,port=4,chassis=4,id=pciroot4,bus=pcie.0,addr=0x4,multifunction=on \
-device pcie-root-port,port=5,chassis=5,id=pciroot5,bus=pcie.0,addr=0x5,multifunction=on \
-device pcie-root-port,port=6,chassis=6,id=pciroot6,bus=pcie.0,addr=0x6,multifunction=on \
The commands to hot-plug the 1st virtio_blk disk follow; the 1st
virtio_blk sits in PCI slot 0 (PCI bus 1, device 0, function 0).
drive_add auto file=block_10.qcow2,format=qcow2,if=none,id=drive10,cache=none
device_add virtio-blk-pci,drive=drive10,id=block-disk10,bus=pciroot2
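For reference, the same hot-plug can also be driven over QMP instead of the HMP monitor. A minimal sketch of the JSON commands (`blockdev-add` is the QMP counterpart of the legacy `drive_add`; the node name and filename simply mirror the ones used above):

```python
import json

# Sketch: QMP equivalents of the HMP commands above. The node-name and
# filename are copied from the example; how you deliver these over the
# QMP socket is left out.
blockdev = {
    "execute": "blockdev-add",
    "arguments": {
        "driver": "qcow2",
        "node-name": "drive10",
        "file": {"driver": "file", "filename": "block_10.qcow2"},
    },
}
device = {
    "execute": "device_add",
    "arguments": {
        "driver": "virtio-blk-pci",
        "drive": "drive10",
        "id": "block-disk10",
        "bus": "pciroot2",
    },
}
for cmd in (blockdev, device):
    print(json.dumps(cmd))
```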
Following is the related "info mtree" output after the 1st virtio_blk
device is hot-plugged:
memory-region: pci_bridge_pci
  0000000000000000-ffffffffffffffff (prio 0, i/o): pci_bridge_pci
    00000000febff000-00000000febfffff (prio 1, i/o): virtio-blk-pci-msix
      00000000febff000-00000000febff01f (prio 0, i/o): msix-table
      00000000febff800-00000000febff807 (prio 0, i/o): msix-pba
    0000000fffffc000-0000000fffffffff (prio 1, i/o): virtio-pci
      0000000fffffc000-0000000fffffcfff (prio 0, i/o): virtio-pci-common
      0000000fffffd000-0000000fffffdfff (prio 0, i/o): virtio-pci-isr
      0000000fffffe000-0000000fffffefff (prio 0, i/o): virtio-pci-device
      0000000ffffff000-0000000fffffffff (prio 0, i/o): virtio-pci-notify

memory-region: pci_bridge_pci
  0000000000000000-ffffffffffffffff (prio 0, i/o): pci_bridge_pci

memory-region: pci_bridge_pci
  0000000000000000-ffffffffffffffff (prio 0, i/o): pci_bridge_pci

memory-region: pci_bridge_pci
  0000000000000000-ffffffffffffffff (prio 0, i/o): pci_bridge_pci

memory-region: pci_bridge_pci
  0000000000000000-ffffffffffffffff (prio 0, i/o): pci_bridge_pci
Right after the secondary virtio_blk device is hot-plugged, a yellow
mark shows on the first virtio_blk device in the Windows guest. The PCI
slot info of the 2nd virtio_blk is PCI slot 0 (PCI bus 2, device 0,
function 0). The debug log of the Windows virtio_blk driver shows that a
"ScsiStopAdapter" adapter control operation is triggered first, followed
by "StorSurpriseRemoval". From the following "info mtree" output, the
2nd virtio_blk device appears to occupy the same memory resource as the
1st virtio_blk device above. Could this be what breaks the 1st
virtio_blk device, so that the system then treats it as surprise-removed?
The commands to hot-plug the 2nd virtio_blk disk:
drive_add auto file=block_11.qcow2,format=qcow2,if=none,id=drive11,cache=none
device_add virtio-blk-pci,drive=drive11,id=block-disk11,bus=pciroot3
Following is the related "info mtree" output after the 2nd virtio_blk
device is hot-plugged:
memory-region: pci_bridge_pci
  0000000000000000-ffffffffffffffff (prio 0, i/o): pci_bridge_pci

memory-region: pci_bridge_pci
  0000000000000000-ffffffffffffffff (prio 0, i/o): pci_bridge_pci
    00000000febff000-00000000febfffff (prio 1, i/o): virtio-blk-pci-msix
      00000000febff000-00000000febff01f (prio 0, i/o): msix-table
      00000000febff800-00000000febff807 (prio 0, i/o): msix-pba
    0000000fffffc000-0000000fffffffff (prio 1, i/o): virtio-pci
      0000000fffffc000-0000000fffffcfff (prio 0, i/o): virtio-pci-common
      0000000fffffd000-0000000fffffdfff (prio 0, i/o): virtio-pci-isr
      0000000fffffe000-0000000fffffefff (prio 0, i/o): virtio-pci-device
      0000000ffffff000-0000000fffffffff (prio 0, i/o): virtio-pci-notify

memory-region: pci_bridge_pci
  0000000000000000-ffffffffffffffff (prio 0, i/o): pci_bridge_pci

memory-region: pci_bridge_pci
  0000000000000000-ffffffffffffffff (prio 0, i/o): pci_bridge_pci

memory-region: pci_bridge_pci
  0000000000000000-ffffffffffffffff (prio 0, i/o): pci_bridge_pci
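The suspected conflict can be checked mechanically: both hot-plugged disks report identical BAR windows in the two mtree dumps. A minimal sketch, with the addresses copied straight from the output above:

```python
# Minimal overlap check for the BAR windows reported by "info mtree".
# The addresses are copied from the dumps above; both hot-plugged disks
# report the same MSI-X and virtio-pci windows.

def overlaps(a, b):
    """True if two inclusive (start, end) address ranges intersect."""
    return a[0] <= b[1] and b[0] <= a[1]

disk1_msix = (0x00000000FEBFF000, 0x00000000FEBFFFFF)
disk2_msix = (0x00000000FEBFF000, 0x00000000FEBFFFFF)
disk1_virtio = (0x0000000FFFFFC000, 0x0000000FFFFFFFFF)
disk2_virtio = (0x0000000FFFFFC000, 0x0000000FFFFFFFFF)

print(overlaps(disk1_msix, disk2_msix))      # True: shared MSI-X window
print(overlaps(disk1_virtio, disk2_virtio))  # True: shared virtio-pci window
```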
Note: I've patched upstream QEMU 6.1.0 with the following patch:
https://patchwork.kernel.org/project/qemu-devel/patch/20210916132838.3469580-3-ani@anisinha.ca/
With it, the acpi-pci-hotplug memory region shows up as expected:
0000000000000cc0-0000000000000cd7 (prio 0, i/o): acpi-pci-hotplug
0000000000000cd8-0000000000000ce3 (prio 0, i/o): acpi-mem-hotplug
Thanks
Annie
* Re: Failure of hot plugging secondary virtio_blk into q35 Windows 2019
From: Annie.li @ 2021-11-08 18:53 UTC (permalink / raw)
To: qemu-devel, jusual; +Cc: ani, imammedo
Hi Julia,
Not sure if you've noticed my previous email...
After switching from PCIe native hotplug to ACPI PCI hotplug (with the
patches at
https://lists.gnu.org/archive/html/qemu-devel/2021-07/msg03306.html),
I ran into the secondary virtio_blk hot-plugging issue in a Windows q35
guest.
Now it seems a Linux q35 guest also runs into issues when a
virtio_blk/virtio_net device is hot-plugged: both devices fail to get
proper BAR memory assigned. After hot-plugging a virtio_blk device,
dmesg shows the following errors:
[ 111.131377] pci 0000:03:00.0: [1af4:1042] type 00 class 0x010000
[ 111.131815] pci 0000:03:00.0: reg 0x14: [mem 0x00000000-0x00000fff]
[ 111.132206] pci 0000:03:00.0: reg 0x20: [mem 0x00000000-0x00003fff 64bit pref]
[ 111.135050] pci 0000:03:00.0: BAR 4: no space for [mem size 0x00004000 64bit pref]
[ 111.135053] pci 0000:03:00.0: BAR 4: failed to assign [mem size 0x00004000 64bit pref]
[ 111.135055] pci 0000:03:00.0: BAR 1: no space for [mem size 0x00001000]
[ 111.135056] pci 0000:03:00.0: BAR 1: failed to assign [mem size 0x00001000]
[ 111.136332] virtio-pci 0000:03:00.0: virtio_pci: leaving for legacy driver
After hot-plugging a virtio_net device, dmesg shows the following errors:
[ 144.932161] pci 0000:04:00.0: [1af4:1041] type 00 class 0x020000
[ 144.932613] pci 0000:04:00.0: reg 0x14: [mem 0x00000000-0x00000fff]
[ 144.932999] pci 0000:04:00.0: reg 0x20: [mem 0x00000000-0x00003fff 64bit pref]
[ 144.933093] pci 0000:04:00.0: reg 0x30: [mem 0x00000000-0x0003ffff pref]
[ 144.935734] pci 0000:04:00.0: BAR 6: no space for [mem size 0x00040000 pref]
[ 144.935737] pci 0000:04:00.0: BAR 6: failed to assign [mem size 0x00040000 pref]
[ 144.935739] pci 0000:04:00.0: BAR 4: no space for [mem size 0x00004000 64bit pref]
[ 144.935741] pci 0000:04:00.0: BAR 4: failed to assign [mem size 0x00004000 64bit pref]
[ 144.935743] pci 0000:04:00.0: BAR 1: no space for [mem size 0x00001000]
[ 144.935744] pci 0000:04:00.0: BAR 1: failed to assign [mem size 0x00001000]
[ 144.937163] virtio-pci 0000:04:00.0: virtio_pci: leaving for legacy driver
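The failed assignments in logs like the above can be pulled out with a small filter; a sketch, with the log excerpt copied from the dmesg output above:

```python
import re

# Extract "BAR n: failed to assign" events from dmesg-style output.
# The excerpt is copied from the virtio_net hot-plug log above.
LOG = """\
[ 144.935737] pci 0000:04:00.0: BAR 6: failed to assign [mem size 0x00040000 pref]
[ 144.935741] pci 0000:04:00.0: BAR 4: failed to assign [mem size 0x00004000 64bit pref]
[ 144.935744] pci 0000:04:00.0: BAR 1: failed to assign [mem size 0x00001000]
"""

failed = re.findall(
    r"pci (\S+): BAR (\d+): failed to assign \[mem size (0x[0-9a-f]+)", LOG
)
for dev, bar, size in failed:
    print(f"{dev} BAR {bar}: {size}")
```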
These errors in the Linux guest look similar to the ones I posted
earlier for the Windows guest; there are memory conflicts among the
hot-plugged devices.
I am using Linux host version 5.15.0, QEMU 6.1.0 (patched with
https://patchwork.kernel.org/project/qemu-devel/patch/20210916132838.3469580-3-ani@anisinha.ca/),
and OVMF stable202108.
If I switch from ACPI PCI hotplug back to PCIe native hotplug with the
following option, these errors are gone:
-global ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off
However, with PCIe native hotplug we have been seeing other
hotplug/unplug failures; for example, deleted virtio disks still show up
in the q35 Windows guest.
I am wondering whether anyone has seen this kind of error with ACPI PCI
hotplug in q35 guests with the latest QEMU version?
Thanks
Annie
On 11/1/2021 10:06 AM, Annie.li wrote:
> Hello,
>
> I've found an issue when hot-plugging the secondary virtio_blk device
> into q35 Windows guest(2019) with upstream qemu 6.1.0(+1 patch). The
> first disk can be hot-plugged successfully.
>
> The qemu options for PCIe root port is,
>
> -device
> pcie-root-port,port=2,chassis=2,id=pciroot2,bus=pcie.0,addr=0x2,multifunction=on
> \
> -device
> pcie-root-port,port=3,chassis=3,id=pciroot3,bus=pcie.0,addr=0x3,multifunction=on
> \
> -device
> pcie-root-port,port=4,chassis=4,id=pciroot4,bus=pcie.0,addr=0x4,multifunction=on
> \
> -device
> pcie-root-port,port=5,chassis=5,id=pciroot5,bus=pcie.0,addr=0x5,multifunction=on
> \
> -device
> pcie-root-port,port=6,chassis=6,id=pciroot6,bus=pcie.0,addr=0x6,multifunction=on
> \
>
> The command to hotplug 1st virtio_blk disk is following, the PCI slot
> of the 1st virtio_blk is Pci slot 0(PCI bus 1, device 0, function 0).
>
> drive_add auto
> file=block_10.qcow2,format=qcow2,if=none,id=drive10,cache=none
>
> device_add virtio-blk-pci,drive=drive10,id=block-disk10,bus=pciroot2
>
> Following is the related "info mtree" after the 1st virtio_blk device
> is hot plugged
>
> memory-region: pci_bridge_pci
> 0000000000000000-ffffffffffffffff (prio 0, i/o): pci_bridge_pci
> 00000000febff000-00000000febfffff (prio 1, i/o): virtio-blk-pci-msix
> 00000000febff000-00000000febff01f (prio 0, i/o): msix-table
> 00000000febff800-00000000febff807 (prio 0, i/o): msix-pba
> 0000000fffffc000-0000000fffffffff (prio 1, i/o): virtio-pci
> 0000000fffffc000-0000000fffffcfff (prio 0, i/o): virtio-pci-common
> 0000000fffffd000-0000000fffffdfff (prio 0, i/o): virtio-pci-isr
> 0000000fffffe000-0000000fffffefff (prio 0, i/o): virtio-pci-device
> 0000000ffffff000-0000000fffffffff (prio 0, i/o): virtio-pci-notify
>
> memory-region: pci_bridge_pci
> 0000000000000000-ffffffffffffffff (prio 0, i/o): pci_bridge_pci
>
> memory-region: pci_bridge_pci
> 0000000000000000-ffffffffffffffff (prio 0, i/o): pci_bridge_pci
>
> memory-region: pci_bridge_pci
> 0000000000000000-ffffffffffffffff (prio 0, i/o): pci_bridge_pci
>
> memory-region: pci_bridge_pci
> 0000000000000000-ffffffffffffffff (prio 0, i/o): pci_bridge_pci
>
> Right after the secondary virtio_blk device is hot-plugged, a yellow
> mark shows on the first virtio_blk device in the Windows guest. The
> PCI slot info of the 2nd virtio_blk is Pci slot 0(PCI bus 2, device 0,
> function 0). The debug log of Windows virtio_blk driver shows a
> "ScsiStopAdapter" adapter control operation is triggered first, and
> then "StorSurpriseRemoval". From the following "info mtree", it seems
> the 2nd virtio_blk device is occupying the same memory resource as the
> above 1st virtio_blk device. Maybe this causes the failure of the 1st
> virtio_blk device and then the system assume it is surprisingly removed?
>
> The command to hotplug 2nd virtio_blk disk,
>
> drive_add auto
> file=block_11.qcow2,format=qcow2,if=none,id=drive11,cache=none
>
> device_add virtio-blk-pci,drive=drive11,id=block-disk11,bus=pciroot3
>
> Following is the related "info mtree" after the 2nd virtio_blk device
> is hot-plugged,
>
> memory-region: pci_bridge_pci
> 0000000000000000-ffffffffffffffff (prio 0, i/o): pci_bridge_pci
>
> memory-region: pci_bridge_pci
> 0000000000000000-ffffffffffffffff (prio 0, i/o): pci_bridge_pci
> 00000000febff000-00000000febfffff (prio 1, i/o): virtio-blk-pci-msix
> 00000000febff000-00000000febff01f (prio 0, i/o): msix-table
> 00000000febff800-00000000febff807 (prio 0, i/o): msix-pba
> 0000000fffffc000-0000000fffffffff (prio 1, i/o): virtio-pci
> 0000000fffffc000-0000000fffffcfff (prio 0, i/o): virtio-pci-common
> 0000000fffffd000-0000000fffffdfff (prio 0, i/o): virtio-pci-isr
> 0000000fffffe000-0000000fffffefff (prio 0, i/o): virtio-pci-device
> 0000000ffffff000-0000000fffffffff (prio 0, i/o): virtio-pci-notify
>
> memory-region: pci_bridge_pci
> 0000000000000000-ffffffffffffffff (prio 0, i/o): pci_bridge_pci
>
> memory-region: pci_bridge_pci
> 0000000000000000-ffffffffffffffff (prio 0, i/o): pci_bridge_pci
>
> memory-region: pci_bridge_pci
> 0000000000000000-ffffffffffffffff (prio 0, i/o): pci_bridge_pci
>
>
> Note: I've patched the upstream 6.1.0 qemu with following patch,
>
> https://patchwork.kernel.org/project/qemu-devel/patch/20210916132838.3469580-3-ani@anisinha.ca/
>
>
> the acpi-pci-hotplug memory is following as expected,
>
> 0000000000000cc0-0000000000000cd7 (prio 0, i/o): acpi-pci-hotplug
> 0000000000000cd8-0000000000000ce3 (prio 0, i/o): acpi-mem-hotplug
>
> Thanks
>
> Annie
>
* Re: Failure of hot plugging secondary virtio_blk into q35 Windows 2019
From: Annie.li @ 2021-11-08 22:56 UTC (permalink / raw)
To: qemu-devel, jusual; +Cc: ani, imammedo
Update:
I've tested a q35 guest w/o OVMF, and ACPI PCI hot-plugging works well
there. It seems this issue only happens in q35 guests w/ OVMF.
It looks like there is already a bug filed against this hotplug issue in
q35 guests w/ OVMF:
https://bugzilla.redhat.com/show_bug.cgi?id=2004829
In this bug, it is recommended to add "-global
ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off" to the QEMU command
line for 6.1. However, even with this option (PCIe native hotplug),
there are still various issues on 6.1. For example, a deleted virtio_blk
device still shows in Device Manager in the Windows q35 guest, and
re-scanning for new hardware there takes forever. This means both PCIe
native hotplug and ACPI hotplug have issues in q35 guests.
Per the comments in this bug, changes in both OVMF and QEMU are
necessary to support ACPI hotplug in q35 guests. The fixes will likely
be available in QEMU 6.2.0. I am wondering if someone is already working
on this?
Thanks
Annie
* Re: Failure of hot plugging secondary virtio_blk into q35 Windows 2019
From: Ani Sinha @ 2021-11-09 7:11 UTC (permalink / raw)
To: Annie.li; +Cc: ani, imammedo, jusual, qemu-devel, kraxel
+gerd
On Mon, 8 Nov 2021, Annie.li wrote:
> Update:
>
> I've tested q35 guest w/o OVMF, the ACPI PCI hot-plugging works well in q35
> guest. Seems this issue only happens in q35 guest w/ OVMF.
>
> Looks that there is already a bug filed against this hotplug issue in q35
> guest w/ OVMF,
>
> https://bugzilla.redhat.com/show_bug.cgi?id=2004829
>
> In this bug, it is recommended to add "-global
> ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off \" on qemu command for 6.1.
> However, with this option for 6.1(PCIe native hotplug), there still are kinds
> of issues. For example, one of them is the deleted virtio_blk device still
> shows in the Device Manager in Windows q35 guest, the operation of re-scanning
> new hardware takes forever there. This means both PCIe native hotplug and ACPI
> hotplug all have issues in q35 guests.
This is sad.
>
> Per comments in this bug, changes in both OVMF and QEMU are necessary to
> support ACPI hot plug in q35 guest. The fixes may likely be available in QEMU
> 6.2.0.
So we are in soft code freeze for 6.2:
https://wiki.qemu.org/Planning/6.2
I am curious about Gerd's comment #10:
"The 6.2 rebase should make hotplug work
again with the default configuration."
Sadly, I have not seen any public discussion on what we want to do
about the issues with ACPI hotplug for bridges in q35.
* Re: Failure of hot plugging secondary virtio_blk into q35 Windows 2019
From: Daniel P. Berrangé @ 2021-11-09 9:52 UTC (permalink / raw)
To: Ani Sinha; +Cc: imammedo, Annie.li, jusual, qemu-devel, kraxel
On Tue, Nov 09, 2021 at 12:41:39PM +0530, Ani Sinha wrote:
>
> +gerd
>
> On Mon, 8 Nov 2021, Annie.li wrote:
>
> > Update:
> >
> > I've tested q35 guest w/o OVMF, the ACPI PCI hot-plugging works well in q35
> > guest. Seems this issue only happens in q35 guest w/ OVMF.
> >
> > Looks that there is already a bug filed against this hotplug issue in q35
> > guest w/ OVMF,
> >
> > https://bugzilla.redhat.com/show_bug.cgi?id=2004829
> >
> > In this bug, it is recommended to add "-global
> > ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off \" on qemu command for 6.1.
> > However, with this option for 6.1(PCIe native hotplug), there still are kinds
> > of issues. For example, one of them is the deleted virtio_blk device still
> > shows in the Device Manager in Windows q35 guest, the operation of re-scanning
> > new hardware takes forever there. This means both PCIe native hotplug and ACPI
> > hotplug all have issues in q35 guests.
>
> This is sad.
>
> >
> > Per comments in this bug, changes in both OVMF and QEMU are necessary to
> > support ACPI hot plug in q35 guest. The fixes may likely be available in QEMU
> > 6.2.0.
>
> So we are in soft code freeze for 6.2:
> https://wiki.qemu.org/Planning/6.2
>
> I am curious about Gerd's comment #10:
> "The 6.2 rebase should make hotplug work
> again with the default configuration."
>
> Sadly I have not seen any public discussion on what we want to do
> for the issues with acpi hotplug for bridges in q35.
I raised one of the problems a week ago and there's a promised fix:
https://lists.gnu.org/archive/html/qemu-devel/2021-11/msg00558.html
but we're now a week after freeze and still no patch has been posted,
AFAIK.
IMHO it is pretty much time to revert to native hotplug; otherwise we
risk getting too deep into the freeze to do anything, and end up
shipping with broken PCI hotplug again in 6.2.
Regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
* Re: Failure of hot plugging secondary virtio_blk into q35 Windows 2019
From: Ani Sinha @ 2021-11-09 11:10 UTC (permalink / raw)
To: Daniel P. Berrangé; +Cc: imammedo, Annie.li, jusual, qemu-devel, kraxel
On Tue, Nov 9, 2021 at 3:23 PM Daniel P. Berrangé <berrange@redhat.com> wrote:
>
> On Tue, Nov 09, 2021 at 12:41:39PM +0530, Ani Sinha wrote:
> >
> > +gerd
> >
> > On Mon, 8 Nov 2021, Annie.li wrote:
> >
> > > Update:
> > >
> > > I've tested q35 guest w/o OVMF, the ACPI PCI hot-plugging works well in q35
> > > guest. Seems this issue only happens in q35 guest w/ OVMF.
> > >
> > > Looks that there is already a bug filed against this hotplug issue in q35
> > > guest w/ OVMF,
> > >
> > > https://bugzilla.redhat.com/show_bug.cgi?id=2004829
> > >
> > > In this bug, it is recommended to add "-global
> > > ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off \" on qemu command for 6.1.
> > > However, with this option for 6.1(PCIe native hotplug), there still are kinds
> > > of issues. For example, one of them is the deleted virtio_blk device still
> > > shows in the Device Manager in Windows q35 guest, the operation of re-scanning
> > > new hardware takes forever there. This means both PCIe native hotplug and ACPI
> > > hotplug all have issues in q35 guests.
> >
> > This is sad.
> >
> > >
> > > Per comments in this bug, changes in both OVMF and QEMU are necessary to
> > > support ACPI hot plug in q35 guest. The fixes may likely be available in QEMU
> > > 6.2.0.
> >
> > So we are in soft code freeze for 6.2:
> > https://wiki.qemu.org/Planning/6.2
> >
> > I am curious about Gerd's comment #10:
> > "The 6.2 rebase should make hotplug work
> > again with the default configuration."
> >
> > Sadly I have not seen any public discussion on what we want to do
> > for the issues with acpi hotplug for bridges in q35.
>
> I've raised one of the problems a week ago and there's a promised
> fix
>
> https://lists.gnu.org/archive/html/qemu-devel/2021-11/msg00558.html
So https://gitlab.com/qemu-project/qemu/-/issues/641 is the same as
https://bugzilla.redhat.com/show_bug.cgi?id=2006409
isn't it?
>
> but we're now a week after freeze and still no patch has been posted
> AFAIK.
>
> IMHO it is pretty much time to revert to native hotplug, otherwise
> we're going to risk getting too late into freeze to do anything, and
> end up shipping with broken PCI hotplug again in 6.2
>
> Regards,
> Daniel
> --
> |: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org -o- https://fstop138.berrange.com :|
> |: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
>
* Re: Failure of hot plugging secondary virtio_blk into q35 Windows 2019
From: Daniel P. Berrangé @ 2021-11-09 11:19 UTC (permalink / raw)
To: Ani Sinha; +Cc: imammedo, Annie.li, jusual, qemu-devel, kraxel
On Tue, Nov 09, 2021 at 04:40:10PM +0530, Ani Sinha wrote:
> On Tue, Nov 9, 2021 at 3:23 PM Daniel P. Berrangé <berrange@redhat.com> wrote:
> >
> > On Tue, Nov 09, 2021 at 12:41:39PM +0530, Ani Sinha wrote:
> > >
> > > +gerd
> > >
> > > On Mon, 8 Nov 2021, Annie.li wrote:
> > >
> > > > Update:
> > > >
> > > > I've tested q35 guest w/o OVMF, the ACPI PCI hot-plugging works well in q35
> > > > guest. Seems this issue only happens in q35 guest w/ OVMF.
> > > >
> > > > Looks that there is already a bug filed against this hotplug issue in q35
> > > > guest w/ OVMF,
> > > >
> > > > https://bugzilla.redhat.com/show_bug.cgi?id=2004829
> > > >
> > > > In this bug, it is recommended to add "-global
> > > > ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off \" on qemu command for 6.1.
> > > > However, with this option for 6.1(PCIe native hotplug), there still are kinds
> > > > of issues. For example, one of them is the deleted virtio_blk device still
> > > > shows in the Device Manager in Windows q35 guest, the operation of re-scanning
> > > > new hardware takes forever there. This means both PCIe native hotplug and ACPI
> > > > hotplug all have issues in q35 guests.
> > >
> > > This is sad.
> > >
> > > >
> > > > Per comments in this bug, changes in both OVMF and QEMU are necessary to
> > > > support ACPI hot plug in q35 guest. The fixes may likely be available in QEMU
> > > > 6.2.0.
> > >
> > > So we are in soft code freeze for 6.2:
> > > https://wiki.qemu.org/Planning/6.2
> > >
> > > I am curious about Gerd's comment #10:
> > > "The 6.2 rebase should make hotplug work
> > > again with the default configuration."
> > >
> > > Sadly I have not seen any public discussion on what we want to do
> > > for the issues with acpi hotplug for bridges in q35.
> >
> > I've raised one of the problems a week ago and there's a promised
> > fix
> >
> > https://lists.gnu.org/archive/html/qemu-devel/2021-11/msg00558.html
>
> So https://gitlab.com/qemu-project/qemu/-/issues/641 is the same as
> https://bugzilla.redhat.com/show_bug.cgi?id=2006409
>
> isn't it?
Yes, one upstream, one downstream.
Regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
* Re: Failure of hot plugging secondary virtio_blk into q35 Windows 2019
From: Annie.li @ 2021-11-09 17:01 UTC (permalink / raw)
To: Daniel P. Berrangé, Ani Sinha; +Cc: imammedo, jusual, qemu-devel, kraxel
On 11/9/2021 6:19 AM, Daniel P. Berrangé wrote:
> On Tue, Nov 09, 2021 at 04:40:10PM +0530, Ani Sinha wrote:
>> On Tue, Nov 9, 2021 at 3:23 PM Daniel P. Berrangé <berrange@redhat.com> wrote:
>>> On Tue, Nov 09, 2021 at 12:41:39PM +0530, Ani Sinha wrote:
>>>> +gerd
>>>>
>>>> On Mon, 8 Nov 2021, Annie.li wrote:
>>>>
>>>>> Update:
>>>>>
>>>>> I've tested q35 guest w/o OVMF, the ACPI PCI hot-plugging works well in q35
>>>>> guest. Seems this issue only happens in q35 guest w/ OVMF.
>>>>>
>>>>> Looks that there is already a bug filed against this hotplug issue in q35
>>>>> guest w/ OVMF,
>>>>>
>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=2004829
>>>>>
>>>>> In this bug, it is recommended to add "-global
>>>>> ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off \" on qemu command for 6.1.
>>>>> However, with this option for 6.1(PCIe native hotplug), there still are kinds
>>>>> of issues. For example, one of them is the deleted virtio_blk device still
>>>>> shows in the Device Manager in Windows q35 guest, the operation of re-scanning
>>>>> new hardware takes forever there. This means both PCIe native hotplug and ACPI
>>>>> hotplug all have issues in q35 guests.
>>>> This is sad.
>>>>
>>>>> Per comments in this bug, changes in both OVMF and QEMU are necessary to
>>>>> support ACPI hot plug in q35 guest. The fixes may likely be available in QEMU
>>>>> 6.2.0.
>>>> So we are in soft code freeze for 6.2:
>>>> https://wiki.qemu.org/Planning/6.2
>>>>
>>>> I am curious about Gerd's comment #10:
>>>> "The 6.2 rebase should make hotplug work
>>>> again with the default configuration."
>>>>
>>>> Sadly I have not seen any public discussion on what we want to do
>>>> for the issues with acpi hotplug for bridges in q35.
>>> I've raised one of the problems a week ago and there's a promised
>>> fix
>>>
>>> https://lists.gnu.org/archive/html/qemu-devel/2021-11/msg00558.html
>> So https://gitlab.com/qemu-project/qemu/-/issues/641 is the same as
>> https://bugzilla.redhat.com/show_bug.cgi?id=2006409
>>
>> isn't it?
> Yes, one upstream, one downstream.
Thanks for the info.
So q35 guests with either OVMF or SeaBIOS have different ACPI hotplug
issues in QEMU 6.1.
As Ani mentioned earlier, QEMU 6.2 is in soft code freeze, and today
(Nov 9) is the hard feature freeze date.
I suppose this means neither the fix for the SeaBIOS issue nor the
feature to cooperate with the coming OVMF change will happen in 6.2?
Thanks
Annie
>
> Regards,
> Daniel
* Re: Failure of hot plugging secondary virtio_blk into q35 Windows 2019
From: Daniel P. Berrangé @ 2021-11-09 18:32 UTC (permalink / raw)
To: Annie.li; +Cc: Ani Sinha, imammedo, jusual, qemu-devel, kraxel
On Tue, Nov 09, 2021 at 12:01:30PM -0500, Annie.li wrote:
> On 11/9/2021 6:19 AM, Daniel P. Berrangé wrote:
> > On Tue, Nov 09, 2021 at 04:40:10PM +0530, Ani Sinha wrote:
> > > On Tue, Nov 9, 2021 at 3:23 PM Daniel P. Berrangé <berrange@redhat.com> wrote:
> > > > On Tue, Nov 09, 2021 at 12:41:39PM +0530, Ani Sinha wrote:
> > > > > +gerd
> > > > >
> > > > > On Mon, 8 Nov 2021, Annie.li wrote:
> > > > >
> > > > > > Update:
> > > > > >
> > > > > > I've tested q35 guest w/o OVMF, the ACPI PCI hot-plugging works well in q35
> > > > > > guest. Seems this issue only happens in q35 guest w/ OVMF.
> > > > > >
> > > > > > Looks that there is already a bug filed against this hotplug issue in q35
> > > > > > guest w/ OVMF,
> > > > > >
> > > > > > https://bugzilla.redhat.com/show_bug.cgi?id=2004829
> > > > > >
> > > > > > In this bug, it is recommended to add "-global
> > > > > > ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off \" on qemu command for 6.1.
> > > > > > However, with this option for 6.1(PCIe native hotplug), there still are kinds
> > > > > > of issues. For example, one of them is the deleted virtio_blk device still
> > > > > > shows in the Device Manager in Windows q35 guest, the operation of re-scanning
> > > > > > new hardware takes forever there. This means both PCIe native hotplug and ACPI
> > > > > > hotplug all have issues in q35 guests.
> > > > > This is sad.
> > > > >
> > > > > > Per comments in this bug, changes in both OVMF and QEMU are necessary to
> > > > > > support ACPI hot plug in q35 guest. The fixes may likely be available in QEMU
> > > > > > 6.2.0.
> > > > > So we are in soft code freeze for 6.2:
> > > > > https://wiki.qemu.org/Planning/6.2
> > > > >
> > > > > I am curious about Gerd's comment #10:
> > > > > "The 6.2 rebase should make hotplug work
> > > > > again with the default configuration."
> > > > >
> > > > > Sadly I have not seen any public discussion on what we want to do
> > > > > for the issues with acpi hotplug for bridges in q35.
> > > > I've raised one of the problems a week ago and there's a promised
> > > > fix
> > > >
> > > > https://lists.gnu.org/archive/html/qemu-devel/2021-11/msg00558.html
> > > So https://gitlab.com/qemu-project/qemu/-/issues/641 is the same as
> > > https://bugzilla.redhat.com/show_bug.cgi?id=2006409
> > >
> > > isn't it?
> > Yes, one upstream, one downstream.
>
> Thanks for the info.
>
> So q35 guests with either OVMF or SeaBIOS have different ACPI hotplug issues
> in QEMU 6.1.
>
> As Ani mentioned earlier, QEMU 6.2 is in soft code freeze.
> Today(Nov 9) is the date of hard feature freeze.
>
> I suppose this means the fix for the issue with SeaBIOS or the feature to
> cooperate
> with the coming change in OVMF won't happen in 6.2?
Patches are allowed if they're bug fixes. If a change also requires
coordination with an OVMF change, though, I think that's going to be
difficult to justify.
Our fallback option is to revert to native hotplug out of the box for
the QEMU machine types in 6.2.
Regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
* Re: Failure of hot plugging secondary virtio_blk into q35 Windows 2019
From: Annie.li @ 2021-11-10 18:12 UTC (permalink / raw)
To: Daniel P. Berrangé; +Cc: Ani Sinha, imammedo, jusual, qemu-devel, kraxel
On 11/9/2021 1:32 PM, Daniel P. Berrangé wrote:
> On Tue, Nov 09, 2021 at 12:01:30PM -0500, Annie.li wrote:
>> On 11/9/2021 6:19 AM, Daniel P. Berrangé wrote:
>>> On Tue, Nov 09, 2021 at 04:40:10PM +0530, Ani Sinha wrote:
>>>> On Tue, Nov 9, 2021 at 3:23 PM Daniel P. Berrangé <berrange@redhat.com> wrote:
>>>>> On Tue, Nov 09, 2021 at 12:41:39PM +0530, Ani Sinha wrote:
>>>>>> +gerd
>>>>>>
>>>>>> On Mon, 8 Nov 2021, Annie.li wrote:
>>>>>>
>>>>>>> Update:
>>>>>>>
>>>>>>> I've tested q35 guest w/o OVMF, the ACPI PCI hot-plugging works well in q35
>>>>>>> guest. Seems this issue only happens in q35 guest w/ OVMF.
>>>>>>>
>>>>>>> It looks like there is already a bug filed against this hotplug issue in a
>>>>>>> q35 guest w/ OVMF:
>>>>>>>
>>>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=2004829
>>>>>>>
>>>>>>> In this bug, it is recommended to add "-global
>>>>>>> ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off" to the qemu command for 6.1.
>>>>>>> However, with this option for 6.1 (PCIe native hotplug), there are still
>>>>>>> various issues. For example, one of them is that the deleted virtio_blk device
>>>>>>> still shows up in the Device Manager in the Windows q35 guest, and re-scanning
>>>>>>> for new hardware takes forever there. This means both PCIe native hotplug and
>>>>>>> ACPI hotplug have issues in q35 guests.
>>>>>> This is sad.
>>>>>>
>>>>>>> Per comments in this bug, changes in both OVMF and QEMU are necessary to
>>>>>>> support ACPI hot plug in a q35 guest. The fixes may be available in QEMU
>>>>>>> 6.2.0.
>>>>>> So we are in soft code freeze for 6.2:
>>>>>> https://wiki.qemu.org/Planning/6.2
>>>>>>
>>>>>> I am curious about Gerd's comment #10:
>>>>>> "The 6.2 rebase should make hotplug work
>>>>>> again with the default configuration."
>>>>>>
>>>>>> Sadly I have not seen any public discussion on what we want to do
>>>>>> for the issues with acpi hotplug for bridges in q35.
>>>>> I've raised one of the problems a week ago and there's a promised
>>>>> fix
>>>>>
>>>>> https://lists.gnu.org/archive/html/qemu-devel/2021-11/msg00558.html
>>>> So https://gitlab.com/qemu-project/qemu/-/issues/641 is the same as
>>>> https://bugzilla.redhat.com/show_bug.cgi?id=2006409
>>>>
>>>> isn't it?
>>> Yes, one upstream, one downstream.
>> Thanks for the info.
>>
>> So q35 guests with either OVMF or SeaBIOS have different ACPI hotplug issues
>> in QEMU 6.1.
>>
>> As Ani mentioned earlier, QEMU 6.2 is in soft code freeze.
>> Today (Nov 9) is the date of the hard feature freeze.
>>
>> I suppose this means the fix for the SeaBIOS issue, or the feature to
>> cooperate with the coming change in OVMF, won't happen in 6.2?
> Patches are allowed if they're bug fixes. If a change requires coordination
> with an OVMF change too though, I think that's going to be difficult to
> justify.
>
> Our fallback option is to revert to native hotplug out of the box for
> QEMU machine types in 6.2
Nod.
Just to make sure we are all on the same page...
It seems that reverting back to PCIe native hotplug for 6.2 is being discussed
in bug 2004829 (https://bugzilla.redhat.com/show_bug.cgi?id=2004829).
If the PCIe native hotplug patches posted by Gerd Hoffmann in bug 2007129 can
fix the various PCIe hotplug/unplug issues, that will be great!
At least, we have been seeing PCIe virtio-blk/virtio-nic unplugging issues and
a PCIe VFIO hotplugging issue with PCIe native hotplug. Anyway, I'll run some
tests with these patches on my side.
Thanks
Annie
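[For readers reproducing the workaround discussed in this thread, here is a
minimal sketch of a q35 invocation with ACPI hotplug disabled (falling back to
PCIe native hotplug). The disk image path, memory size, and monitor setup are
illustrative placeholders, not values from the thread; the root-port and
hotplug commands mirror the original report.]

```shell
# Sketch only: q35 guest with ACPI PCI hotplug disabled, i.e. the
# "-global ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off" workaround
# from bug 2004829 quoted above. disk.qcow2, -m 4096 and -monitor stdio
# are placeholder assumptions.
qemu-system-x86_64 \
  -machine q35 -accel kvm -m 4096 \
  -global ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off \
  -device pcie-root-port,port=2,chassis=2,id=pciroot2,bus=pcie.0,addr=0x2,multifunction=on \
  -drive file=disk.qcow2,format=qcow2,if=virtio \
  -monitor stdio

# Then hot-plug from the HMP monitor, as in the original report:
#   (qemu) drive_add auto file=block_10.qcow2,format=qcow2,if=none,id=drive10,cache=none
#   (qemu) device_add virtio-blk-pci,drive=drive10,id=block-disk10,bus=pciroot2
```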
* Re: Failure of hot plugging secondary virtio_blk into q35 Windows 2019
2021-11-10 18:12 ` Annie.li
@ 2021-11-19 21:29 ` Annie.li
0 siblings, 0 replies; 11+ messages in thread
From: Annie.li @ 2021-11-19 21:29 UTC (permalink / raw)
To: kraxel; +Cc: Ani Sinha, imammedo, Daniel P. Berrangé, jusual, qemu-devel
On 11/10/2021 1:12 PM, Annie.li wrote:
>
> On 11/9/2021 1:32 PM, Daniel P. Berrangé wrote:
>> On Tue, Nov 09, 2021 at 12:01:30PM -0500, Annie.li wrote:
>>> On 11/9/2021 6:19 AM, Daniel P. Berrangé wrote:
>>>> On Tue, Nov 09, 2021 at 04:40:10PM +0530, Ani Sinha wrote:
>>>>> On Tue, Nov 9, 2021 at 3:23 PM Daniel P. Berrangé
>>>>> <berrange@redhat.com> wrote:
>>>>>> On Tue, Nov 09, 2021 at 12:41:39PM +0530, Ani Sinha wrote:
>>>>>>> +gerd
>>>>>>>
>>>>>>> On Mon, 8 Nov 2021, Annie.li wrote:
>>>>>>>
>>>>>>>> Update:
>>>>>>>>
>>>>>>>> I've tested a q35 guest w/o OVMF; ACPI PCI hot-plugging works
>>>>>>>> well in the q35 guest. It seems this issue only happens in a q35
>>>>>>>> guest w/ OVMF.
>>>>>>>>
>>>>>>>> It looks like there is already a bug filed against this hotplug
>>>>>>>> issue in a q35 guest w/ OVMF:
>>>>>>>>
>>>>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=2004829
>>>>>>>>
>>>>>>>>
>>>>>>>> In this bug, it is recommended to add "-global
>>>>>>>> ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off" to the qemu
>>>>>>>> command for 6.1. However, with this option for 6.1 (PCIe native
>>>>>>>> hotplug), there are still various issues. For example, one of
>>>>>>>> them is that the deleted virtio_blk device still shows up in the
>>>>>>>> Device Manager in the Windows q35 guest, and re-scanning for new
>>>>>>>> hardware takes forever there. This means both PCIe native hotplug
>>>>>>>> and ACPI hotplug have issues in q35 guests.
>>>>>>> This is sad.
>>>>>>>
>>>>>>>> Per comments in this bug, changes in both OVMF and QEMU are
>>>>>>>> necessary to support ACPI hot plug in a q35 guest. The fixes may
>>>>>>>> be available in QEMU 6.2.0.
>>>>>>> So we are in soft code freeze for 6.2:
>>>>>>> https://wiki.qemu.org/Planning/6.2
>>>>>>>
>>>>>>> I am curious about Gerd's comment #10:
>>>>>>> "The 6.2 rebase should make hotplug work
>>>>>>> again with the default configuration."
>>>>>>>
>>>>>>> Sadly I have not seen any public discussion on what we want to do
>>>>>>> for the issues with acpi hotplug for bridges in q35.
>>>>>> I've raised one of the problems a week ago and there's a promised
>>>>>> fix
>>>>>>
>>>>>> https://lists.gnu.org/archive/html/qemu-devel/2021-11/msg00558.html
>>>>> So https://gitlab.com/qemu-project/qemu/-/issues/641 is the same as
>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=2006409
>>>>>
>>>>> isn't it?
>>>> Yes, one upstream, one downstream.
>>> Thanks for the info.
>>>
>>> So q35 guests with either OVMF or SeaBIOS have different ACPI
>>> hotplug issues in QEMU 6.1.
>>>
>>> As Ani mentioned earlier, QEMU 6.2 is in soft code freeze.
>>> Today (Nov 9) is the date of the hard feature freeze.
>>>
>>> I suppose this means the fix for the SeaBIOS issue, or the
>>> feature to cooperate with the coming change in OVMF, won't happen
>>> in 6.2?
>> Patches are allowed if they're bug fixes. If a change requires
>> coordination with an OVMF change too though, I think that's going
>> to be difficult to justify.
>>
>> Our fallback option is to revert to native hotplug out of the box for
>> QEMU machine types in 6.2
>
> Nod.
>
> Just to make sure we are all on the same page...
>
> It seems that reverting back to PCIe native hotplug for 6.2 is being
> discussed in bug 2004829 (https://bugzilla.redhat.com/show_bug.cgi?id=2004829).
>
> If the PCIe native hotplug patches posted by Gerd Hoffmann in bug 2007129 can
> fix the various PCIe hotplug/unplug issues, that will be great!
>
> At least, we have been seeing PCIe virtio-blk/virtio-nic unplugging issues
> and a PCIe VFIO hotplugging issue with PCIe native hotplug. Anyway, I'll run
> some tests with these patches on my side.
With these patches improving native hotplug for PCIe root ports, the
PCIe unplugging issues are gone on my side.
It seems these patches have already been upstreamed. Thank you, Gerd.
Thanks
Annie
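[For anyone repeating the unplug tests described above, one way to drive them
is over an HMP monitor socket. The socket path is a placeholder assumption,
and the device ID reuses the one from the original report; this only sketches
the test flow, not the exact setup used in the thread.]

```shell
# Sketch only: hot-unplug test driven over an HMP monitor socket.
# Assumes the guest was started with:
#   -monitor unix:/tmp/hmp.sock,server,nowait
# and that a virtio-blk device with id=block-disk10 was hot-plugged
# earlier, as in the original report. Path and IDs are illustrative.
echo "device_del block-disk10" | socat - unix-connect:/tmp/hmp.sock

# After the guest acknowledges the unplug, confirm the device is gone:
echo "info pci" | socat - unix-connect:/tmp/hmp.sock
```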
Thread overview: 11+ messages
-- links below jump to the message on this page --
2021-11-01 14:06 Failure of hot plugging secondary virtio_blk into q35 Windows 2019 Annie.li
2021-11-08 18:53 ` Annie.li
2021-11-08 22:56 ` Annie.li
2021-11-09 7:11 ` Ani Sinha
2021-11-09 9:52 ` Daniel P. Berrangé
2021-11-09 11:10 ` Ani Sinha
2021-11-09 11:19 ` Daniel P. Berrangé
2021-11-09 17:01 ` Annie.li
2021-11-09 18:32 ` Daniel P. Berrangé
2021-11-10 18:12 ` Annie.li
2021-11-19 21:29 ` Annie.li