* [PATCH] system/qdev-monitor: move drain_call_rcu call under if (!dev) in qmp_device_add()
@ 2023-11-03 10:56 Dmitrii Gavrilov
2023-11-07 7:28 ` Vladimir Sementsov-Ogievskiy
` (5 more replies)
0 siblings, 6 replies; 11+ messages in thread
From: Dmitrii Gavrilov @ 2023-11-03 10:56 UTC (permalink / raw)
To: qemu-devel
Cc: pbonzini, berrange, eduardo, mlevitsk, vsementsov, ds-gavr, yc-core
The original goal of adding drain_call_rcu to qmp_device_add was to cover
the failure case of qdev_device_add. It seems the call to drain_call_rcu was
misplaced in 7bed89958bfbf40df, which led to waiting for pending RCU
callbacks on the happy path too, and thus to an overall performance
degradation of qmp_device_add.
This patch moves the call to drain_call_rcu under the handling of a
qdev_device_add failure.
Signed-off-by: Dmitrii Gavrilov <ds-gavr@yandex-team.ru>
Signed-off-by: Dmitrii Gavrilov <ds-gavr@yandex-team.ru>
---
system/qdev-monitor.c | 23 +++++++++++------------
1 file changed, 11 insertions(+), 12 deletions(-)
diff --git a/system/qdev-monitor.c b/system/qdev-monitor.c
index 1b8005a..dc7b02d 100644
--- a/system/qdev-monitor.c
+++ b/system/qdev-monitor.c
@@ -856,19 +856,18 @@ void qmp_device_add(QDict *qdict, QObject **ret_data, Error **errp)
return;
}
dev = qdev_device_add(opts, errp);
-
- /*
- * Drain all pending RCU callbacks. This is done because
- * some bus related operations can delay a device removal
- * (in this case this can happen if device is added and then
- * removed due to a configuration error)
- * to a RCU callback, but user might expect that this interface
- * will finish its job completely once qmp command returns result
- * to the user
- */
- drain_call_rcu();
-
if (!dev) {
+ /*
+ * Drain all pending RCU callbacks. This is done because
+ * some bus related operations can delay a device removal
+ * (in this case this can happen if device is added and then
+ * removed due to a configuration error)
+ * to a RCU callback, but user might expect that this interface
+ * will finish its job completely once qmp command returns result
+ * to the user
+ */
+ drain_call_rcu();
+
qemu_opts_del(opts);
return;
}
--
2.34.1
^ permalink raw reply related [flat|nested] 11+ messages in thread
* Re: [PATCH] system/qdev-monitor: move drain_call_rcu call under if (!dev) in qmp_device_add()
2023-11-03 10:56 [PATCH] system/qdev-monitor: move drain_call_rcu call under if (!dev) in qmp_device_add() Dmitrii Gavrilov
@ 2023-11-07 7:28 ` Vladimir Sementsov-Ogievskiy
2023-11-07 7:32 ` Michael S. Tsirkin
` (4 subsequent siblings)
5 siblings, 0 replies; 11+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2023-11-07 7:28 UTC (permalink / raw)
To: Dmitrii Gavrilov, qemu-devel
Cc: pbonzini, berrange, eduardo, mlevitsk, yc-core, Michael S. Tsirkin
[add Michael]
On 03.11.23 13:56, Dmitrii Gavrilov wrote:
> The original goal of adding drain_call_rcu to qmp_device_add was to cover
> the failure case of qdev_device_add. It seems the call to drain_call_rcu was
> misplaced in 7bed89958bfbf40df, which led to waiting for pending RCU
> callbacks on the happy path too, and thus to an overall performance
> degradation of qmp_device_add.
>
> This patch moves the call to drain_call_rcu under the handling of a
> qdev_device_add failure.
>
> Signed-off-by: Dmitrii Gavrilov <ds-gavr@yandex-team.ru>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
> ---
> system/qdev-monitor.c | 23 +++++++++++------------
> 1 file changed, 11 insertions(+), 12 deletions(-)
>
> diff --git a/system/qdev-monitor.c b/system/qdev-monitor.c
> index 1b8005a..dc7b02d 100644
> --- a/system/qdev-monitor.c
> +++ b/system/qdev-monitor.c
> @@ -856,19 +856,18 @@ void qmp_device_add(QDict *qdict, QObject **ret_data, Error **errp)
> return;
> }
> dev = qdev_device_add(opts, errp);
> -
> - /*
> - * Drain all pending RCU callbacks. This is done because
> - * some bus related operations can delay a device removal
> - * (in this case this can happen if device is added and then
> - * removed due to a configuration error)
> - * to a RCU callback, but user might expect that this interface
> - * will finish its job completely once qmp command returns result
> - * to the user
> - */
> - drain_call_rcu();
> -
> if (!dev) {
> + /*
> + * Drain all pending RCU callbacks. This is done because
> + * some bus related operations can delay a device removal
> + * (in this case this can happen if device is added and then
> + * removed due to a configuration error)
> + * to a RCU callback, but user might expect that this interface
> + * will finish its job completely once qmp command returns result
> + * to the user
> + */
> + drain_call_rcu();
> +
> qemu_opts_del(opts);
> return;
> }
--
Best regards,
Vladimir
* Re: [PATCH] system/qdev-monitor: move drain_call_rcu call under if (!dev) in qmp_device_add()
2023-11-03 10:56 [PATCH] system/qdev-monitor: move drain_call_rcu call under if (!dev) in qmp_device_add() Dmitrii Gavrilov
2023-11-07 7:28 ` Vladimir Sementsov-Ogievskiy
@ 2023-11-07 7:32 ` Michael S. Tsirkin
2023-11-07 9:01 ` Vladimir Sementsov-Ogievskiy
2024-02-28 17:12 ` Vladimir Sementsov-Ogievskiy
` (3 subsequent siblings)
5 siblings, 1 reply; 11+ messages in thread
From: Michael S. Tsirkin @ 2023-11-07 7:32 UTC (permalink / raw)
To: Dmitrii Gavrilov
Cc: qemu-devel, pbonzini, berrange, eduardo, mlevitsk, vsementsov, yc-core
On Fri, Nov 03, 2023 at 01:56:02PM +0300, Dmitrii Gavrilov wrote:
> The original goal of adding drain_call_rcu to qmp_device_add was to cover
> the failure case of qdev_device_add. It seems the call to drain_call_rcu was
> misplaced in 7bed89958bfbf40df, which led to waiting for pending RCU
> callbacks on the happy path too, and thus to an overall performance
> degradation of qmp_device_add.
>
> This patch moves the call to drain_call_rcu under the handling of a
> qdev_device_add failure.
Suggested-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Also:
Fixes: 7bed89958b ("device_core: use drain_call_rcu in in qmp_device_add")
Cc: "Maxim Levitsky" <mlevitsk@redhat.com>
>
> Signed-off-by: Dmitrii Gavrilov <ds-gavr@yandex-team.ru>
> ---
> system/qdev-monitor.c | 23 +++++++++++------------
> 1 file changed, 11 insertions(+), 12 deletions(-)
>
> diff --git a/system/qdev-monitor.c b/system/qdev-monitor.c
> index 1b8005a..dc7b02d 100644
> --- a/system/qdev-monitor.c
> +++ b/system/qdev-monitor.c
> @@ -856,19 +856,18 @@ void qmp_device_add(QDict *qdict, QObject **ret_data, Error **errp)
> return;
> }
> dev = qdev_device_add(opts, errp);
> -
> - /*
> - * Drain all pending RCU callbacks. This is done because
> - * some bus related operations can delay a device removal
> - * (in this case this can happen if device is added and then
> - * removed due to a configuration error)
> - * to a RCU callback, but user might expect that this interface
> - * will finish its job completely once qmp command returns result
> - * to the user
> - */
> - drain_call_rcu();
> -
> if (!dev) {
> + /*
> + * Drain all pending RCU callbacks. This is done because
> + * some bus related operations can delay a device removal
> + * (in this case this can happen if device is added and then
> + * removed due to a configuration error)
> + * to a RCU callback, but user might expect that this interface
> + * will finish its job completely once qmp command returns result
> + * to the user
> + */
> + drain_call_rcu();
> +
> qemu_opts_del(opts);
> return;
> }
> --
> 2.34.1
>
>
* Re: [PATCH] system/qdev-monitor: move drain_call_rcu call under if (!dev) in qmp_device_add()
2023-11-07 7:32 ` Michael S. Tsirkin
@ 2023-11-07 9:01 ` Vladimir Sementsov-Ogievskiy
0 siblings, 0 replies; 11+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2023-11-07 9:01 UTC (permalink / raw)
To: Michael S. Tsirkin, Dmitrii Gavrilov
Cc: qemu-devel, pbonzini, berrange, eduardo, mlevitsk, yc-core
On 07.11.23 10:32, Michael S. Tsirkin wrote:
> On Fri, Nov 03, 2023 at 01:56:02PM +0300, Dmitrii Gavrilov wrote:
>> The original goal of adding drain_call_rcu to qmp_device_add was to cover
>> the failure case of qdev_device_add. It seems the call to drain_call_rcu was
>> misplaced in 7bed89958bfbf40df, which led to waiting for pending RCU
>> callbacks on the happy path too, and thus to an overall performance
>> degradation of qmp_device_add.
>>
>> This patch moves the call to drain_call_rcu under the handling of a
>> qdev_device_add failure.
>
>
> Suggested-by: Michael S. Tsirkin <mst@redhat.com>
Right, sorry for missing that
> Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Thanks!
>
> Also:
>
> Fixes: 7bed89958b ("device_core: use drain_call_rcu in in qmp_device_add")
> Cc: "Maxim Levitsky" <mlevitsk@redhat.com>
>
>
>>
>> Signed-off-by: Dmitrii Gavrilov <ds-gavr@yandex-team.ru>
>> ---
>> system/qdev-monitor.c | 23 +++++++++++------------
>> 1 file changed, 11 insertions(+), 12 deletions(-)
>>
>> diff --git a/system/qdev-monitor.c b/system/qdev-monitor.c
>> index 1b8005a..dc7b02d 100644
>> --- a/system/qdev-monitor.c
>> +++ b/system/qdev-monitor.c
>> @@ -856,19 +856,18 @@ void qmp_device_add(QDict *qdict, QObject **ret_data, Error **errp)
>> return;
>> }
>> dev = qdev_device_add(opts, errp);
>> -
>> - /*
>> - * Drain all pending RCU callbacks. This is done because
>> - * some bus related operations can delay a device removal
>> - * (in this case this can happen if device is added and then
>> - * removed due to a configuration error)
>> - * to a RCU callback, but user might expect that this interface
>> - * will finish its job completely once qmp command returns result
>> - * to the user
>> - */
>> - drain_call_rcu();
>> -
>> if (!dev) {
>> + /*
>> + * Drain all pending RCU callbacks. This is done because
>> + * some bus related operations can delay a device removal
>> + * (in this case this can happen if device is added and then
>> + * removed due to a configuration error)
>> + * to a RCU callback, but user might expect that this interface
>> + * will finish its job completely once qmp command returns result
>> + * to the user
>> + */
>> + drain_call_rcu();
>> +
>> qemu_opts_del(opts);
>> return;
>> }
>> --
>> 2.34.1
>>
>>
>
--
Best regards,
Vladimir
* Re: [PATCH] system/qdev-monitor: move drain_call_rcu call under if (!dev) in qmp_device_add()
2023-11-03 10:56 [PATCH] system/qdev-monitor: move drain_call_rcu call under if (!dev) in qmp_device_add() Dmitrii Gavrilov
2023-11-07 7:28 ` Vladimir Sementsov-Ogievskiy
2023-11-07 7:32 ` Michael S. Tsirkin
@ 2024-02-28 17:12 ` Vladimir Sementsov-Ogievskiy
2024-02-29 21:24 ` Paolo Bonzini
` (2 subsequent siblings)
5 siblings, 0 replies; 11+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2024-02-28 17:12 UTC (permalink / raw)
To: Dmitrii Gavrilov, qemu-devel, Paolo Bonzini
Cc: pbonzini, berrange, eduardo, mlevitsk, yc-core
ping.
Hi again!
Paolo, could you please take a look? Could we merge this? It still applies to the master branch.
On 03.11.23 13:56, Dmitrii Gavrilov wrote:
> The original goal of adding drain_call_rcu to qmp_device_add was to cover
> the failure case of qdev_device_add. It seems the call to drain_call_rcu was
> misplaced in 7bed89958bfbf40df, which led to waiting for pending RCU
> callbacks on the happy path too, and thus to an overall performance
> degradation of qmp_device_add.
>
> This patch moves the call to drain_call_rcu under the handling of a
> qdev_device_add failure.
>
> Signed-off-by: Dmitrii Gavrilov <ds-gavr@yandex-team.ru>
> ---
> system/qdev-monitor.c | 23 +++++++++++------------
> 1 file changed, 11 insertions(+), 12 deletions(-)
>
> diff --git a/system/qdev-monitor.c b/system/qdev-monitor.c
> index 1b8005a..dc7b02d 100644
> --- a/system/qdev-monitor.c
> +++ b/system/qdev-monitor.c
> @@ -856,19 +856,18 @@ void qmp_device_add(QDict *qdict, QObject **ret_data, Error **errp)
> return;
> }
> dev = qdev_device_add(opts, errp);
> -
> - /*
> - * Drain all pending RCU callbacks. This is done because
> - * some bus related operations can delay a device removal
> - * (in this case this can happen if device is added and then
> - * removed due to a configuration error)
> - * to a RCU callback, but user might expect that this interface
> - * will finish its job completely once qmp command returns result
> - * to the user
> - */
> - drain_call_rcu();
> -
> if (!dev) {
> + /*
> + * Drain all pending RCU callbacks. This is done because
> + * some bus related operations can delay a device removal
> + * (in this case this can happen if device is added and then
> + * removed due to a configuration error)
> + * to a RCU callback, but user might expect that this interface
> + * will finish its job completely once qmp command returns result
> + * to the user
> + */
> + drain_call_rcu();
> +
> qemu_opts_del(opts);
> return;
> }
--
Best regards,
Vladimir
* Re: [PATCH] system/qdev-monitor: move drain_call_rcu call under if (!dev) in qmp_device_add()
2023-11-03 10:56 [PATCH] system/qdev-monitor: move drain_call_rcu call under if (!dev) in qmp_device_add() Dmitrii Gavrilov
` (2 preceding siblings ...)
2024-02-28 17:12 ` Vladimir Sementsov-Ogievskiy
@ 2024-02-29 21:24 ` Paolo Bonzini
2024-04-26 8:16 ` Marc Hartmayer
2024-04-30 14:27 ` Igor Mammedov
5 siblings, 0 replies; 11+ messages in thread
From: Paolo Bonzini @ 2024-02-29 21:24 UTC (permalink / raw)
To: Dmitrii Gavrilov
Cc: qemu-devel, pbonzini, berrange, eduardo, mlevitsk, vsementsov, yc-core
Queued, thanks.
Paolo
* Re: [PATCH] system/qdev-monitor: move drain_call_rcu call under if (!dev) in qmp_device_add()
2023-11-03 10:56 [PATCH] system/qdev-monitor: move drain_call_rcu call under if (!dev) in qmp_device_add() Dmitrii Gavrilov
` (3 preceding siblings ...)
2024-02-29 21:24 ` Paolo Bonzini
@ 2024-04-26 8:16 ` Marc Hartmayer
2024-04-26 8:57 ` Dmitrii Gavrilov
2024-04-30 14:27 ` Igor Mammedov
5 siblings, 1 reply; 11+ messages in thread
From: Marc Hartmayer @ 2024-04-26 8:16 UTC (permalink / raw)
To: Dmitrii Gavrilov, qemu-devel
Cc: pbonzini, berrange, eduardo, mlevitsk, vsementsov, ds-gavr,
yc-core, Christian Borntraeger, Benjamin Block
On Fri, Nov 03, 2023 at 01:56 PM +0300, Dmitrii Gavrilov <ds-gavr@yandex-team.ru> wrote:
> The original goal of adding drain_call_rcu to qmp_device_add was to cover
> the failure case of qdev_device_add. It seems the call to drain_call_rcu was
> misplaced in 7bed89958bfbf40df, which led to waiting for pending RCU
> callbacks on the happy path too, and thus to an overall performance
> degradation of qmp_device_add.
>
> This patch moves the call to drain_call_rcu under the handling of a
> qdev_device_add failure.
>
> Signed-off-by: Dmitrii Gavrilov <ds-gavr@yandex-team.ru>
I don't know the exact reason, but this commit causes udev events to
show up much more slowly than before (~3 s before vs. ~23 s after) when a
virtio-scsi device is hotplugged (I’ve tested this only on s390x).
Importantly, this only happens when asynchronous SCSI scanning is disabled
in the *guest* kernel (scsi_mod.scan=sync or CONFIG_SCSI_SCAN_ASYNC=n).
The `udevadm monitor` output captured while hotplugging the device
(using QEMU 012b170173bc ("system/qdev-monitor: move drain_call_rcu call
under if (!dev) in qmp_device_add()")):
…
KERNEL[2.166575] add /devices/css0/0.0.0002/0.0.0002 (ccw)
KERNEL[2.166594] bind /devices/css0/0.0.0002/0.0.0002 (ccw)
KERNEL[2.166826] add /devices/css0/0.0.0002/0.0.0002/virtio2 (virtio)
UDEV [2.166846] add /devices/css0/0.0.0002/0.0.0002 (ccw)
UDEV [2.167013] bind /devices/css0/0.0.0002/0.0.0002 (ccw)
KERNEL[2.167560] add /devices/virtual/workqueue/scsi_tmf_0 (workqueue)
UDEV [2.167977] add /devices/virtual/workqueue/scsi_tmf_0 (workqueue)
KERNEL[2.167987] add /devices/css0/0.0.0002/0.0.0002/virtio2/host0 (scsi)
KERNEL[2.167996] add /devices/css0/0.0.0002/0.0.0002/virtio2/host0/scsi_host/host0 (scsi_host)
KERNEL[2.169113] change /0:0:0:0 (scsi)
UDEV [2.169212] change /0:0:0:0 (scsi)
KERNEL[2.199500] add /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0 (scsi)
KERNEL[2.199513] add /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0 (scsi)
KERNEL[2.199523] add /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/scsi_device/0:0:0:0 (scsi_device)
KERNEL[2.199532] add /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/scsi_disk/0:0:0:0 (scsi_disk)
KERNEL[2.199564] add /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/scsi_generic/sg0 (scsi_generic)
KERNEL[2.199586] add /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/bsg/0:0:0:0 (bsg)
KERNEL[2.280482] add /devices/virtual/bdi/8:0 (bdi)
UDEV [2.280634] add /devices/virtual/bdi/8:0 (bdi)
KERNEL[3.060145] add /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/block/sda (block)
KERNEL[3.060160] bind /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0 (scsi)
KERNEL[22.160147] bind /devices/css0/0.0.0002/0.0.0002/virtio2 (virtio)
KERNEL[22.160161] add /bus/virtio/drivers/virtio_scsi (drivers)
KERNEL[22.160169] add /module/virtio_scsi (module)
UDEV [22.161078] add /devices/css0/0.0.0002/0.0.0002/virtio2 (virtio)
UDEV [22.161339] add /bus/virtio/drivers/virtio_scsi (drivers)
UDEV [22.161860] add /devices/css0/0.0.0002/0.0.0002/virtio2/host0 (scsi)
UDEV [22.161869] add /module/virtio_scsi (module)
UDEV [22.161880] add /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0 (scsi)
UDEV [22.161890] add /devices/css0/0.0.0002/0.0.0002/virtio2/host0/scsi_host/host0 (scsi_host)
UDEV [22.161901] add /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0 (scsi)
UDEV [22.161911] add /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/scsi_disk/0:0:0:0 (scsi_disk)
UDEV [22.161924] add /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/bsg/0:0:0:0 (bsg)
UDEV [22.161937] add /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/scsi_generic/sg0 (scsi_generic)
UDEV [22.162123] add /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/scsi_device/0:0:0:0 (scsi_device)
UDEV [22.468924] add /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/block/sda (block)
UDEV [22.473955] bind /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0 (scsi)
UDEV [22.473970] bind /devices/css0/0.0.0002/0.0.0002/virtio2 (virtio)
The `udevadm monitor` output without this commit (QEMU 9876359990dd ("hw/scsi/lsi53c895a: add timer to scripts processing")):
…
KERNEL[2.091114] add /devices/virtual/workqueue/scsi_tmf_0 (workqueue)
UDEV [2.091218] add /devices/virtual/workqueue/scsi_tmf_0 (workqueue)
KERNEL[2.091408] add /devices/css0/0.0.0002/0.0.0002/virtio2/host0 (scsi)
KERNEL[2.091418] add /devices/css0/0.0.0002/0.0.0002/virtio2/host0/scsi_host/host0 (scsi_host)
KERNEL[2.200461] bind /devices/css0/0.0.0002/0.0.0002/virtio2 (virtio)
KERNEL[2.200473] add /bus/virtio/drivers/virtio_scsi (drivers)
KERNEL[2.200481] add /module/virtio_scsi (module)
UDEV [2.200634] add /module/virtio_scsi (module)
UDEV [2.200678] add /devices/css0/0.0.0002/0.0.0002/virtio2 (virtio)
UDEV [2.200746] add /bus/virtio/drivers/virtio_scsi (drivers)
UDEV [2.200830] add /devices/css0/0.0.0002/0.0.0002/virtio2/host0 (scsi)
UDEV [2.200972] add /devices/css0/0.0.0002/0.0.0002/virtio2/host0/scsi_host/host0 (scsi_host)
UDEV [2.201148] bind /devices/css0/0.0.0002/0.0.0002/virtio2 (virtio)
KERNEL[2.201699] change /0:0:0:0 (scsi)
KERNEL[2.201734] change /0:0:0:0 (scsi)
UDEV [2.201815] change /0:0:0:0 (scsi)
UDEV [2.201888] change /0:0:0:0 (scsi)
KERNEL[2.222062] add /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0 (scsi)
KERNEL[2.222074] add /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0 (scsi)
KERNEL[2.222083] add /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/scsi_device/0:0:0:0 (scsi_device)
KERNEL[2.222092] add /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/scsi_disk/0:0:0:0 (scsi_disk)
KERNEL[2.222104] add /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/scsi_generic/sg0 (scsi_generic)
KERNEL[2.222127] add /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/bsg/0:0:0:0 (bsg)
UDEV [2.222241] add /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0 (scsi)
UDEV [2.222486] add /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0 (scsi)
UDEV [2.222667] add /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/scsi_disk/0:0:0:0 (scsi_disk)
UDEV [2.222715] add /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/bsg/0:0:0:0 (bsg)
UDEV [2.222877] add /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/scsi_device/0:0:0:0 (scsi_device)
UDEV [2.223116] add /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/scsi_generic/sg0 (scsi_generic)
KERNEL[2.303063] add /devices/virtual/bdi/8:0 (bdi)
UDEV [2.303197] add /devices/virtual/bdi/8:0 (bdi)
KERNEL[2.394175] add /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/block/sda (block)
KERNEL[2.394186] bind /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0 (scsi)
UDEV [2.706054] add /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0/block/sda (block)
UDEV [2.706075] bind /devices/css0/0.0.0002/0.0.0002/virtio2/host0/target0:0:0/0:0:0:0 (scsi)
I’ve used as host kernel 6.7.0-rc3-00033-ge72f947b4f0d and guest kernel
v6.5.0.
QEMU 'info qtree' output when the device was hotplugged:
bus: main-system-bus
type System
dev: s390-pcihost, id ""
x-config-reg-migration-enabled = true
bypass-iommu = false
bus: s390-pcibus.0
type s390-pcibus
bus: pci.0
type PCI
dev: virtual-css-bridge, id ""
css_dev_path = true
bus: virtual-css
type virtual-css-bus
dev: virtio-scsi-ccw, id "scsi0"
ioeventfd = true
max_revision = 2 (0x2)
devno = "fe.0.0002"
dev_id = "fe.0.0002"
subch_id = "fe.0.0002"
bus: virtio-bus
type virtio-ccw-bus
dev: virtio-scsi-device, id ""
num_queues = 1 (0x1)
virtqueue_size = 256 (0x100)
seg_max_adjust = true
max_sectors = 65535 (0xffff)
cmd_per_lun = 128 (0x80)
hotplug = true
param_change = true
indirect_desc = true
event_idx = true
notify_on_empty = true
any_layout = true
iommu_platform = false
packed = false
queue_reset = true
use-started = true
use-disabled-flag = true
x-disable-legacy-check = false
bus: scsi0.0
type SCSI
dev: scsi-generic, id "hostdev0"
drive = "libvirt-1-backend"
share-rw = false
io_timeout = 30 (0x1e)
channel = 0 (0x0)
scsi-id = 0 (0x0)
lun = 0 (0x0)
…
Any ideas?
Thanks in advance.
Kind regards,
Marc
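[Editor's note: a quick way to quantify the stall from `udevadm monitor` logs like the ones above is to diff the bracketed timestamps. The helper below is a hypothetical aid, not part of any QEMU or udev tooling; the sample lines are abbreviated from the log above.]

```python
import re

# Match the "[3.060145]" style timestamp in "KERNEL[...]" / "UDEV [...]" lines.
TS = re.compile(r"\[\s*([0-9]+\.[0-9]+)\]")

def largest_gap(lines):
    """Return (gap_seconds, index) of the biggest jump between consecutive events."""
    stamps = [float(m.group(1)) for line in lines if (m := TS.search(line))]
    gaps = [(b - a, i) for i, (a, b) in enumerate(zip(stamps, stamps[1:]))]
    return max(gaps) if gaps else (0.0, -1)

sample = [
    "KERNEL[3.060145] add .../0:0:0:0/block/sda (block)",
    "KERNEL[3.060160] bind .../0:0:0:0 (scsi)",
    "KERNEL[22.160147] bind .../virtio2 (virtio)",   # the ~19 s stall
]
gap, at = largest_gap(sample)
print(f"largest gap: {gap:.3f}s between events {at} and {at + 1}")
```

Run against the full first log, this points at the jump between the `bind` of the SCSI LUN and the `bind` of the virtio device, i.e. the window in which the synchronous scan is still in progress.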
* Re: [PATCH] system/qdev-monitor: move drain_call_rcu call under if (!dev) in qmp_device_add()
2024-04-26 8:16 ` Marc Hartmayer
@ 2024-04-26 8:57 ` Dmitrii Gavrilov
2024-04-26 11:01 ` Marc Hartmayer
0 siblings, 1 reply; 11+ messages in thread
From: Dmitrii Gavrilov @ 2024-04-26 8:57 UTC (permalink / raw)
To: Marc Hartmayer, qemu-devel
Cc: pbonzini, berrange, eduardo, mlevitsk, vsementsov, yc-core,
Christian Borntraeger, Benjamin Block
[-- Attachment #1: Type: text/html, Size: 11239 bytes --]
* Re: [PATCH] system/qdev-monitor: move drain_call_rcu call under if (!dev) in qmp_device_add()
2024-04-26 8:57 ` Dmitrii Gavrilov
@ 2024-04-26 11:01 ` Marc Hartmayer
0 siblings, 0 replies; 11+ messages in thread
From: Marc Hartmayer @ 2024-04-26 11:01 UTC (permalink / raw)
To: Dmitrii Gavrilov, qemu-devel
Cc: pbonzini, berrange, eduardo, mlevitsk, vsementsov, yc-core,
Christian Borntraeger, Benjamin Block
On Fri, Apr 26, 2024 at 11:57 AM +0300, Dmitrii Gavrilov <ds-gavr@yandex-team.ru> wrote:
> 26.04.2024, 11:17, "Marc Hartmayer" <mhartmay@linux.ibm.com>:
>
> On Fri, Nov 03, 2023 at 01:56 PM +0300, Dmitrii Gavrilov <ds-gavr@yandex-team.ru> wrote:
>
> The original goal of adding drain_call_rcu to qmp_device_add was to cover
> the failure case of qdev_device_add. It seems the call to drain_call_rcu was
> misplaced in 7bed89958bfbf40df, which led to waiting for pending RCU
> callbacks on the happy path too, and thus to an overall performance
> degradation of qmp_device_add.
>
> This patch moves the call to drain_call_rcu under the handling of a
> qdev_device_add failure.
>
> Signed-off-by: Dmitrii Gavrilov <ds-gavr@yandex-team.ru>
>
> I don't know the exact reason, but this commit caused udev events to
> show up much slower than before (~3s vs. ~23s) when a virtio-scsi device
> is hotplugged (I’ve tested this only on s390x). Importantly, this only
> happens when asynchronous SCSI scanning is disabled in the *guest*
> kernel (scsi_mod.scan=sync or CONFIG_SCSI_SCAN_ASYNC=n).
>
> The `udevadm monitor` output captured while hotplugging the device
> (using QEMU 012b170173bc ("system/qdev-monitor: move drain_call_rcu call
> under if (!dev) in qmp_device_add()")):
>
[…snip…]
> Any ideas?
>
> Thanks in advance.
>
> Kind regards,
> Marc
>
> Hello!
>
> Thank you for mentioning this.
>
> At first glance it seems that using SCSI in synchronous mode causes the
> global QEMU mutex to be held until the scanning phase is complete. Prior to
> 012b170173bc ("system/qdev-monitor: move drain_call_rcu call under
> if (!dev) in qmp_device_add()"), on each device addition the lock would be
> forcibly dropped, allowing callbacks (including the udev ones) to be
> processed after a new device is added.
>
> I'll try to investigate this further. But currently it appears to me to be
> a performance vs. observability dilemma.
I tried the test on my local laptop (x86_64) and there seems to be no
issue (I used the kernel cmdline option scsi_mod.scan=sync for the
guest) - guest and host kernel == 6.8.7. But please double check.
>
> Is the behaviour you mention consistent?
Yep, at least for more than 50 iterations (I stopped the test then).
>
> Best regards,
> Dmitrii
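[Editor's note: the lock-contention hypothesis quoted above — a global lock held across the synchronous scan starving other callbacks — can be illustrated with a generic analogue. Plain Python threading, nothing QEMU-specific; the names are illustrative stand-ins.]

```python
import threading
import time

big_lock = threading.Lock()   # stand-in for QEMU's global mutex (the BQL)
events = []

def sync_scan(hold_s):
    """Device addition that holds the global lock for the whole scan."""
    with big_lock:
        time.sleep(hold_s)    # synchronous SCSI scan runs under the lock
        events.append("scan done")

def deliver_event():
    """Callback (e.g. a udev notification) that also needs the lock."""
    with big_lock:
        events.append("event delivered")

t = threading.Thread(target=sync_scan, args=(0.2,))
t.start()
time.sleep(0.05)              # scan thread is now holding the lock
deliver_event()               # blocks until the scan releases it
t.join()
print(events)                 # the scan finishes before the event lands
```

If the lock were dropped mid-scan (as the pre-patch drain_call_rcu() happened to do), "event delivered" could land first; with the lock held throughout, every event is delayed until the scan completes.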
--
Kind regards / Beste Grüße
Marc Hartmayer
IBM Deutschland Research & Development GmbH
Vorsitzender des Aufsichtsrats: Wolfgang Wendt
Geschäftsführung: David Faller
Sitz der Gesellschaft: Böblingen
Registergericht: Amtsgericht Stuttgart, HRB 243294
* Re: [PATCH] system/qdev-monitor: move drain_call_rcu call under if (!dev) in qmp_device_add()
2023-11-03 10:56 [PATCH] system/qdev-monitor: move drain_call_rcu call under if (!dev) in qmp_device_add() Dmitrii Gavrilov
` (4 preceding siblings ...)
2024-04-26 8:16 ` Marc Hartmayer
@ 2024-04-30 14:27 ` Igor Mammedov
2024-04-30 19:33 ` boris.ostrovsky
5 siblings, 1 reply; 11+ messages in thread
From: Igor Mammedov @ 2024-04-30 14:27 UTC (permalink / raw)
To: Dmitrii Gavrilov
Cc: qemu-devel, pbonzini, berrange, eduardo, mlevitsk, vsementsov,
yc-core, boris.ostrovsky
On Fri, 3 Nov 2023 13:56:02 +0300
Dmitrii Gavrilov <ds-gavr@yandex-team.ru> wrote:
Seems related to the CPU hotplug issues,
CCing Boris for awareness.
> The original goal of adding drain_call_rcu to qmp_device_add was to cover
> the failure case of qdev_device_add. It seems the call to drain_call_rcu was
> misplaced in 7bed89958bfbf40df, which led to waiting for pending RCU
> callbacks on the happy path too, and thus to an overall performance
> degradation of qmp_device_add.
>
> This patch moves the call to drain_call_rcu under the handling of a
> qdev_device_add failure.
>
> Signed-off-by: Dmitrii Gavrilov <ds-gavr@yandex-team.ru>
> ---
> system/qdev-monitor.c | 23 +++++++++++------------
> 1 file changed, 11 insertions(+), 12 deletions(-)
>
> diff --git a/system/qdev-monitor.c b/system/qdev-monitor.c
> index 1b8005a..dc7b02d 100644
> --- a/system/qdev-monitor.c
> +++ b/system/qdev-monitor.c
> @@ -856,19 +856,18 @@ void qmp_device_add(QDict *qdict, QObject **ret_data, Error **errp)
> return;
> }
> dev = qdev_device_add(opts, errp);
> -
> - /*
> - * Drain all pending RCU callbacks. This is done because
> - * some bus related operations can delay a device removal
> - * (in this case this can happen if device is added and then
> - * removed due to a configuration error)
> - * to a RCU callback, but user might expect that this interface
> - * will finish its job completely once qmp command returns result
> - * to the user
> - */
> - drain_call_rcu();
> -
> if (!dev) {
> + /*
> + * Drain all pending RCU callbacks. This is done because
> + * some bus related operations can delay a device removal
> + * (in this case this can happen if device is added and then
> + * removed due to a configuration error)
> + * to a RCU callback, but user might expect that this interface
> + * will finish its job completely once qmp command returns result
> + * to the user
> + */
> + drain_call_rcu();
> +
> qemu_opts_del(opts);
> return;
> }
* Re: [PATCH] system/qdev-monitor: move drain_call_rcu call under if (!dev) in qmp_device_add()
2024-04-30 14:27 ` Igor Mammedov
@ 2024-04-30 19:33 ` boris.ostrovsky
0 siblings, 0 replies; 11+ messages in thread
From: boris.ostrovsky @ 2024-04-30 19:33 UTC (permalink / raw)
To: Igor Mammedov, Dmitrii Gavrilov
Cc: qemu-devel, pbonzini, berrange, eduardo, mlevitsk, vsementsov, yc-core
On 4/30/24 10:27 AM, Igor Mammedov wrote:
> On Fri, 3 Nov 2023 13:56:02 +0300
> Dmitrii Gavrilov <ds-gavr@yandex-team.ru> wrote:
>
> Seems related to cpu hotpug issues,
> CCing Boris for awareness.
Thank you Igor.
This patch appears to change the timing in my test, which makes the problem
much more difficult to reproduce. However, it can still be triggered if
I insert a delay after qdev_device_add() that is roughly equivalent to the
time drain_call_rcu() used to spend there.
(https://lore.kernel.org/kvm/534247e4-76d6-41d2-86c7-0155406ccd80@oracle.com/
for context)
-boris