* [Qemu-devel] question: disk missing in the guest contingently when hotplug several virtio scsi disks consecutively.
@ 2018-06-27 10:33 l00284672
2018-06-28 11:37 ` l00284672
2018-07-02 13:15 ` Stefan Hajnoczi
0 siblings, 2 replies; 8+ messages in thread
From: l00284672 @ 2018-06-27 10:33 UTC (permalink / raw)
To: famz, stefanha, kwolf, pbonzini; +Cc: qemu-devel, lizhengui
Hi, I found a bug where disks go missing (not all of them) in the
guest intermittently when several virtio-scsi disks are hotplugged
consecutively. After rebooting the guest, the missing disks appear
again.
The guest is CentOS 7.3 running on a CentOS 7.3 host, and the scsi
controllers are configured with an iothread. The scsi controller xml
is below:
<controller type='scsi' index='0' model='virtio-scsi'>
  <driver iothread='26'/>
  <alias name='scsi0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</controller>
<controller type='scsi' index='1' model='virtio-scsi'>
  <driver iothread='27'/>
  <alias name='scsi1'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
</controller>
<controller type='scsi' index='2' model='virtio-scsi'>
  <driver iothread='28'/>
  <alias name='scsi2'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
</controller>
<controller type='scsi' index='3' model='virtio-scsi'>
  <driver iothread='29'/>
  <alias name='scsi3'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x0c' function='0x0'/>
</controller>
If the scsi controllers are configured without an iothread, all disks
can be seen in the guest when several virtio-scsi disks are hotplugged
consecutively.
I think the biggest difference between the two configurations is that
scsi controllers with an iothread call virtio_notify_irqfd to notify
the guest, while scsi controllers without an iothread call
virtio_notify instead. What makes the difference? Could interrupts be
lost when calling virtio_notify_irqfd due to a race condition for some
unknown reason? Maybe someone more familiar with the scsi dataplane
can help.
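For reference, a simplified sketch of the two notification paths as I
understand them (abbreviated from hw/virtio/virtio.c; tracing, RCU and
MSI-X details omitted, so treat it as illustrative rather than exact):

/* dataplane (iothread) path: signal the guest through the irqfd eventfd */
void virtio_notify_irqfd(VirtIODevice *vdev, VirtQueue *vq)
{
    if (!virtio_should_notify(vdev, vq)) {
        return;
    }
    virtio_set_isr(vq->vdev, 0x1);
    /* KVM injects the interrupt when the eventfd is signalled */
    event_notifier_set(&vq->guest_notifier);
}

/* main-loop path: raise the interrupt directly */
void virtio_notify(VirtIODevice *vdev, VirtQueue *vq)
{
    if (!virtio_should_notify(vdev, vq)) {
        return;
    }
    virtio_irq(vq);    /* sets ISR and calls virtio_notify_vector() */
}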
Thanks for your reply!
* Re: [Qemu-devel] question: disk missing in the guest contingently when hotplug several virtio scsi disks consecutively.
2018-06-27 10:33 [Qemu-devel] question: disk missing in the guest contingently when hotplug several virtio scsi disks consecutively l00284672
@ 2018-06-28 11:37 ` l00284672
2018-07-02 13:15 ` Stefan Hajnoczi
1 sibling, 0 replies; 8+ messages in thread
From: l00284672 @ 2018-06-28 11:37 UTC (permalink / raw)
To: famz, stefanha, kwolf, pbonzini; +Cc: qemu-devel, lizhengui
ping
On 2018/6/27 18:33, l00284672 wrote:
>
> Hi, I found a bug where disks go missing (not all of them) in the
> guest intermittently when several virtio-scsi disks are hotplugged
> consecutively. After rebooting the guest, the missing disks appear
> again.
>
> The guest is CentOS 7.3 running on a CentOS 7.3 host, and the scsi
> controllers are configured with an iothread. The scsi controller xml
> is below:
>
> <controller type='scsi' index='0' model='virtio-scsi'>
>   <driver iothread='26'/>
>   <alias name='scsi0'/>
>   <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
> </controller>
> <controller type='scsi' index='1' model='virtio-scsi'>
>   <driver iothread='27'/>
>   <alias name='scsi1'/>
>   <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
> </controller>
> <controller type='scsi' index='2' model='virtio-scsi'>
>   <driver iothread='28'/>
>   <alias name='scsi2'/>
>   <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
> </controller>
> <controller type='scsi' index='3' model='virtio-scsi'>
>   <driver iothread='29'/>
>   <alias name='scsi3'/>
>   <address type='pci' domain='0x0000' bus='0x00' slot='0x0c' function='0x0'/>
> </controller>
>
>
> If the scsi controllers are configured without an iothread, all disks
> can be seen in the guest when several virtio-scsi disks are hotplugged
> consecutively.
>
> I think the biggest difference between the two configurations is that
> scsi controllers with an iothread call virtio_notify_irqfd to notify
> the guest, while scsi controllers without an iothread call
> virtio_notify instead. What makes the difference? Could interrupts be
> lost when calling virtio_notify_irqfd due to a race condition for some
> unknown reason? Maybe someone more familiar with the scsi dataplane
> can help.
> Thanks for your reply!
>
>
* Re: [Qemu-devel] question: disk missing in the guest contingently when hotplug several virtio scsi disks consecutively.
2018-06-27 10:33 [Qemu-devel] question: disk missing in the guest contingently when hotplug several virtio scsi disks consecutively l00284672
2018-06-28 11:37 ` l00284672
@ 2018-07-02 13:15 ` Stefan Hajnoczi
2018-07-03 7:20 ` l00284672
1 sibling, 1 reply; 8+ messages in thread
From: Stefan Hajnoczi @ 2018-07-02 13:15 UTC (permalink / raw)
To: l00284672; +Cc: famz, kwolf, pbonzini, qemu-devel
On Wed, Jun 27, 2018 at 06:33:16PM +0800, l00284672 wrote:
> Hi, I found a bug where disks go missing (not all of them) in the guest
> intermittently when several virtio-scsi disks are hotplugged
> consecutively. After rebooting the guest,
For the record, there is also a bug report here:
https://bugs.launchpad.net/qemu/+bug/1779120
I plan to reproduce this issue soon.
Stefan
* Re: [Qemu-devel] question: disk missing in the guest contingently when hotplug several virtio scsi disks consecutively.
2018-07-02 13:15 ` Stefan Hajnoczi
@ 2018-07-03 7:20 ` l00284672
2018-07-03 13:30 ` l00284672
2018-07-03 16:27 ` Paolo Bonzini
0 siblings, 2 replies; 8+ messages in thread
From: l00284672 @ 2018-07-03 7:20 UTC (permalink / raw)
To: Stefan Hajnoczi; +Cc: famz, kwolf, pbonzini, lizhengui, Fangyi (C), qemu-devel
The disks go missing because scsi_probe_lun fails in the guest. The
guest code is below:
static int scsi_probe_lun(struct scsi_device *sdev, unsigned char *inq_result,
                          int result_len, int *bflags)
{
    .........
    result = scsi_execute_req(sdev, scsi_cmd, DMA_FROM_DEVICE,
                              inq_result, try_inquiry_len, &sshdr,
                              HZ / 2 + HZ * scsi_inq_timeout, 3,
                              &resid);
    .........
}
The scsi inquiry request from the guest is cancelled by qemu. The qemu
backtrace is below:
(gdb) bt
#0 scsi_req_cancel_async (req=0x7f86d00055c0, notifier=0x0) at
hw/scsi/scsi-bus.c:1825
#1 0x000055b50ac254c7 in scsi_device_purge_requests
(sdev=sdev@entry=0x55b50f9a8990, sense=...) at hw/scsi/scsi-bus.c:1925
#2 0x000055b50ac1d337 in scsi_disk_reset (dev=0x55b50f9a8990) at
hw/scsi/scsi-disk.c:2186
#3 0x000055b50ab95d91 in device_set_realized (obj=<optimized out>,
value=<optimized out>, errp=0x7ffedda81898) at hw/core/qdev.c:949
#4 0x000055b50acd167e in property_set_bool (obj=0x55b50f9a8990,
v=<optimized out>, name=<optimized out>, opaque=0x55b50f9c1b50,
errp=0x7ffedda81898) at qom/object.c:1854
#5 0x000055b50acd5091 in object_property_set_qobject
(obj=obj@entry=0x55b50f9a8990, value=value@entry=0x55b50f997350,
name=name@entry=0x55b50ae0260b "realized",
errp=errp@entry=0x7ffedda81898) at qom/qom-qobject.c:27
#6 0x000055b50acd3210 in object_property_set_bool (obj=0x55b50f9a8990,
value=<optimized out>, name=0x55b50ae0260b "realized",
errp=0x7ffedda81898) at qom/object.c:1157
#7 0x000055b50ab1e0b5 in qdev_device_add
(opts=opts@entry=0x55b50ceb9880, errp=errp@entry=0x7ffedda81978) at
qdev-monitor.c:627
#8 0x000055b50ab1e68b in qmp_device_add (qdict=<optimized out>,
ret_data=<optimized out>, errp=0x7ffedda819c0) at qdev-monitor.c:807
#9 0x000055b50ad78787 in do_qmp_dispatch (errp=0x7ffedda819b8,
request=0x55b50c9605c0) at qapi/qmp-dispatch.c:114
#10 qmp_dispatch (request=request@entry=0x55b50d447000) at
qapi/qmp-dispatch.c:141
#11 0x000055b50aa3a102 in handle_qmp_command (parser=<optimized out>,
tokens=<optimized out>) at
/mnt/sdb/lzg/code/UVP_2.5_CODE/qemu/monitor.c:3907
#12 0x000055b50ad7da54 in json_message_process_token
(lexer=0x55b50c5d4458, input=0x55b50c55a6e0, type=JSON_RCURLY, x=181,
y=1449) at qobject/json-streamer.c:105
#13 0x000055b50ad9fdfb in json_lexer_feed_char
(lexer=lexer@entry=0x55b50c5d4458, ch=125 '}', flush=flush@entry=false)
at qobject/json-lexer.c:319
#14 0x000055b50ad9febe in json_lexer_feed (lexer=0x55b50c5d4458,
buffer=<optimized out>, size=<optimized out>) at qobject/json-lexer.c:369
#15 0x000055b50ad7db19 in json_message_parser_feed (parser=<optimized
out>, buffer=<optimized out>, size=<optimized out>) at
qobject/json-streamer.c:124
#16 0x000055b50aa389bb in monitor_qmp_read (opaque=<optimized out>,
buf=<optimized out>, size=<optimized out>) at
/mnt/sdb/lzg/code/UVP_2.5_CODE/qemu/monitor.c:3937
#17 0x000055b50ab23e26 in tcp_chr_read (chan=<optimized out>,
cond=<optimized out>, opaque=0x55b50c5d0fa0) at qemu-char.c:3253
#18 0x00007f8754cc499a in g_main_context_dispatch () from
/usr/lib64/libglib-2.0.so.0
#19 0x000055b50acded8c in glib_pollfds_poll () at main-loop.c:228
#20 os_host_main_loop_wait (timeout=<optimized out>) at main-loop.c:273
#21 main_loop_wait (nonblocking=<optimized out>) at main-loop.c:521
#22 0x000055b50a9ff8ff in main_loop () at vl.c:2095
static void device_set_realized(Object *obj, bool value, Error **errp)
{
    .......
    if (hotplug_ctrl) {
        hotplug_handler_plug(hotplug_ctrl, dev, &local_err);
    }
    .......
    if (dev->hotplugged) {
        device_reset(dev);
    }
    ......
}
hotplug_handler_plug calls virtio_scsi_hotplug. If the iothread handles
a request between hotplug_handler_plug and device_reset, the request
will be cancelled by scsi_device_purge_requests in device_reset.
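To make the race window concrete, here is the sequence as I understand
it (a schematic timeline, not actual code):

/*
 * main loop (QMP device_add)            iothread
 * --------------------------            --------
 * hotplug_handler_plug()
 *   -> virtio_scsi_hotplug()
 *      -> virtio_scsi_push_event()      guest sees the rescan event,
 *                                       sends INQUIRY; the iothread
 *                                       starts handling the request
 * device_reset()
 *   -> scsi_disk_reset()
 *      -> scsi_device_purge_requests()  the in-flight INQUIRY is
 *                                       cancelled, scsi_probe_lun()
 *                                       fails, and the disk is never
 *                                       instantiated in the guest
 */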
On 2018/7/2 21:15, Stefan Hajnoczi wrote:
> On Wed, Jun 27, 2018 at 06:33:16PM +0800, l00284672 wrote:
>> Hi, I found a bug where disks go missing (not all of them) in the guest
>> intermittently when several virtio-scsi disks are hotplugged
>> consecutively. After rebooting the guest,
> For the record, there is also a bug report here:
>
> https://bugs.launchpad.net/qemu/+bug/1779120
>
> I plan to reproduce this issue soon.
>
> Stefan
* Re: [Qemu-devel] question: disk missing in the guest contingently when hotplug several virtio scsi disks consecutively.
2018-07-03 7:20 ` l00284672
@ 2018-07-03 13:30 ` l00284672
2018-07-04 12:50 ` Paolo Bonzini
2018-07-03 16:27 ` Paolo Bonzini
1 sibling, 1 reply; 8+ messages in thread
From: l00284672 @ 2018-07-03 13:30 UTC (permalink / raw)
To: Stefan Hajnoczi; +Cc: famz, kwolf, pbonzini, Fangyi (C), qemu-devel
I got a solution, the patch is below:
diff --git a/hw/scsi/scsi-disk.c b/hw/scsi/scsi-disk.c
index 608fb18..4cdc2bb 100644
--- a/hw/scsi/scsi-disk.c
+++ b/hw/scsi/scsi-disk.c
@@ -2184,7 +2184,9 @@ static void scsi_disk_reset(DeviceState *dev)
     SCSIDiskState *s = DO_UPCAST(SCSIDiskState, qdev.qdev, dev);
     uint64_t nb_sectors;
 
-    scsi_device_purge_requests(&s->qdev, SENSE_CODE(RESET));
+    if (dev->realized) {
+        scsi_device_purge_requests(&s->qdev, SENSE_CODE(RESET));
+    }
 
     blk_get_geometry(s->qdev.conf.blk, &nb_sectors);
     nb_sectors /= s->qdev.blocksize / 512;
The commit
https://git.qemu.org/?p=qemu.git;a=commitdiff;h=e9447f35718439c1affdee3ef69b2fee50c8106c;hp=4c64d5b52ed5287bb31489bba39cf41628230bc
introduced this problem.
I think there should not be any IO requests before the device is
realized, but I have no idea how to avoid that after
virtio_scsi_push_event has been called.
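For context, the event is pushed from the plug callback itself, i.e.
while device_set_realized is still running. Abbreviated from
hw/scsi/virtio-scsi.c (quoted from memory, details trimmed):

static void virtio_scsi_hotplug(HotplugHandler *hotplug_dev, DeviceState *dev,
                                Error **errp)
{
    VirtIODevice *vdev = VIRTIO_DEVICE(hotplug_dev);
    VirtIOSCSI *s = VIRTIO_SCSI(vdev);
    SCSIDevice *sd = SCSI_DEVICE(dev);
    ...
    if (virtio_vdev_has_feature(vdev, VIRTIO_SCSI_F_HOTPLUG)) {
        virtio_scsi_acquire(s);
        /* the guest can react to this event immediately, i.e. before
         * device_set_realized has run device_reset */
        virtio_scsi_push_event(s, sd, VIRTIO_SCSI_T_TRANSPORT_RESET,
                               VIRTIO_SCSI_EVT_RESET_RESCAN);
        virtio_scsi_release(s);
    }
}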
On 2018/7/3 15:20, l00284672 wrote:
> The disks go missing because scsi_probe_lun fails in the guest. The
> guest code is below:
> static int scsi_probe_lun(struct scsi_device *sdev, unsigned char *inq_result,
>                           int result_len, int *bflags)
> {
>     .........
>     result = scsi_execute_req(sdev, scsi_cmd, DMA_FROM_DEVICE,
>                               inq_result, try_inquiry_len, &sshdr,
>                               HZ / 2 + HZ * scsi_inq_timeout, 3,
>                               &resid);
>     .........
> }
>
> The scsi inquiry request from the guest is cancelled by qemu. The qemu
> backtrace is below:
>
> (gdb) bt
> #0 scsi_req_cancel_async (req=0x7f86d00055c0, notifier=0x0) at
> hw/scsi/scsi-bus.c:1825
> #1 0x000055b50ac254c7 in scsi_device_purge_requests
> (sdev=sdev@entry=0x55b50f9a8990, sense=...) at hw/scsi/scsi-bus.c:1925
> #2 0x000055b50ac1d337 in scsi_disk_reset (dev=0x55b50f9a8990) at
> hw/scsi/scsi-disk.c:2186
> #3 0x000055b50ab95d91 in device_set_realized (obj=<optimized out>,
> value=<optimized out>, errp=0x7ffedda81898) at hw/core/qdev.c:949
> #4 0x000055b50acd167e in property_set_bool (obj=0x55b50f9a8990,
> v=<optimized out>, name=<optimized out>, opaque=0x55b50f9c1b50,
> errp=0x7ffedda81898) at qom/object.c:1854
> #5 0x000055b50acd5091 in object_property_set_qobject
> (obj=obj@entry=0x55b50f9a8990, value=value@entry=0x55b50f997350,
> name=name@entry=0x55b50ae0260b "realized",
> errp=errp@entry=0x7ffedda81898) at qom/qom-qobject.c:27
> #6 0x000055b50acd3210 in object_property_set_bool
> (obj=0x55b50f9a8990, value=<optimized out>, name=0x55b50ae0260b
> "realized", errp=0x7ffedda81898) at qom/object.c:1157
> #7 0x000055b50ab1e0b5 in qdev_device_add
> (opts=opts@entry=0x55b50ceb9880, errp=errp@entry=0x7ffedda81978) at
> qdev-monitor.c:627
> #8 0x000055b50ab1e68b in qmp_device_add (qdict=<optimized out>,
> ret_data=<optimized out>, errp=0x7ffedda819c0) at qdev-monitor.c:807
> #9 0x000055b50ad78787 in do_qmp_dispatch (errp=0x7ffedda819b8,
> request=0x55b50c9605c0) at qapi/qmp-dispatch.c:114
> #10 qmp_dispatch (request=request@entry=0x55b50d447000) at
> qapi/qmp-dispatch.c:141
> #11 0x000055b50aa3a102 in handle_qmp_command (parser=<optimized out>,
> tokens=<optimized out>) at
> /mnt/sdb/lzg/code/UVP_2.5_CODE/qemu/monitor.c:3907
> #12 0x000055b50ad7da54 in json_message_process_token
> (lexer=0x55b50c5d4458, input=0x55b50c55a6e0, type=JSON_RCURLY, x=181,
> y=1449) at qobject/json-streamer.c:105
> #13 0x000055b50ad9fdfb in json_lexer_feed_char
> (lexer=lexer@entry=0x55b50c5d4458, ch=125 '}',
> flush=flush@entry=false) at qobject/json-lexer.c:319
> #14 0x000055b50ad9febe in json_lexer_feed (lexer=0x55b50c5d4458,
> buffer=<optimized out>, size=<optimized out>) at qobject/json-lexer.c:369
> #15 0x000055b50ad7db19 in json_message_parser_feed (parser=<optimized
> out>, buffer=<optimized out>, size=<optimized out>) at
> qobject/json-streamer.c:124
> #16 0x000055b50aa389bb in monitor_qmp_read (opaque=<optimized out>,
> buf=<optimized out>, size=<optimized out>) at
> /mnt/sdb/lzg/code/UVP_2.5_CODE/qemu/monitor.c:3937
> #17 0x000055b50ab23e26 in tcp_chr_read (chan=<optimized out>,
> cond=<optimized out>, opaque=0x55b50c5d0fa0) at qemu-char.c:3253
> #18 0x00007f8754cc499a in g_main_context_dispatch () from
> /usr/lib64/libglib-2.0.so.0
> #19 0x000055b50acded8c in glib_pollfds_poll () at main-loop.c:228
> #20 os_host_main_loop_wait (timeout=<optimized out>) at main-loop.c:273
> #21 main_loop_wait (nonblocking=<optimized out>) at main-loop.c:521
> #22 0x000055b50a9ff8ff in main_loop () at vl.c:2095
>
> static void device_set_realized(Object *obj, bool value, Error **errp)
> {
>     .......
>     if (hotplug_ctrl) {
>         hotplug_handler_plug(hotplug_ctrl, dev, &local_err);
>     }
>     .......
>     if (dev->hotplugged) {
>         device_reset(dev);
>     }
>     ......
> }
>
> hotplug_handler_plug calls virtio_scsi_hotplug. If the iothread
> handles a request between hotplug_handler_plug and device_reset, the
> request will be cancelled by scsi_device_purge_requests in
> device_reset.
>
> On 2018/7/2 21:15, Stefan Hajnoczi wrote:
>> On Wed, Jun 27, 2018 at 06:33:16PM +0800, l00284672 wrote:
>>> Hi, I found a bug where disks go missing (not all of them) in the
>>> guest intermittently when several virtio-scsi disks are hotplugged
>>> consecutively. After rebooting the guest,
>> For the record, there is also a bug report here:
>>
>> https://bugs.launchpad.net/qemu/+bug/1779120
>>
>> I plan to reproduce this issue soon.
>>
>> Stefan
>
* Re: [Qemu-devel] question: disk missing in the guest contingently when hotplug several virtio scsi disks consecutively.
2018-07-03 7:20 ` l00284672
2018-07-03 13:30 ` l00284672
@ 2018-07-03 16:27 ` Paolo Bonzini
2018-07-04 1:20 ` l00284672
1 sibling, 1 reply; 8+ messages in thread
From: Paolo Bonzini @ 2018-07-03 16:27 UTC (permalink / raw)
To: l00284672, Stefan Hajnoczi; +Cc: famz, kwolf, Fangyi (C), qemu-devel
On 03/07/2018 09:20, l00284672 wrote:
>
> }
>
> The scsi inquiry request from the guest is cancelled by qemu. The qemu
> backtrace is below:
>
> (gdb) bt
> #0 scsi_req_cancel_async (req=0x7f86d00055c0, notifier=0x0) at
> hw/scsi/scsi-bus.c:1825
> #1 0x000055b50ac254c7 in scsi_device_purge_requests
> (sdev=sdev@entry=0x55b50f9a8990, sense=...) at hw/scsi/scsi-bus.c:1925
> #2 0x000055b50ac1d337 in scsi_disk_reset (dev=0x55b50f9a8990) at
> hw/scsi/scsi-disk.c:2186
> #3 0x000055b50ab95d91 in device_set_realized (obj=<optimized out>,
> value=<optimized out>, errp=0x7ffedda81898) at hw/core/qdev.c:949
> #4 0x000055b50acd167e in property_set_bool (obj=0x55b50f9a8990,
> v=<optimized out>, name=<optimized out>, opaque=0x55b50f9c1b50,
> errp=0x7ffedda81898) at qom/object.c:1854
> #5 0x000055b50acd5091 in object_property_set_qobject
> (obj=obj@entry=0x55b50f9a8990, value=value@entry=0x55b50f997350,
> name=name@entry=0x55b50ae0260b "realized",
> errp=errp@entry=0x7ffedda81898) at qom/qom-qobject.c:27
> #6 0x000055b50acd3210 in object_property_set_bool (obj=0x55b50f9a8990,
> value=<optimized out>, name=0x55b50ae0260b "realized",
> errp=0x7ffedda81898) at qom/object.c:1157
> #7 0x000055b50ab1e0b5 in qdev_device_add
> (opts=opts@entry=0x55b50ceb9880, errp=errp@entry=0x7ffedda81978) at
> qdev-monitor.c:627
It looks like the hotplug event is sent too early to the guest?
Paolo
* Re: [Qemu-devel] question: disk missing in the guest contingently when hotplug several virtio scsi disks consecutively.
2018-07-03 16:27 ` Paolo Bonzini
@ 2018-07-04 1:20 ` l00284672
0 siblings, 0 replies; 8+ messages in thread
From: l00284672 @ 2018-07-04 1:20 UTC (permalink / raw)
To: Paolo Bonzini, Stefan Hajnoczi; +Cc: famz, kwolf, Fangyi (C), qemu-devel
Yes, I think so too.
On 2018/7/4 0:27, Paolo Bonzini wrote:
> On 03/07/2018 09:20, l00284672 wrote:
>> }
>>
>> The scsi inquiry request from the guest is cancelled by qemu. The qemu
>> backtrace is below:
>>
>> (gdb) bt
>> #0 scsi_req_cancel_async (req=0x7f86d00055c0, notifier=0x0) at
>> hw/scsi/scsi-bus.c:1825
>> #1 0x000055b50ac254c7 in scsi_device_purge_requests
>> (sdev=sdev@entry=0x55b50f9a8990, sense=...) at hw/scsi/scsi-bus.c:1925
>> #2 0x000055b50ac1d337 in scsi_disk_reset (dev=0x55b50f9a8990) at
>> hw/scsi/scsi-disk.c:2186
>> #3 0x000055b50ab95d91 in device_set_realized (obj=<optimized out>,
>> value=<optimized out>, errp=0x7ffedda81898) at hw/core/qdev.c:949
>> #4 0x000055b50acd167e in property_set_bool (obj=0x55b50f9a8990,
>> v=<optimized out>, name=<optimized out>, opaque=0x55b50f9c1b50,
>> errp=0x7ffedda81898) at qom/object.c:1854
>> #5 0x000055b50acd5091 in object_property_set_qobject
>> (obj=obj@entry=0x55b50f9a8990, value=value@entry=0x55b50f997350,
>> name=name@entry=0x55b50ae0260b "realized",
>> errp=errp@entry=0x7ffedda81898) at qom/qom-qobject.c:27
>> #6 0x000055b50acd3210 in object_property_set_bool (obj=0x55b50f9a8990,
>> value=<optimized out>, name=0x55b50ae0260b "realized",
>> errp=0x7ffedda81898) at qom/object.c:1157
>> #7 0x000055b50ab1e0b5 in qdev_device_add
>> (opts=opts@entry=0x55b50ceb9880, errp=errp@entry=0x7ffedda81978) at
>> qdev-monitor.c:627
> It looks like the hotplug event is sent too early to the guest?
>
> Paolo
>
>
* Re: [Qemu-devel] question: disk missing in the guest contingently when hotplug several virtio scsi disks consecutively.
2018-07-03 13:30 ` l00284672
@ 2018-07-04 12:50 ` Paolo Bonzini
0 siblings, 0 replies; 8+ messages in thread
From: Paolo Bonzini @ 2018-07-04 12:50 UTC (permalink / raw)
To: l00284672, Stefan Hajnoczi
Cc: famz, kwolf, Fangyi (C), qemu-devel, David Hildenbrand
On 03/07/2018 15:30, l00284672 wrote:
> I got a solution, the patch is below:
>
> diff --git a/hw/scsi/scsi-disk.c b/hw/scsi/scsi-disk.c
> index 608fb18..4cdc2bb 100644
> --- a/hw/scsi/scsi-disk.c
> +++ b/hw/scsi/scsi-disk.c
> @@ -2184,7 +2184,9 @@ static void scsi_disk_reset(DeviceState *dev)
>      SCSIDiskState *s = DO_UPCAST(SCSIDiskState, qdev.qdev, dev);
>      uint64_t nb_sectors;
> 
> -    scsi_device_purge_requests(&s->qdev, SENSE_CODE(RESET));
> +    if (dev->realized) {
> +        scsi_device_purge_requests(&s->qdev, SENSE_CODE(RESET));
> +    }
> 
>      blk_get_geometry(s->qdev.conf.blk, &nb_sectors);
>      nb_sectors /= s->qdev.blocksize / 512;
>
> The commit
> https://git.qemu.org/?p=qemu.git;a=commitdiff;h=e9447f35718439c1affdee3ef69b2fee50c8106c;hp=4c64d5b52ed5287bb31489bba39cf41628230bc
> introduced this problem.
>
> I think there should not be any IO requests before the device is
> realized, but I have no idea how to avoid that after
> virtio_scsi_push_event has been called.
virtio_scsi_push_event is called too soon. David, now that you have
introduced pre_plug in HotplugHandler, it seems to me that the "plug"
callback should be called after dev->realized is set to true.
In general, this means splitting the existing plug callbacks into a
pre_plug that checks for errors and a plug that does the operation.
However, the one use of pre_plug that you introduced for memory hotplug
has a plug callback that returns errors (namely pc_memory_plug via
pc_dimm_plug, where it sets properties). I must be missing something:
why are properties being set after the realize method is called? Could
these operations actually fail, or could the object_property_set_* calls
use &error_abort as the error pointer?
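Concretely, I mean something like this ordering in device_set_realized
(a rough sketch of the idea, untested; the error handling and the fail
label are schematic):

static void device_set_realized(Object *obj, bool value, Error **errp)
{
    ...
    if (hotplug_ctrl) {
        /* may fail: checks only, no guest-visible side effects */
        hotplug_handler_pre_plug(hotplug_ctrl, dev, &local_err);
        if (local_err) {
            goto fail;
        }
    }

    dc->realize(dev, &local_err);
    ...
    dev->realized = true;

    if (hotplug_ctrl) {
        /* must not fail: performs the operation, may notify the guest */
        hotplug_handler_plug(hotplug_ctrl, dev, &error_abort);
    }
    ...
}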
Thanks,
Paolo
>
> On 2018/7/3 15:20, l00284672 wrote:
>> The disks go missing because scsi_probe_lun fails in the guest. The
>> guest code is below:
>> static int scsi_probe_lun(struct scsi_device *sdev, unsigned char *inq_result,
>>                           int result_len, int *bflags)
>> {
>>     .........
>>     result = scsi_execute_req(sdev, scsi_cmd, DMA_FROM_DEVICE,
>>                               inq_result, try_inquiry_len, &sshdr,
>>                               HZ / 2 + HZ * scsi_inq_timeout, 3,
>>                               &resid);
>>     .........
>> }
>>
>> The scsi inquiry request from the guest is cancelled by qemu. The qemu
>> backtrace is below:
>>
>> (gdb) bt
>> #0 scsi_req_cancel_async (req=0x7f86d00055c0, notifier=0x0) at
>> hw/scsi/scsi-bus.c:1825
>> #1 0x000055b50ac254c7 in scsi_device_purge_requests
>> (sdev=sdev@entry=0x55b50f9a8990, sense=...) at hw/scsi/scsi-bus.c:1925
>> #2 0x000055b50ac1d337 in scsi_disk_reset (dev=0x55b50f9a8990) at
>> hw/scsi/scsi-disk.c:2186
>> #3 0x000055b50ab95d91 in device_set_realized (obj=<optimized out>,
>> value=<optimized out>, errp=0x7ffedda81898) at hw/core/qdev.c:949
>> #4 0x000055b50acd167e in property_set_bool (obj=0x55b50f9a8990,
>> v=<optimized out>, name=<optimized out>, opaque=0x55b50f9c1b50,
>> errp=0x7ffedda81898) at qom/object.c:1854
>> #5 0x000055b50acd5091 in object_property_set_qobject
>> (obj=obj@entry=0x55b50f9a8990, value=value@entry=0x55b50f997350,
>> name=name@entry=0x55b50ae0260b "realized",
>> errp=errp@entry=0x7ffedda81898) at qom/qom-qobject.c:27
>> #6 0x000055b50acd3210 in object_property_set_bool
>> (obj=0x55b50f9a8990, value=<optimized out>, name=0x55b50ae0260b
>> "realized", errp=0x7ffedda81898) at qom/object.c:1157
>> #7 0x000055b50ab1e0b5 in qdev_device_add
>> (opts=opts@entry=0x55b50ceb9880, errp=errp@entry=0x7ffedda81978) at
>> qdev-monitor.c:627
>> #8 0x000055b50ab1e68b in qmp_device_add (qdict=<optimized out>,
>> ret_data=<optimized out>, errp=0x7ffedda819c0) at qdev-monitor.c:807
>> #9 0x000055b50ad78787 in do_qmp_dispatch (errp=0x7ffedda819b8,
>> request=0x55b50c9605c0) at qapi/qmp-dispatch.c:114
>> #10 qmp_dispatch (request=request@entry=0x55b50d447000) at
>> qapi/qmp-dispatch.c:141
>> #11 0x000055b50aa3a102 in handle_qmp_command (parser=<optimized out>,
>> tokens=<optimized out>) at
>> /mnt/sdb/lzg/code/UVP_2.5_CODE/qemu/monitor.c:3907
>> #12 0x000055b50ad7da54 in json_message_process_token
>> (lexer=0x55b50c5d4458, input=0x55b50c55a6e0, type=JSON_RCURLY, x=181,
>> y=1449) at qobject/json-streamer.c:105
>> #13 0x000055b50ad9fdfb in json_lexer_feed_char
>> (lexer=lexer@entry=0x55b50c5d4458, ch=125 '}',
>> flush=flush@entry=false) at qobject/json-lexer.c:319
>> #14 0x000055b50ad9febe in json_lexer_feed (lexer=0x55b50c5d4458,
>> buffer=<optimized out>, size=<optimized out>) at qobject/json-lexer.c:369
>> #15 0x000055b50ad7db19 in json_message_parser_feed (parser=<optimized
>> out>, buffer=<optimized out>, size=<optimized out>) at
>> qobject/json-streamer.c:124
>> #16 0x000055b50aa389bb in monitor_qmp_read (opaque=<optimized out>,
>> buf=<optimized out>, size=<optimized out>) at
>> /mnt/sdb/lzg/code/UVP_2.5_CODE/qemu/monitor.c:3937
>> #17 0x000055b50ab23e26 in tcp_chr_read (chan=<optimized out>,
>> cond=<optimized out>, opaque=0x55b50c5d0fa0) at qemu-char.c:3253
>> #18 0x00007f8754cc499a in g_main_context_dispatch () from
>> /usr/lib64/libglib-2.0.so.0
>> #19 0x000055b50acded8c in glib_pollfds_poll () at main-loop.c:228
>> #20 os_host_main_loop_wait (timeout=<optimized out>) at main-loop.c:273
>> #21 main_loop_wait (nonblocking=<optimized out>) at main-loop.c:521
>> #22 0x000055b50a9ff8ff in main_loop () at vl.c:2095
>>
>> static void device_set_realized(Object *obj, bool value, Error **errp)
>> {
>>     .......
>>     if (hotplug_ctrl) {
>>         hotplug_handler_plug(hotplug_ctrl, dev, &local_err);
>>     }
>>     .......
>>     if (dev->hotplugged) {
>>         device_reset(dev);
>>     }
>>     ......
>> }
>>
>> hotplug_handler_plug calls virtio_scsi_hotplug. If the iothread
>> handles a request between hotplug_handler_plug and device_reset, the
>> request will be cancelled by scsi_device_purge_requests in
>> device_reset.
>>
>> On 2018/7/2 21:15, Stefan Hajnoczi wrote:
>>> On Wed, Jun 27, 2018 at 06:33:16PM +0800, l00284672 wrote:
>>>> Hi, I found a bug where disks go missing (not all of them) in the
>>>> guest intermittently when several virtio-scsi disks are hotplugged
>>>> consecutively. After rebooting the guest,
>>> For the record, there is also a bug report here:
>>>
>>> https://bugs.launchpad.net/qemu/+bug/1779120
>>>
>>> I plan to reproduce this issue soon.
>>>
>>> Stefan
>>
>