From: Jason Wang <jasowang@redhat.com>
To: gaowanlong@cn.fujitsu.com
Cc: krkumar2@in.ibm.com, aliguori@us.ibm.com, kvm@vger.kernel.org,
mst@redhat.com, mprivozn@redhat.com, rusty@rustcorp.com.au,
qemu-devel@nongnu.org, stefanha@redhat.com,
jwhan@filewood.snu.ac.kr, shiyer@redhat.com
Subject: Re: [Qemu-devel] [PATCH 10/12] virtio-net: multiqueue support
Date: Thu, 10 Jan 2013 17:40:52 +0800 [thread overview]
Message-ID: <50EE8CA4.5080906@redhat.com> (raw)
In-Reply-To: <50EE84A9.8070301@cn.fujitsu.com>
On 01/10/2013 05:06 PM, Wanlong Gao wrote:
> On 01/10/2013 03:16 PM, Jason Wang wrote:
>> On Thursday, January 10, 2013 02:49:14 PM Wanlong Gao wrote:
>>> On 01/10/2013 02:43 PM, Jason Wang wrote:
>>>> On Wednesday, January 09, 2013 11:26:33 PM Jason Wang wrote:
>>>>> On 01/09/2013 06:01 PM, Wanlong Gao wrote:
>>>>>> On 01/09/2013 05:30 PM, Jason Wang wrote:
>>>>>>> On 01/09/2013 04:23 PM, Wanlong Gao wrote:
>>>>>>>> On 01/08/2013 06:14 PM, Jason Wang wrote:
>>>>>>>>> On 01/08/2013 06:00 PM, Wanlong Gao wrote:
>>>>>>>>>> On 01/08/2013 05:51 PM, Jason Wang wrote:
>>>>>>>>>>> On 01/08/2013 05:49 PM, Wanlong Gao wrote:
>>>>>>>>>>>> On 01/08/2013 05:29 PM, Jason Wang wrote:
>>>>>>>>>>>>> On 01/08/2013 05:07 PM, Wanlong Gao wrote:
>>>>>>>>>>>>>> On 12/28/2012 06:32 PM, Jason Wang wrote:
>>>>>>>>>>>>>>> + } else if (nc->peer->info->type !=
>>>>>>>>>>>>>>> NET_CLIENT_OPTIONS_KIND_TAP) {
>>>>>>>>>>>>>>> + ret = -1;
>>>>>>>>>>>>>>> + } else {
>>>>>>>>>>>>>>> + ret = tap_detach(nc->peer);
>>>>>>>>>>>>>>> + }
>>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>> + return ret;
>>>>>>>>>>>>>>> +}
>>>>>>>>>>>>>>> +
>>>> [...]
>>>>
>>>>>>>> I got a guest kernel panic when using it this way with queues=4.
>>>>>>> Does it happen with or without an fd parameter? What's the qemu command line?
>>>>>>> Did you meet it during boot time?
>>>>>> The QEMU command line is
>>>>>>
>>>>>> /work/git/qemu/x86_64-softmmu/qemu-system-x86_64 -name f17 -M pc-0.15 \
>>>>>> -enable-kvm -m 3096 -smp 4,sockets=4,cores=1,threads=1 \
>>>>>> -uuid c31a9f3e-4161-c53a-339c-5dc36d0497cb -no-user-config -nodefaults \
>>>>>> -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/f17.monitor,server,nowait \
>>>>>> -mon chardev=charmonitor,id=monitor,mode=control \
>>>>>> -rtc base=utc -no-shutdown \
>>>>>> -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
>>>>>> -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0xb,num_queues=4,hotplug=on \
>>>>>> -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 \
>>>>>> -drive file=/vm/f17.img,if=none,id=drive-virtio-disk0,format=qcow2 \
>>>>>> -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
>>>>>> -drive file=/vm2/f17-kernel.img,if=none,id=drive-virtio-disk1,format=qcow2 \
>>>>>> -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk1,id=virtio-disk1 \
>>>>>> -drive file=/vm/virtio-scsi/scsi3.img,if=none,id=drive-scsi0-0-2-0,format=raw \
>>>>>> -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=2,drive=drive-scsi0-0-2-0,id=scsi0-0-2-0,removable=on \
>>>>>> -drive file=/vm/virtio-scsi/scsi4.img,if=none,id=drive-scsi0-0-3-0,format=raw \
>>>>>> -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=3,drive=drive-scsi0-0-3-0,id=scsi0-0-3-0 \
>>>>>> -drive file=/vm/virtio-scsi/scsi1.img,if=none,id=drive-scsi0-0-0-0,format=raw \
>>>>>> -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0 \
>>>>>> -drive file=/vm/virtio-scsi/scsi2.img,if=none,id=drive-scsi0-0-1-0,format=raw \
>>>>>> -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi0-0-1-0,id=scsi0-0-1-0 \
>>>>>> -chardev pty,id=charserial0 \
>>>>>> -device isa-serial,chardev=charserial0,id=serial0 \
>>>>>> -chardev file,id=charserial1,path=/vm/f17.log \
>>>>>> -device isa-serial,chardev=charserial1,id=serial1 \
>>>>>> -device usb-tablet,id=input0 -vga std \
>>>>>> -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 \
>>>>>> -netdev tap,id=hostnet0,vhost=on,queues=4 \
>>>>>> -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:ce:7b:29,bus=pci.0,addr=0x3 \
>>>>>> -monitor stdio
>>>>>>
>>>>>> I got the panic just after booting the system; I did nothing and just
>>>>>> waited for a while, then the guest panicked.
>>>>>>
>>>>>> [   28.053004] BUG: soft lockup - CPU#1 stuck for 23s! [ip:592]
>>>>>> [   28.053004] Modules linked in: ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables uinput joydev microcode virtio_balloon pcspkr virtio_net i2c_piix4 i2c_core virtio_scsi virtio_blk floppy
>>>>>> [   28.053004] CPU 1
>>>>>> [   28.053004] Pid: 592, comm: ip Not tainted 3.8.0-rc1-net+ #3 Bochs Bochs
>>>>>> [   28.053004] RIP: 0010:[<ffffffff8137a9ab>]  [<ffffffff8137a9ab>] virtqueue_get_buf+0xb/0x120
>>>>>> [   28.053004] RSP: 0018:ffff8800bc913550  EFLAGS: 00000246
>>>>>> [   28.053004] RAX: 0000000000000000 RBX: ffff8800bc49c000 RCX: ffff8800bc49e000
>>>>>> [   28.053004] RDX: 0000000000000000 RSI: ffff8800bc913584 RDI: ffff8800bcfd4000
>>>>>> [   28.053004] RBP: ffff8800bc913558 R08: ffff8800bcfd0800 R09: 0000000000000000
>>>>>> [   28.053004] R10: ffff8800bc49c000 R11: ffff880036cc4de0 R12: ffff8800bcfd4000
>>>>>> [   28.053004] R13: ffff8800bc913558 R14: ffffffff8137ad73 R15: 00000000000200d0
>>>>>> [   28.053004] FS:  00007fb27a589740(0000) GS:ffff8800c1480000(0000) knlGS:0000000000000000
>>>>>> [   28.053004] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>>>>>> [   28.053004] CR2: 0000000000640530 CR3: 00000000baeff000 CR4: 00000000000006e0
>>>>>> [   28.053004] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
>>>>>> [   28.053004] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
>>>>>> [   28.053004] Process ip (pid: 592, threadinfo ffff8800bc912000, task ffff880036da2e20)
>>>>>> [   28.053004] Stack:
>>>>>> [   28.053004]  ffff8800bcfd0800 ffff8800bc913638 ffffffffa003e9bb ffff8800bc913656
>>>>>> [   28.053004]  0000000100000002 ffff8800c17ebb08 000000500000ff10 ffffea0002f244c0
>>>>>> [   28.053004]  0000000200000582 0000000000000000 0000000000000000 ffffea0002f244c0
>>>>>> [   28.053004] Call Trace:
>>>>>> [   28.053004]  [<ffffffffa003e9bb>] virtnet_send_command.constprop.26+0x24b/0x270 [virtio_net]
>>>>>> [   28.053004]  [<ffffffff812ed963>] ? sg_init_table+0x23/0x50
>>>>>> [   28.053004]  [<ffffffffa0040629>] virtnet_set_rx_mode+0x99/0x300 [virtio_net]
>>>>>> [   28.053004]  [<ffffffff8152306f>] __dev_set_rx_mode+0x5f/0xb0
>>>>>> [   28.053004]  [<ffffffff815230ef>] dev_set_rx_mode+0x2f/0x50
>>>>>> [   28.053004]  [<ffffffff815231b7>] __dev_open+0xa7/0xf0
>>>>>> [   28.053004]  [<ffffffff81523461>] __dev_change_flags+0xa1/0x180
>>>>>> [   28.053004]  [<ffffffff815235f8>] dev_change_flags+0x28/0x70
>>>>>> [   28.053004]  [<ffffffff8152ff20>] do_setlink+0x3b0/0xa50
>>>>>> [   28.053004]  [<ffffffff812fb6b1>] ? nla_parse+0x31/0xe0
>>>>>> [   28.053004]  [<ffffffff815325de>] rtnl_newlink+0x36e/0x580
>>>>>> [   28.053004]  [<ffffffff811355cc>] ? get_page_from_freelist+0x37c/0x730
>>>>>> [   28.053004]  [<ffffffff81531e13>] rtnetlink_rcv_msg+0x113/0x2f0
>>>>>> [   28.053004]  [<ffffffff8117d973>] ? __kmalloc_node_track_caller+0x63/0x1c0
>>>>>> [   28.053004]  [<ffffffff8151526b>] ? __alloc_skb+0x8b/0x2a0
>>>>>> [   28.053004]  [<ffffffff81531d00>] ? __rtnl_unlock+0x20/0x20
>>>>>> [   28.053004]  [<ffffffff8154b571>] netlink_rcv_skb+0xb1/0xc0
>>>>>> [   28.053004]  [<ffffffff8152ea05>] rtnetlink_rcv+0x25/0x40
>>>>>> [   28.053004]  [<ffffffff8154ae91>] netlink_unicast+0x1a1/0x220
>>>>>> [   28.053004]  [<ffffffff8154b211>] netlink_sendmsg+0x301/0x3c0
>>>>>> [   28.053004]  [<ffffffff81508530>] sock_sendmsg+0xb0/0xe0
>>>>>> [   28.053004]  [<ffffffff8113a45b>] ? lru_cache_add_lru+0x3b/0x60
>>>>>> [   28.053004]  [<ffffffff811608b7>] ? page_add_new_anon_rmap+0xc7/0x180
>>>>>> [   28.053004]  [<ffffffff81509efc>] __sys_sendmsg+0x3ac/0x3c0
>>>>>> [   28.053004]  [<ffffffff8162e47c>] ? __do_page_fault+0x23c/0x4d0
>>>>>> [   28.053004]  [<ffffffff8115c9ef>] ? do_brk+0x1ff/0x370
>>>>>> [   28.053004]  [<ffffffff8150bec9>] sys_sendmsg+0x49/0x90
>>>>>> [   28.053004]  [<ffffffff81632d59>] system_call_fastpath+0x16/0x1b
>>>>>> [   28.053004] Code: 04 0f ae f0 48 8b 47 50 5d 0f b7 50 02 66 39 57 64 0f 94 c0 c3 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 55 48 89 e5 41 54 <53> 80 7f 59 00 48 89 fb 0f 85 90 00 00 00 48 8b 47 50 0f b7 50
>>>>>>
>>>>>>
>>>>>>
>>>>>> The QEMU tree I used is git://github.com/jasowang/qemu.git
>>>>> Thanks a lot, I will try to reproduce it myself tomorrow. From the
>>>>> call trace, it looks like we sent a command to an rx/tx queue.
>>>> Right, the virtqueues that will not be used by a single queue guest
>>>> were initialized. Please try the following patch, or use my qemu.git
>>>> on github with this fix.
>>> It's odd; why didn't I get a guest panic when using your python start
>>> script this morning?
>>>
>> That's strange, I can reproduce it. Did you try booting a single queue guest
>> under a multiqueue virtio-net?
>>
>> It can only be triggered when you boot a single queue guest with
>> queues >= 2. Let's take 2 as an example. Without the patch, all virtqueues
>> are initialized even if the guest doesn't support multiqueue. So the ctrl vq
>> will be at index 4, but the guest thinks it's at index 2. The guest therefore
>> sends the command to an rx/tx queue and never gets a response.
> Anyway, with your updated github tree, the guest panic is gone.
Good to know that.
> As you say here, the guest panic is triggered by using a guest kernel that
> only supports single queue? But I think my guest kernel has always supported
> multiqueue virtio-net. Am I missing something about guest kernel support
> for multiqueue virtio-net?
I don't know the steps you used to set up your guest. I assume they were:
1) boot a 'legacy' kernel without a multiqueue virtio-net driver
2) install the new kernel with multiqueue support
3) reboot
So it looks like the hang can occur only in step 1, and only if you
start qemu with queues > 1. If you use queues=1 in step 1, you will
not get the hang. If you still keep the old kernel, you can reproduce it
by booting the old one with queues > 1.
> Thanks,
> Wanlong Gao
>
>> So if you use the python script to boot a single queue guest with queues=1,
>> or to boot a multiqueue guest, it is not triggerable.
>>
>> Thanks
>>> Thanks,
>>> Wanlong Gao
>>>
>>>> diff --git a/hw/virtio-net.c b/hw/virtio-net.c
>>>> index 8b4f079..cfd9af1 100644
>>>> --- a/hw/virtio-net.c
>>>> +++ b/hw/virtio-net.c
>> [...]
>>
>