From: xuan <1888818@bugs.launchpad.net>
To: qemu-devel@nongnu.org
Subject: [Bug 1888818] Re: Multi-queue vhost-user fails to reconnect with qemu version >=4.2
Date: Sat, 08 May 2021 05:26:54 -0000 [thread overview]
Message-ID: <162045161521.9721.11517419123272382349.launchpad@chaenomeles.canonical.com> (raw)
In-Reply-To: 159558183424.11837.7512442025195132206.malonedeb@wampee.canonical.com
** Changed in: qemu
Status: Incomplete => New
--
You received this bug notification because you are a member of
qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1888818
Title:
Multi-queue vhost-user fails to reconnect with qemu version >=4.2
Status in QEMU:
New
Bug description:
Test Environment:
DPDK version: DPDK v20.08
Other software versions: qemu4.2.0, qemu5.0.0.
OS: Linux 4.15.0-20-generic
Compiler: gcc (Ubuntu 7.3.0-16ubuntu3) 8.4.0
Hardware platform: Purley.
Test Setup
Steps to reproduce
Test flow
=========
1. Launch vhost-user testpmd as port0 with 2 queues:
./x86_64-native-linuxapp-gcc/app/testpmd -l 2-4 -n 4 \
--file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=1' -- -i --txd=1024 --rxd=1024 --txq=2 --rxq=2
testpmd>start
2. Launch qemu with virtio-net:
taskset -c 13 \
qemu-system-x86_64 -name us-vhost-vm1 \
-cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
-numa node,memdev=mem \
-mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -netdev user,id=yinan,hostfwd=tcp:127.0.0.1:6005-:22 -device e1000,netdev=yinan \
-smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu16.img \
-chardev socket,id=char0,path=./vhost-net,server \
-netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=2 \
-device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,host_tso4=on,guest_tso4=on,mq=on,vectors=15 \
-vnc :10 -daemonize
3. Quit testpmd and restart vhost-user:
testpmd>quit
./x86_64-native-linuxapp-gcc/app/testpmd -l 2-4 -n 4 \
--file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=1' -- -i --txd=1024 --rxd=1024 --txq=2 --rxq=2
Expected Result:
After vhost-user is killed and re-launched, the virtio-net device should reconnect to vhost-user.
Actual Result:
Vhost-user relaunch fails, with this log line printed continuously: "VHOST_CONFIG: Processing VHOST_USER_SET_FEATURES failed."
Analysis:
This is a regression; the first bad commit is c6beefd674f.
When vhost-user quits, QEMU does not save the acked features for each virtio-net device. When vhost-user reconnects to QEMU, QEMU sends two different feature sets in succession (one is the real acked features, the other is 0x40000000), which causes vhost-user to exit abnormally.
To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1888818/+subscriptions
Thread overview: 4+ messages
2020-07-24 9:10 [Bug 1888818] [NEW] Multi-queue vhost-user fails to reconnect with qemu version >=4.2 xuan
2021-05-07 8:24 ` [Bug 1888818] " Thomas Huth
2021-05-08 5:26 ` xuan [this message]
2021-05-12 11:02 ` Thomas Huth