From: "AIERPATIJIANG1 [艾尔帕提江·阿布都赛买提]" <AIERPATIJIANG1@kingsoft.com>
To: Laurent Vivier <lvivier@redhat.com>,
	"stefanha@redhat.com" <stefanha@redhat.com>
Cc: "amit@kernel.org" <amit@kernel.org>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"mst@redhat.com" <mst@redhat.com>
Subject: Re: [PATCH] hw/char/virtio-serial-bus: fix: Unpop throttled VirtQueueElement to queue before discard vq data
Date: Wed, 4 Aug 2021 08:34:46 +0000	[thread overview]
Message-ID: <A090670A-BD6D-452A-BE91-62783EB53C51@kingsoft.com> (raw)
In-Reply-To: <abe0d719-20e2-ebf3-c61a-c1778a461d5c@redhat.com>



On 2021/8/2 at 11:01 PM, "Laurent Vivier" <lvivier@redhat.com> wrote:

    On 30/07/2021 03:58, AIERPATIJIANG1 [艾尔帕提江·阿布都赛买提] wrote:
    >> Ports enter a "throttled" state when writing to the chardev would block.
    >> The current output VirtQueueElement is kept around until the chardev
    >> becomes writable again.
    >> 
    >> Because closing the virtio serial device does not reset the queue, we cannot
    >> simply discard this element; doing so leaves the queue's control state (such as
    >> used_index) inconsistent between the front end and the back end. We should
    >> instead unpop the VirtQueueElement back onto the queue and let discard_vq_data
    >> process it.
    >> 
    >> The test environment:
    >> kernel: linux-5.12
    >> QEMU command:
    >> qemu-system-x86_64 -machine pc,accel=kvm \
    >>     -cpu host,host-phys-bits \
    >>     -smp 4 \
    >>     -m 4G \
    >>     -kernel ./kernel \
    >>     -display none \
    >>     -nodefaults \
    >>     -serial mon:stdio \
    >>     -append "panic=1 no_timer_check noreplace-smp rootflags=data=ordered rootfstype=ext4 console=ttyS0 reboot=k root=/dev/vda1 rw" \
    >>     -drive id=os,file=./disk,if=none \
    >>     -device virtio-blk-pci,drive=os \
    >>     -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 \
    >>     -chardev socket,id=charchannel0,path=/tmp/char-dev-test,server,nowait \
    >>     -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0
    >> 
    >> Fill up the virtio queue after the VM has started:
    >> cat /large-file > /dev/vport1p1
    >> 
    >> Host side:
    >> Open and close character device sockets repeatedly
    >> 
    >> After a while we cannot write any request to /dev/vport1p1 on the VM side; the VM kernel hits a soft lockup in drivers/char/virtio_console.c:__send_to_port.
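
	For reference, the host-side open/close step can be scripted; a minimal, hypothetical reproducer (only the socket path matches the -chardev option above, everything else is illustrative):

/* Repeatedly connect to and close the virtserialport chardev socket while
 * the guest keeps writing to /dev/vport1p1. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };

    strncpy(addr.sun_path, "/tmp/char-dev-test", sizeof(addr.sun_path) - 1);

    for (;;) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");
        } else {
            usleep(100 * 1000);   /* hold the port open briefly, then close it */
        }
        close(fd);
    }
}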
    >> 
    >> 
    >> Signed-off-by: Arafatms <aierpatijiang1@kingsoft.com>
    >> 
    >> diff --git a/hw/char/virtio-serial-bus.c b/hw/char/virtio-serial-bus.c
    >> index dd6bc27b3b..36236defdf 100644
    >> --- a/hw/char/virtio-serial-bus.c
    >> +++ b/hw/char/virtio-serial-bus.c
    >> @@ -150,8 +150,12 @@ static void discard_vq_data(VirtQueue *vq, VirtIODevice *vdev)
    >> 
    >> static void discard_throttle_data(VirtIOSerialPort *port)
    >> {
    >> +    if (!virtio_queue_ready(port->ovq)) {
    >> +        return;
    >> +    }

    >Why do we need to check virtio_queue_ready() now?

	I think we should check virtio_queue_ready() before every operation on the virtqueues.
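
	For context, virtio_queue_ready() only tests whether the guest has set up the queue's available ring; roughly (simplified from hw/virtio/virtio.c):

int virtio_queue_ready(VirtQueue *vq)
{
    /* The avail ring address is 0 until the driver configures the queue,
     * and goes back to 0 on device reset. */
    return vq->vring.avail != 0;
}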

    >> +
    >>      if (port->elem) {
    >> -        virtqueue_detach_element(port->ovq, port->elem, 0);
    >> +        virtqueue_unpop(port->ovq, port->elem, 0);
    >>          g_free(port->elem);
    >>          port->elem = NULL;
    >>      }
    >> 

    > It seems the problem you report can only happen when the port is closed from the host side
    > (because from the guest side I guess the cleanup is done by the guest driver).

	Yes, this problem can only happen when the port is closed from the host side. 

    > Stefan, you have introduced the function discard_throttle_data() in:

    >  d4c19cdeeb2f ("virtio-serial: add missing virtio_detach_element() call")

    > do you remember why you prefered to use virtqueue_detach_element() rather than
    > virtqueue_unpop() (or virtqueue_discard() at this time, see 27e57efe32f5 ("virtio: rename
    > virtqueue_discard to virtqueue_unpop"))

    > Thanks,
    > Laurent

	I looked at all the situations in which discard_throttle_data is called. Detaching is only correct in one of them: when the virtio
	device is being destroyed (the driver will reset all the queues). For a virtio-serial device, closing the port from the host side
	does not mean the driver resets all the queues, so we cannot simply detach the throttled element.
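
	For reference, the relevant difference between the two helpers, simplified from hw/virtio/virtio.c (split-ring case, not a verbatim copy):

/* Unmap the element's buffers and drop it from the in-flight count; the
 * descriptor is neither returned to the guest nor made available again. */
void virtqueue_detach_element(VirtQueue *vq, const VirtQueueElement *elem,
                              unsigned int len)
{
    vq->inuse--;
    virtqueue_unmap_sg(vq, elem, len);
}

/* Additionally rewind last_avail_idx so the same descriptor is handed out
 * again by the next virtqueue_pop(), keeping the device's view consistent
 * with a driver that has not reset the queue. */
void virtqueue_unpop(VirtQueue *vq, const VirtQueueElement *elem,
                     unsigned int len)
{
    vq->last_avail_idx--;
    virtqueue_detach_element(vq, elem, len);
}

	With detach alone the element is consumed from the avail ring but never pushed to the used ring, which is exactly the inconsistency described above.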

	Thanks,
	Arafatms


