From: Jason Wang <jasowang@redhat.com>
To: Maxime Coquelin <maxime.coquelin@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
Shannon Nelson <shannon.nelson@amd.com>,
xuanzhuo@linux.alibaba.com, netdev@vger.kernel.org,
linux-kernel@vger.kernel.org,
virtualization@lists.linux-foundation.org, edumazet@google.com,
kuba@kernel.org, pabeni@redhat.com, davem@davemloft.net
Subject: Re: [PATCH net-next v4 2/2] virtio-net: add cond_resched() to the command waiting loop
Date: Mon, 24 Jul 2023 14:52:05 +0800 [thread overview]
Message-ID: <CACGkMEv1B9xFE7-LrLQC3FbH6CxTZC+toHXoLHFvJWn6wgobrA@mail.gmail.com> (raw)
In-Reply-To: <e3490755-35ac-89b4-b0fa-b63720a9a5c9@redhat.com>
On Sat, Jul 22, 2023 at 4:18 AM Maxime Coquelin
<maxime.coquelin@redhat.com> wrote:
>
>
>
> On 7/21/23 17:10, Michael S. Tsirkin wrote:
> > On Fri, Jul 21, 2023 at 04:58:04PM +0200, Maxime Coquelin wrote:
> >>
> >>
> >> On 7/21/23 16:45, Michael S. Tsirkin wrote:
> >>> On Fri, Jul 21, 2023 at 04:37:00PM +0200, Maxime Coquelin wrote:
> >>>>
> >>>>
> >>>> On 7/20/23 23:02, Michael S. Tsirkin wrote:
> >>>>> On Thu, Jul 20, 2023 at 01:26:20PM -0700, Shannon Nelson wrote:
> >>>>>> On 7/20/23 1:38 AM, Jason Wang wrote:
> >>>>>>>
> >>>>>>> Add cond_resched() to the command waiting loop for better
> >>>>>>> cooperation with the scheduler. This gives the CPU a chance to
> >>>>>>> run other tasks (e.g. workqueues) instead of busy looping when
> >>>>>>> preemption is not allowed and the device's CVQ might be slow.
> >>>>>>>
> >>>>>>> Signed-off-by: Jason Wang <jasowang@redhat.com>
> >>>>>>
> >>>>>> This still leaves hung processes, but at least it doesn't pin the CPU any
> >>>>>> more. Thanks.
> >>>>>> Reviewed-by: Shannon Nelson <shannon.nelson@amd.com>
> >>>>>>
> >>>>>
> >>>>> I'd like to see a full solution
> >>>>> 1- block until interrupt
I remember that in previous versions you worried about the extra MSI
vector (but maybe I was wrong).
> >>>>
> >>>> Would it make sense to also have a timeout?
> >>>> And when timeout expires, set FAILED bit in device status?
> >>>
> >>> virtio spec does not set any limits on the timing of vq
> >>> processing.
> >>
> >> Indeed, but I thought the driver could decide it has been waiting
> >> too long.
> >>
> >> The issue is that we keep waiting with rtnl locked, which can quickly
> >> make the system unusable.
> >
> > if this is a problem we should find a way not to keep rtnl
> > locked indefinitely.
Any ideas in this direction? Simply dropping RTNL during the busy loop
would result in a lot of races; doing it safely seems to require
non-trivial changes in the networking core.
>
> From the tests I have done, I think it is. With OVS, a reconfiguration
> is performed when the VDUSE device is added, and when a MLX5 device is
> in the same bridge, it ends up doing an ioctl() that tries to take the
> rtnl lock. In this configuration, it is not possible to kill OVS because
> it is stuck trying to acquire rtnl lock for mlx5 that is held by virtio-
> net.
Yeah, basically any RTNL user would be blocked forever.
And the infinite loop has other side effects, e.g. it prevents the freezer from working.
To summarize, there are three issues:
1) busy polling
2) breaking the freezer
3) holding RTNL during the loop
Solving 3) may also help with 2), since some PM routines (e.g. wireguard,
or even virtnet_restore() itself) may try to take the lock.
>
> >
> >>>>> 2- still handle surprise removal correctly by waking in that case
This is basically what version 1 did?
https://lore.kernel.org/lkml/6026e801-6fda-fee9-a69b-d06a80368621@redhat.com/t/
Thanks
> >>>>>
> >>>>>
> >>>>>
> >>>>>>> ---
> >>>>>>> drivers/net/virtio_net.c | 4 +++-
> >>>>>>> 1 file changed, 3 insertions(+), 1 deletion(-)
> >>>>>>>
> >>>>>>> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> >>>>>>> index 9f3b1d6ac33d..e7533f29b219 100644
> >>>>>>> --- a/drivers/net/virtio_net.c
> >>>>>>> +++ b/drivers/net/virtio_net.c
> >>>>>>> @@ -2314,8 +2314,10 @@ static bool virtnet_send_command(struct virtnet_info *vi, u8 class, u8 cmd,
> >>>>>>> * into the hypervisor, so the request should be handled immediately.
> >>>>>>> */
> >>>>>>> while (!virtqueue_get_buf(vi->cvq, &tmp) &&
> >>>>>>> - !virtqueue_is_broken(vi->cvq))
> >>>>>>> + !virtqueue_is_broken(vi->cvq)) {
> >>>>>>> + cond_resched();
> >>>>>>> cpu_relax();
> >>>>>>> + }
> >>>>>>>
> >>>>>>> return vi->ctrl->status == VIRTIO_NET_OK;
> >>>>>>> }
> >>>>>>> --
> >>>>>>> 2.39.3
> >>>>>>>
> >>>>>
> >>>
> >
>