Subject: Re: [RFC v4 10/11] vduse: Introduce a workqueue for irq injection
From: Jason Wang
Date: Mon, 8 Mar 2021 15:01:46 +0800
To: Yongji Xie
Cc: "Michael S. Tsirkin", Stefan Hajnoczi, Stefano Garzarella, Parav Pandit,
 Bob Liu, Christoph Hellwig, Randy Dunlap, Matthew Wilcox,
 viro@zeniv.linux.org.uk, Jens Axboe, bcrl@kvack.org, Jonathan Corbet,
 virtualization@lists.linux-foundation.org, netdev@vger.kernel.org,
 kvm@vger.kernel.org, linux-aio@kvack.org, linux-fsdevel@vger.kernel.org
References: <20210223115048.435-1-xieyongji@bytedance.com>
 <20210223115048.435-11-xieyongji@bytedance.com>
 <2d3418d9-856c-37ee-7614-af5b721becd7@redhat.com>
 <44c21bf4-874d-24c9-334b-053c54e8422e@redhat.com>
Tsirkin" , Stefan Hajnoczi , Stefano Garzarella , Parav Pandit , Bob Liu , Christoph Hellwig , Randy Dunlap , Matthew Wilcox , viro@zeniv.linux.org.uk, Jens Axboe , bcrl@kvack.org, Jonathan Corbet , virtualization@lists.linux-foundation.org, netdev@vger.kernel.org, kvm@vger.kernel.org, linux-aio@kvack.org, linux-fsdevel@vger.kernel.org References: <20210223115048.435-1-xieyongji@bytedance.com> <20210223115048.435-11-xieyongji@bytedance.com> <2d3418d9-856c-37ee-7614-af5b721becd7@redhat.com> <44c21bf4-874d-24c9-334b-053c54e8422e@redhat.com> From: Jason Wang Message-ID: Date: Mon, 8 Mar 2021 15:01:46 +0800 User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.16; rv:78.0) Gecko/20100101 Thunderbird/78.8.0 MIME-Version: 1.0 In-Reply-To: Content-Type: text/plain; charset=utf-8; format=flowed Content-Transfer-Encoding: 8bit Content-Language: en-GB X-Scanned-By: MIMEDefang 2.84 on 10.5.11.22 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org On 2021/3/8 12:50 下午, Yongji Xie wrote: > On Mon, Mar 8, 2021 at 11:04 AM Jason Wang wrote: >> >> On 2021/3/5 4:12 下午, Yongji Xie wrote: >>> On Fri, Mar 5, 2021 at 3:37 PM Jason Wang wrote: >>>> On 2021/3/5 3:27 下午, Yongji Xie wrote: >>>>> On Fri, Mar 5, 2021 at 3:01 PM Jason Wang wrote: >>>>>> On 2021/3/5 2:36 下午, Yongji Xie wrote: >>>>>>> On Fri, Mar 5, 2021 at 11:42 AM Jason Wang wrote: >>>>>>>> On 2021/3/5 11:30 上午, Yongji Xie wrote: >>>>>>>>> On Fri, Mar 5, 2021 at 11:05 AM Jason Wang wrote: >>>>>>>>>> On 2021/3/4 4:58 下午, Yongji Xie wrote: >>>>>>>>>>> On Thu, Mar 4, 2021 at 2:59 PM Jason Wang wrote: >>>>>>>>>>>> On 2021/2/23 7:50 下午, Xie Yongji wrote: >>>>>>>>>>>>> This patch introduces a workqueue to support injecting >>>>>>>>>>>>> virtqueue's interrupt asynchronously. This is mainly >>>>>>>>>>>>> for performance considerations which makes sure the push() >>>>>>>>>>>>> and pop() for used vring can be asynchronous. >>>>>>>>>>>> Do you have pref numbers for this patch? >>>>>>>>>>>> >>>>>>>>>>> No, I can do some tests for it if needed. >>>>>>>>>>> >>>>>>>>>>> Another problem is the VIRTIO_RING_F_EVENT_IDX feature will be useless >>>>>>>>>>> if we call irq callback in ioctl context. Something like: >>>>>>>>>>> >>>>>>>>>>> virtqueue_push(); >>>>>>>>>>> virtio_notify(); >>>>>>>>>>> ioctl() >>>>>>>>>>> ------------------------------------------------- >>>>>>>>>>> irq_cb() >>>>>>>>>>> virtqueue_get_buf() >>>>>>>>>>> >>>>>>>>>>> The used vring is always empty each time we call virtqueue_push() in >>>>>>>>>>> userspace. Not sure if it is what we expected. >>>>>>>>>> I'm not sure I get the issue. >>>>>>>>>> >>>>>>>>>> THe used ring should be filled by virtqueue_push() which is done by >>>>>>>>>> userspace before? >>>>>>>>>> >>>>>>>>> After userspace call virtqueue_push(), it always call virtio_notify() >>>>>>>>> immediately. In traditional VM (vhost-vdpa) cases, virtio_notify() >>>>>>>>> will inject an irq to VM and return, then vcpu thread will call >>>>>>>>> interrupt handler. But in container (virtio-vdpa) cases, >>>>>>>>> virtio_notify() will call interrupt handler directly. So it looks like >>>>>>>>> we have to optimize the virtio-vdpa cases. But one problem is we don't >>>>>>>>> know whether we are in the VM user case or container user case. >>>>>>>> Yes, but I still don't get why used ring is empty after the ioctl()? >>>>>>>> Used ring does not use bounce page so it should be visible to the kernel >>>>>>>> driver. What did I miss :) ? >>>>>>>> >>>>>>> Sorry, I'm not saying the kernel can't see the correct used vring. 
>>>>>> Yes, that's why we need a workqueue (the WQ_UNBOUND one you used). Or
>>>>>> do you want to squash this patch into patch 8?
>>>>>>
>>>>>> So I think we can see an obvious difference when virtio-vdpa is used.
>>>>>>
>>>>> But it looks like we don't need this workqueue in the vhost-vdpa case.
>>>>> Any suggestions?
>>>> I haven't thought it through deeply. But I feel we can solve this by
>>>> using the irq bypass manager (or something similar). Then we don't need
>>>> the irq to be relayed via the workqueue and vdpa. But I'm not sure how
>>>> hard it will be.
>>>>
>>> Or let vdpa bus drivers give us some information?
>>
>> This kind of 'type' was proposed in the early RFC of the vDPA series. One
>> issue is that at the device level we should not differentiate virtio from
>> vhost, so if we introduce that, it might encourage people to design a
>> device that is dedicated to vhost or virtio, which might not be good.
>>
>> But we can re-visit this when necessary.
>>
> OK, I see. How about adding some information in ops.set_vq_cb()?

I'm not sure I get this, maybe you can explain a little bit more?

Thanks


>
> Thanks,
> Yongji
>
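For reference, the deferred-injection scheme under discussion would look
roughly like the sketch below: the notify path only queues a work item on an
unbound workqueue, and the work item invokes the virtqueue callback, so
userspace's push() and the driver's pop() can run in parallel. This is a
minimal illustration with assumed names (vduse_irq_wq, vduse_vq_irq_inject(),
a simplified struct vduse_virtqueue), not the code from the patch itself.

#include <linux/workqueue.h>
#include <linux/vdpa.h>

/* Simplified per-virtqueue state; the real structure has more fields. */
struct vduse_virtqueue {
	struct vdpa_callback cb;	/* callback/private set via set_vq_cb() */
	struct work_struct inject;	/* deferred irq injection */
};

static struct workqueue_struct *vduse_irq_wq;

static void vduse_vq_irq_inject(struct work_struct *work)
{
	struct vduse_virtqueue *vq =
		container_of(work, struct vduse_virtqueue, inject);

	/* Runs on a workqueue thread, not in the ioctl context. */
	if (vq->cb.callback)
		vq->cb.callback(vq->cb.private);
}

/*
 * Called from the notify ioctl: defer the injection instead of
 * calling the interrupt handler synchronously.
 */
static void vduse_vq_notify(struct vduse_virtqueue *vq)
{
	queue_work(vduse_irq_wq, &vq->inject);
}

/* One-time setup, e.g. at module init and per-virtqueue init:
 *
 *	vduse_irq_wq = alloc_workqueue("vduse-irq", WQ_UNBOUND, 0);
 *	INIT_WORK(&vq->inject, vduse_vq_irq_inject);
 */

WQ_UNBOUND lets the injection run on any CPU rather than pinning it to the
submitting CPU, which matches the goal of decoupling the userspace producer
from the in-kernel consumer.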