From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 6 Oct 2021 13:40:06 -0400
From: Vivek Goyal <vgoyal@redhat.com>
To: Christophe de Dinechin
Subject: Re: [Virtio-fs] [PATCH 06/13] vhost-user-fs: Use helpers to create/cleanup virtqueue
References: <20210930153037.1194279-1-vgoyal@redhat.com>
 <20210930153037.1194279-7-vgoyal@redhat.com>
Cc: virtio-fs@redhat.com, qemu-devel@nongnu.org, stefanha@redhat.com,
 miklos@szeredi.hu

On Wed, Oct 06, 2021 at 03:35:30PM +0200, Christophe de Dinechin wrote:
>
> On 2021-09-30 at 11:30 -04, Vivek Goyal wrote...
> > Add helpers to create/cleanup virtuqueues and use those helpers. I will
>
> Typo, virtuqueues -> virtqueues
>
> Also, while I'm nitpicking, virtqueue could be plural in commit description ;-)

Will do. Thanks. :-)

Vivek

> > need to reconfigure queues in later patches and using helpers will allow
> > reusing the code.
> >
> > Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
> > ---
> >  hw/virtio/vhost-user-fs.c | 87 +++++++++++++++++++++++----------------
> >  1 file changed, 52 insertions(+), 35 deletions(-)
> >
> > diff --git a/hw/virtio/vhost-user-fs.c b/hw/virtio/vhost-user-fs.c
> > index c595957983..d1efbc5b18 100644
> > --- a/hw/virtio/vhost-user-fs.c
> > +++ b/hw/virtio/vhost-user-fs.c
> > @@ -139,6 +139,55 @@ static void vuf_set_status(VirtIODevice *vdev, uint8_t status)
> >      }
> >  }
> >
> > +static void vuf_handle_output(VirtIODevice *vdev, VirtQueue *vq)
> > +{
> > +    /*
> > +     * Not normally called; it's the daemon that handles the queue;
> > +     * however virtio's cleanup path can call this.
> > +     */
> > +}
> > +
> > +static void vuf_create_vqs(VirtIODevice *vdev)
> > +{
> > +    VHostUserFS *fs = VHOST_USER_FS(vdev);
> > +    unsigned int i;
> > +
> > +    /* Hiprio queue */
> > +    fs->hiprio_vq = virtio_add_queue(vdev, fs->conf.queue_size,
> > +                                     vuf_handle_output);
> > +
> > +    /* Request queues */
> > +    fs->req_vqs = g_new(VirtQueue *, fs->conf.num_request_queues);
> > +    for (i = 0; i < fs->conf.num_request_queues; i++) {
> > +        fs->req_vqs[i] = virtio_add_queue(vdev, fs->conf.queue_size,
> > +                                          vuf_handle_output);
> > +    }
> > +
> > +    /* 1 high prio queue, plus the number configured */
> > +    fs->vhost_dev.nvqs = 1 + fs->conf.num_request_queues;
> > +    fs->vhost_dev.vqs = g_new0(struct vhost_virtqueue, fs->vhost_dev.nvqs);
> > +}
> > +
> > +static void vuf_cleanup_vqs(VirtIODevice *vdev)
> > +{
> > +    VHostUserFS *fs = VHOST_USER_FS(vdev);
> > +    unsigned int i;
> > +
> > +    virtio_delete_queue(fs->hiprio_vq);
> > +    fs->hiprio_vq = NULL;
> > +
> > +    for (i = 0; i < fs->conf.num_request_queues; i++) {
> > +        virtio_delete_queue(fs->req_vqs[i]);
> > +    }
> > +
> > +    g_free(fs->req_vqs);
> > +    fs->req_vqs = NULL;
> > +
> > +    fs->vhost_dev.nvqs = 0;
> > +    g_free(fs->vhost_dev.vqs);
> > +    fs->vhost_dev.vqs = NULL;
> > +}
> > +
> >  static uint64_t vuf_get_features(VirtIODevice *vdev,
> >                                   uint64_t features,
> >                                   Error **errp)
> > @@ -148,14 +197,6 @@ static uint64_t vuf_get_features(VirtIODevice *vdev,
> >      return vhost_get_features(&fs->vhost_dev, user_feature_bits, features);
> >  }
> >
> > -static void vuf_handle_output(VirtIODevice *vdev, VirtQueue *vq)
> > -{
> > -    /*
> > -     * Not normally called; it's the daemon that handles the queue;
> > -     * however virtio's cleanup path can call this.
> > -     */
> > -}
> > -
> >  static void vuf_guest_notifier_mask(VirtIODevice *vdev, int idx,
> >                                      bool mask)
> >  {
> > @@ -175,7 +216,6 @@ static void vuf_device_realize(DeviceState *dev, Error **errp)
> >  {
> >      VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> >      VHostUserFS *fs = VHOST_USER_FS(dev);
> > -    unsigned int i;
> >      size_t len;
> >      int ret;
> >
> > @@ -222,18 +262,7 @@ static void vuf_device_realize(DeviceState *dev, Error **errp)
> >      virtio_init(vdev, "vhost-user-fs", VIRTIO_ID_FS,
> >                  sizeof(struct virtio_fs_config));
> >
> > -    /* Hiprio queue */
> > -    fs->hiprio_vq = virtio_add_queue(vdev, fs->conf.queue_size, vuf_handle_output);
> > -
> > -    /* Request queues */
> > -    fs->req_vqs = g_new(VirtQueue *, fs->conf.num_request_queues);
> > -    for (i = 0; i < fs->conf.num_request_queues; i++) {
> > -        fs->req_vqs[i] = virtio_add_queue(vdev, fs->conf.queue_size, vuf_handle_output);
> > -    }
> > -
> > -    /* 1 high prio queue, plus the number configured */
> > -    fs->vhost_dev.nvqs = 1 + fs->conf.num_request_queues;
> > -    fs->vhost_dev.vqs = g_new0(struct vhost_virtqueue, fs->vhost_dev.nvqs);
> > +    vuf_create_vqs(vdev);
> >      ret = vhost_dev_init(&fs->vhost_dev, &fs->vhost_user,
> >                           VHOST_BACKEND_TYPE_USER, 0, errp);
> >      if (ret < 0) {
> > @@ -244,13 +273,8 @@ static void vuf_device_realize(DeviceState *dev, Error **errp)
> >
> >  err_virtio:
> >      vhost_user_cleanup(&fs->vhost_user);
> > -    virtio_delete_queue(fs->hiprio_vq);
> > -    for (i = 0; i < fs->conf.num_request_queues; i++) {
> > -        virtio_delete_queue(fs->req_vqs[i]);
> > -    }
> > -    g_free(fs->req_vqs);
> > +    vuf_cleanup_vqs(vdev);
> >      virtio_cleanup(vdev);
> > -    g_free(fs->vhost_dev.vqs);
> >      return;
> >  }
> >
> > @@ -258,7 +282,6 @@ static void vuf_device_unrealize(DeviceState *dev)
> >  {
> >      VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> >      VHostUserFS *fs = VHOST_USER_FS(dev);
> > -    int i;
> >
> >      /* This will stop vhost backend if appropriate. */
> >      vuf_set_status(vdev, 0);
> > @@ -267,14 +290,8 @@ static void vuf_device_unrealize(DeviceState *dev)
> >
> >      vhost_user_cleanup(&fs->vhost_user);
> >
> > -    virtio_delete_queue(fs->hiprio_vq);
> > -    for (i = 0; i < fs->conf.num_request_queues; i++) {
> > -        virtio_delete_queue(fs->req_vqs[i]);
> > -    }
> > -    g_free(fs->req_vqs);
> > +    vuf_cleanup_vqs(vdev);
> >      virtio_cleanup(vdev);
> > -    g_free(fs->vhost_dev.vqs);
> > -    fs->vhost_dev.vqs = NULL;
> >  }
> >
> >  static const VMStateDescription vuf_vmstate = {
>
> --
> Cheers,
> Christophe de Dinechin (IRC c3d)
>
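
The quoted commit message says the helpers exist so that later patches can
reconfigure queues by reusing this code. As a minimal sketch of what such
reuse might look like inside hw/virtio/vhost-user-fs.c -- the function name
vuf_reconfig_vqs and the idea of changing the request-queue count at runtime
are assumptions for illustration, not part of this series, and a real change
would also have to synchronize with the vhost-user backend:

    /*
     * Hypothetical sketch (not in this series): reconfigure the request
     * queues by tearing everything down and recreating it through the
     * helpers this patch introduces. Only vuf_create_vqs()/vuf_cleanup_vqs()
     * and the fields they touch come from the patch itself.
     */
    static void vuf_reconfig_vqs(VirtIODevice *vdev,
                                 unsigned int num_request_queues)
    {
        VHostUserFS *fs = VHOST_USER_FS(vdev);

        /* Delete hiprio + request queues, free req_vqs and vhost_dev.vqs */
        vuf_cleanup_vqs(vdev);

        /* Recreate all queues with the new request-queue count */
        fs->conf.num_request_queues = num_request_queues;
        vuf_create_vqs(vdev);
    }

The design benefit the helpers buy is visible here: setup and teardown become
symmetric single calls, so a queue-count change does not need to duplicate the
per-queue loops that previously lived in realize, unrealize, and the realize
error path.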