From: "Michael S. Tsirkin" <mst@redhat.com>
To: Shunsuke Mie <mie@igel.co.jp>
Cc: kvm@vger.kernel.org, netdev@vger.kernel.org,
	Rusty Russell <rusty@rustcorp.com.au>,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org
Subject: Re: [RFC PATCH 4/9] vringh: unify the APIs for all accessors
Date: Tue, 27 Dec 2022 02:04:18 -0500
Message-ID: <20221227020007-mutt-send-email-mst@kernel.org>
In-Reply-To: <20221227022528.609839-5-mie@igel.co.jp>

On Tue, Dec 27, 2022 at 11:25:26AM +0900, Shunsuke Mie wrote:
> Each vringh memory accessor (user, kern and iotlb) has its own set of
> interfaces that call into common code. Some of that code is duplicated,
> which hurts extensibility.
> 
> Introduce a struct vringh_ops and provide a common API for all
> accessors. This makes it easy to extend vringh with new memory
> accessors and simplifies the caller code.
> 
> Signed-off-by: Shunsuke Mie <mie@igel.co.jp>
> ---
>  drivers/vhost/vringh.c | 667 +++++++++++------------------------------
>  include/linux/vringh.h | 100 +++---
>  2 files changed, 225 insertions(+), 542 deletions(-)
> 
> diff --git a/drivers/vhost/vringh.c b/drivers/vhost/vringh.c
> index aa3cd27d2384..ebfd3644a1a3 100644
> --- a/drivers/vhost/vringh.c
> +++ b/drivers/vhost/vringh.c
> @@ -35,15 +35,12 @@ static __printf(1,2) __cold void vringh_bad(const char *fmt, ...)
>  }
>  
>  /* Returns vring->num if empty, -ve on error. */
> -static inline int __vringh_get_head(const struct vringh *vrh,
> -				    int (*getu16)(const struct vringh *vrh,
> -						  u16 *val, const __virtio16 *p),
> -				    u16 *last_avail_idx)
> +static inline int __vringh_get_head(const struct vringh *vrh, u16 *last_avail_idx)
>  {
>  	u16 avail_idx, i, head;
>  	int err;
>  
> -	err = getu16(vrh, &avail_idx, &vrh->vring.avail->idx);
> +	err = vrh->ops.getu16(vrh, &avail_idx, &vrh->vring.avail->idx);
>  	if (err) {
>  		vringh_bad("Failed to access avail idx at %p",
>  			   &vrh->vring.avail->idx);

I like that this patch removes more lines of code than it adds.

However, one of the design points of the vringh abstractions is that they
were carefully written to have very low overhead.
This is why we pass function pointers to inline functions -
the compiler can optimize the indirection away.
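
To make the point concrete, here is a minimal sketch (made-up names, not
the actual vringh code) of the existing pattern: the accessor is an
argument to an inline helper, and every call site passes a
compile-time-constant function, so the compiler can resolve and usually
inline the call:

#include <stdint.h>

/* stands in for e.g. the userspace-copy based accessor */
static inline int getu16_user(uint16_t *val, const uint16_t *p)
{
	*val = *p;
	return 0;
}

/* mirrors the shape of the old __vringh_get_head(): the accessor is a
 * parameter, so it is a known constant at every call site
 */
static inline int get_head(uint16_t *idx, const uint16_t *p,
			   int (*getu16)(uint16_t *, const uint16_t *))
{
	return getu16(idx, p);
}

int head_user(uint16_t *idx, const uint16_t *p)
{
	/* getu16_user is known here; no indirect call needs to be emitted */
	return get_head(idx, p, getu16_user);
}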

I think that introducing indirect calls through an ops struct here is going
to break that assumption and hurt performance.
Unless the compiler can somehow figure it out and devirtualize the calls?
I don't see how that is possible with the ops pointer stored in memory,
but maybe I'm wrong.
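
For contrast, a sketch of the ops-style pattern (again with made-up names,
not the actual patch): once the accessor lives in a struct reached through
a runtime pointer, the compiler generally cannot prove which function it is:

#include <stdint.h>

struct ring_ops {
	int (*getu16)(uint16_t *val, const uint16_t *p);
};

struct ring {
	struct ring_ops ops;
};

int head_ops(struct ring *r, uint16_t *idx, const uint16_t *p)
{
	/* r->ops.getu16 is only known at run time, so this compiles to a
	 * load from memory plus an indirect call on the hot path
	 */
	return r->ops.getu16(idx, p);
}

And in the kernel an indirect call can be noticeably more expensive than a
direct (or inlined) one, especially with speculative-execution mitigations
such as retpolines enabled.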

Was any effort made to measure the effect of these patches on performance?

Thanks!



Thread overview:
2022-12-27  2:25 [RFC PATCH 0/6] Introduce a vringh accessor for IO memory Shunsuke Mie
2022-12-27  2:25 ` [RFC PATCH 1/9] vringh: fix a typo in comments for vringh_kiov Shunsuke Mie
2022-12-27  2:25 ` [RFC PATCH 2/9] vringh: remove vringh_iov and unite to vringh_kiov Shunsuke Mie
2022-12-27  6:04   ` Jason Wang
2022-12-27  7:05     ` Michael S. Tsirkin
2022-12-27  7:13       ` Shunsuke Mie
2022-12-27  7:56         ` Michael S. Tsirkin
2022-12-27  7:57           ` Shunsuke Mie
2022-12-27  7:05     ` Shunsuke Mie
2022-12-28  6:36       ` Jason Wang
2023-01-11  3:26         ` Shunsuke Mie
2023-01-11  5:54           ` Jason Wang
2023-01-11  6:19             ` Shunsuke Mie
2022-12-27  2:25 ` [RFC PATCH 3/9] tools/virtio: convert to new vringh user APIs Shunsuke Mie
2022-12-27  2:25 ` [RFC PATCH 4/9] vringh: unify the APIs for all accessors Shunsuke Mie
2022-12-27  7:04   ` Michael S. Tsirkin [this message]
2022-12-27  7:49     ` Shunsuke Mie
2022-12-27 10:22       ` Shunsuke Mie
2022-12-27 14:37         ` Michael S. Tsirkin
2022-12-28  2:24           ` Shunsuke Mie
2022-12-28  7:20             ` Michael S. Tsirkin
2023-01-11  4:10               ` Shunsuke Mie
2022-12-27  2:25 ` [RFC PATCH 5/9] tools/virtio: convert to use new unified vringh APIs Shunsuke Mie
2022-12-27  2:25 ` [RFC PATCH 6/9] caif_virtio: convert to " Shunsuke Mie
