From: Guo Zhi <qtxuning1999@sjtu.edu.cn>
To: jasowang <jasowang@redhat.com>
Cc: eperezma <eperezma@redhat.com>, sgarzare <sgarzare@redhat.com>,
	Michael Tsirkin <mst@redhat.com>, netdev <netdev@vger.kernel.org>,
	linux-kernel <linux-kernel@vger.kernel.org>,
	kvm list <kvm@vger.kernel.org>,
	virtualization <virtualization@lists.linux-foundation.org>
Subject: Re: [RFC 1/5] vhost: reorder used descriptors in a batch
Date: Tue, 2 Aug 2022 21:54:55 +0800 (CST)	[thread overview]
Message-ID: <401747890.4486725.1659448495048.JavaMail.zimbra@sjtu.edu.cn> (raw)
In-Reply-To: <16a232ad-e0a1-fd4c-ae3e-27db168daacb@redhat.com>

----- Original Message -----
> From: "jasowang" <jasowang@redhat.com>
> To: "Guo Zhi" <qtxuning1999@sjtu.edu.cn>, "eperezma" <eperezma@redhat.com>, "sgarzare" <sgarzare@redhat.com>, "Michael
> Tsirkin" <mst@redhat.com>
> Cc: "netdev" <netdev@vger.kernel.org>, "linux-kernel" <linux-kernel@vger.kernel.org>, "kvm list" <kvm@vger.kernel.org>,
> "virtualization" <virtualization@lists.linux-foundation.org>
> Sent: Tuesday, July 26, 2022 3:36:01 PM
> Subject: Re: [RFC 1/5] vhost: reorder used descriptors in a batch

> On 2022/7/21 16:43, Guo Zhi wrote:
>> Devices may not use descriptors in order; for example, NIC and SCSI
>> devices may not call __vhost_add_used_n with buffers in order.  It is the
>> task of __vhost_add_used_n to order them.
> 
> 
> I'm not sure this is true. Having out-of-order descriptors is probably by
> design, to get better performance.
> 
> This might be obvious for devices that have an elevator or QoS logic.
> 
> I suspect the right thing to do here is: for the devices that can't
> perform better in the case of IN_ORDER, simply don't offer IN_ORDER
> (zerocopy or scsi). And for the devices we know can perform better, such
> as the non-zerocopy ethernet device, we can do that.
> 

Hi, it seems that you would prefer not to define the in-order feature as a transparent feature.

If we move the in_order handling to the device-specific code (net.c, scsi.c, ...):

- The in_order feature bit would be declared in net.c, not in vhost.c.  Only
  specific devices (e.g. net, vsock) would support the in-order feature and
  expose used descriptors in order (see the sketch below).
- The code in vhost.c would be untouched or almost untouched; only the code in
  net.c, scsi.c, etc. would need to be modified, and each device would do the
  batching job by itself.
- This can achieve the best performance for the devices that do use
  descriptors in order.
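
To make the first point concrete, here is a rough sketch (not part of this
series, only an illustration; names follow the existing VHOST_NET_FEATURES
definition in drivers/vhost/net.c) of how the feature bit could be offered
from net.c only, so vhost.c itself would not need to know about
VIRTIO_F_IN_ORDER:

enum {
	VHOST_NET_FEATURES = VHOST_FEATURES |
			     (1ULL << VHOST_NET_F_VIRTIO_NET_HDR) |
			     (1ULL << VIRTIO_NET_F_MRG_RXBUF) |
			     (1ULL << VIRTIO_F_ACCESS_PLATFORM) |
			     /* offered only by devices that complete buffers
			      * in order; e.g. it could be withheld when
			      * zerocopy is enabled
			      */
			     (1ULL << VIRTIO_F_IN_ORDER),
};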

If this is better, I will send a new version of the patches for this RFC.

> 
>>   This commit reorders the buffers using
>> vq->heads: only when the batch begins at the expected start point and is
>> contiguous can the batch be exposed to the driver.  It also writes out
>> only a single used ring entry for a batch of descriptors, as allowed by
>> the VIRTIO 1.1 spec.
> 
> 
> So this sounds more like a "workaround" for a device that can't consume
> buffers in order, and I suspect it can help performance.
> 
> More below.
> 
> 
>>
>> Signed-off-by: Guo Zhi <qtxuning1999@sjtu.edu.cn>
>> ---
>>   drivers/vhost/vhost.c | 44 +++++++++++++++++++++++++++++++++++++++++--
>>   drivers/vhost/vhost.h |  3 +++
>>   2 files changed, 45 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
>> index 40097826c..e2e77e29f 100644
>> --- a/drivers/vhost/vhost.c
>> +++ b/drivers/vhost/vhost.c
>> @@ -317,6 +317,7 @@ static void vhost_vq_reset(struct vhost_dev *dev,
>>   	vq->used_flags = 0;
>>   	vq->log_used = false;
>>   	vq->log_addr = -1ull;
>> +	vq->next_used_head_idx = 0;
>>   	vq->private_data = NULL;
>>   	vq->acked_features = 0;
>>   	vq->acked_backend_features = 0;
>> @@ -398,6 +399,8 @@ static long vhost_dev_alloc_iovecs(struct vhost_dev *dev)
>>   					  GFP_KERNEL);
>>   		if (!vq->indirect || !vq->log || !vq->heads)
>>   			goto err_nomem;
>> +
>> +		memset(vq->heads, 0, sizeof(*vq->heads) * dev->iov_limit);
>>   	}
>>   	return 0;
>>   
>> @@ -2374,12 +2377,49 @@ static int __vhost_add_used_n(struct vhost_virtqueue
>> *vq,
>>   			    unsigned count)
>>   {
>>   	vring_used_elem_t __user *used;
>> +	struct vring_desc desc;
>>   	u16 old, new;
>>   	int start;
>> +	int begin, end, i;
>> +	int copy_n = count;
>> +
>> +	if (vhost_has_feature(vq, VIRTIO_F_IN_ORDER)) {
> 
> 
> How do you guarantee that ids of heads are contiguous?
> 
> 
>> +		/* calculate descriptor chain length for each used buffer */
> 
> 
> I'm a little bit confused by this comment; we already have heads[i].len for this?
> 
> 
>> +		for (i = 0; i < count; i++) {
>> +			begin = heads[i].id;
>> +			end = begin;
>> +			vq->heads[begin].len = 0;
> 
> 
> Does this work for, e.g., the RX virtqueue?
> 
> 
>> +			do {
>> +				vq->heads[begin].len += 1;
>> +				if (unlikely(vhost_get_desc(vq, &desc, end))) {
> 
> 
> Let's try hard to avoid more userspace copies here; they are a source of
> performance regression.
> 
> Thanks
> 
> 
>> +					vq_err(vq, "Failed to get descriptor: idx %d addr %p\n",
>> +					       end, vq->desc + end);
>> +					return -EFAULT;
>> +				}
>> +			} while ((end = next_desc(vq, &desc)) != -1);
>> +		}
>> +
>> +		count = 0;
>> +		/* sort and batch continuous used ring entry */
>> +		while (vq->heads[vq->next_used_head_idx].len != 0) {
>> +			count++;
>> +			i = vq->next_used_head_idx;
>> +			vq->next_used_head_idx = (vq->next_used_head_idx +
>> +						  vq->heads[vq->next_used_head_idx].len)
>> +						  % vq->num;
>> +			vq->heads[i].len = 0;
>> +		}
>> +		/* only write out a single used ring entry with the id corresponding
>> +		 * to the head entry of the descriptor chain describing the last buffer
>> +		 * in the batch.
>> +		 */
>> +		heads[0].id = i;
>> +		copy_n = 1;
>> +	}
>>   
>>   	start = vq->last_used_idx & (vq->num - 1);
>>   	used = vq->used->ring + start;
>> -	if (vhost_put_used(vq, heads, start, count)) {
>> +	if (vhost_put_used(vq, heads, start, copy_n)) {
>>   		vq_err(vq, "Failed to write used");
>>   		return -EFAULT;
>>   	}
>> @@ -2410,7 +2450,7 @@ int vhost_add_used_n(struct vhost_virtqueue *vq, struct
>> vring_used_elem *heads,
>>   
>>   	start = vq->last_used_idx & (vq->num - 1);
>>   	n = vq->num - start;
>> -	if (n < count) {
>> +	if (n < count && !vhost_has_feature(vq, VIRTIO_F_IN_ORDER)) {
>>   		r = __vhost_add_used_n(vq, heads, n);
>>   		if (r < 0)
>>   			return r;
>> diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
>> index d9109107a..7b2c0fbb5 100644
>> --- a/drivers/vhost/vhost.h
>> +++ b/drivers/vhost/vhost.h
>> @@ -107,6 +107,9 @@ struct vhost_virtqueue {
>>   	bool log_used;
>>   	u64 log_addr;
>>   
>> +	/* Sort heads in order */
>> +	u16 next_used_head_idx;
>> +
>>   	struct iovec iov[UIO_MAXIOV];
>>   	struct iovec iotlb_iov[64];
>>   	struct iovec *indirect;


Thread overview: 24+ messages
2022-07-21  8:43 [RFC 0/5] In virtio-spec 1.1, new feature bit VIRTIO_F_IN_ORDER was introduced Guo Zhi
2022-07-21  8:43 ` [RFC 1/5] vhost: reorder used descriptors in a batch Guo Zhi
2022-07-22  7:07   ` Eugenio Perez Martin
2022-08-02  3:30     ` Guo Zhi
2022-07-26  7:36   ` Jason Wang
     [not found]     ` <2a8838c4-2e6f-6de7-dcdc-572699ff3dc9@sjtu.edu.cn>
2022-07-29  7:32       ` Jason Wang
2022-08-02  3:09         ` Guo Zhi
2022-08-02 14:12         ` Guo Zhi
2022-08-04  5:04           ` Jason Wang
2022-08-11  8:58             ` Guo Zhi
2022-08-02 13:54     ` Guo Zhi [this message]
2022-07-21  8:43 ` [RFC 2/5] vhost: announce VIRTIO_F_IN_ORDER support Guo Zhi
2022-07-21  8:43 ` [RFC 3/5] vhost_test: batch used buffer Guo Zhi
2022-07-22  7:12   ` Eugenio Perez Martin
2022-08-02  2:47     ` Guo Zhi
2022-08-02  3:08     ` Guo Zhi
     [not found]     ` <1D1ABF88-B503-4BE0-AC83-3326EAA62510@sjtu.edu.cn>
2022-08-02  7:45       ` Stefano Garzarella
2022-07-21  8:43 ` [RFC 4/5] virtio: get desc id in order Guo Zhi
2022-07-26  8:07   ` Jason Wang
2022-07-28  8:12     ` Guo Zhi
2022-08-11  8:49     ` Guo Zhi
2022-07-21  8:43 ` [RFC 5/5] virtio: annouce VIRTIO_F_IN_ORDER support Guo Zhi
2022-07-21  9:17 ` [RFC 0/5] In virtio-spec 1.1, new feature bit VIRTIO_F_IN_ORDER was introduced Jason Wang
2022-07-21 11:54   ` Guo Zhi
