From: Jason Wang <jasowang@redhat.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: kvm@vger.kernel.org, virtualization@lists.linux-foundation.org,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH net-next 3/3] vhost: don't touch avail ring if in_order is negotiated
Date: Mon, 26 Nov 2018 12:01:59 +0800	[thread overview]
Message-ID: <99e6b6b0-3cc6-b100-1e60-aa837d293bc8@redhat.com> (raw)
In-Reply-To: <20181123103750-mutt-send-email-mst@kernel.org>


On 2018/11/23 11:41 PM, Michael S. Tsirkin wrote:
> On Fri, Nov 23, 2018 at 11:00:16AM +0800, Jason Wang wrote:
>> The device uses the descriptor table in order, so there's no need to
>> read the index from the available ring. This eliminates the cache
>> contention on the avail ring completely.
> Well this isn't what the in order feature says in the spec.
>
> It forces the used ring to be in the same order as
> the available ring. So I don't think you can skip
> checking the available ring.


Maybe I'm missing something. The spec
(https://github.com/oasis-tcs/virtio-spec, master branch) says: "If
VIRTIO_F_IN_ORDER has been negotiated, driver uses descriptors in ring
order: starting from offset 0 in the table, and wrapping around at the
end of the table."
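
Under that reading, the device never needs to read avail->ring[] at
all; a minimal sketch of the head computation (the helper name is
mine, not vhost code; it assumes a power-of-two ring size, which
vhost already requires):

static inline unsigned int in_order_head(unsigned int last_avail_idx,
					 unsigned int num)
{
	/* Descriptors are consumed at offsets 0, 1, 2, ... wrapping at
	 * the end of the table, so the next head is just the device's
	 * private index modulo the (power-of-two) ring size. */
	return last_avail_idx & (num - 1);
}

This is exactly what the patch below does inline.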

Even if I'm wrong, maybe it's time to enforce this, considering the
obvious improvement it brings? And maybe what you said is the reason
that we allow the following optimization only for the packed ring?

"notify the use of a batch of buffers to the driver by only writing out 
a single used descriptor with the Buffer ID corresponding to the last 
descriptor in the batch. "

This seems like another good optimization for the packed ring as well.
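
For what it's worth, a rough sketch of what that batching could look
like on the device side. The struct follows the packed ring descriptor
layout from the spec; the helper name, parameters, and barrier
placement are my assumptions, not existing code:

#include <stdint.h>

struct packed_desc {		/* packed ring descriptor, per spec */
	uint64_t addr;
	uint32_t len;
	uint16_t id;		/* Buffer ID */
	uint16_t flags;		/* includes AVAIL/USED wrap-counter bits */
};

/* Mark a whole batch used by writing a single used descriptor that
 * carries the Buffer ID of the *last* buffer in the batch. */
static void write_used_batch(struct packed_desc *ring, uint16_t used_idx,
			     uint16_t last_buffer_id, uint32_t written_len,
			     uint16_t wrap_flags)
{
	ring[used_idx].id  = last_buffer_id;
	ring[used_idx].len = written_len;
	/* Make id/len visible before flipping the wrap bits, so the
	 * driver never observes a half-written entry. */
	__atomic_thread_fence(__ATOMIC_RELEASE);
	ring[used_idx].flags = wrap_flags;
}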


> And in fact, depending on
> ring size and workload, using all of the descriptor buffer might
> cause a slowdown.


This is not the fault of in-order itself but of the queue size, I believe?


> Rather you should be able to get
> about the same speedup, but from skipping checking
> the used ring in virtio.


Yes, I've made such changes in the virtio-net PMD. But since we're
testing it with vhost-kernel, the main contention was on the avail
ring, so the improvement was not obvious.
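
For reference, the kind of change I mean: with in-order, a TX reclaim
path can free pending buffers FIFO from the device's used index alone,
never touching the per-entry used ring. A simplified sketch; the
struct and names here are illustrative, not the actual PMD code:

#include <stdint.h>

struct txq {
	uint16_t last_used_idx;			/* driver-private counter */
	uint16_t num;				/* ring size, power of two */
	const volatile uint16_t *used_idx;	/* device's used index */
	void **pending;				/* buffers in submit order */
};

static void reclaim_in_order(struct txq *vq, void (*free_buf)(void *))
{
	uint16_t used = *vq->used_idx;	/* the only shared-memory read */

	/* IN_ORDER guarantees completion in submission order, so no
	 * used->ring[] entries need to be read at all. */
	while (vq->last_used_idx != used) {
		free_buf(vq->pending[vq->last_used_idx & (vq->num - 1)]);
		vq->last_used_idx++;
	}
}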

Thanks


>
>
>> Virtio-user + vhost_kernel + XDP_DROP gives ~10% improvement on
>> TX, from 4.8Mpps to 5.3Mpps, on an Intel(R) Core(TM) i7-5600U CPU @
>> 2.60GHz.
>>
>> Signed-off-by: Jason Wang <jasowang@redhat.com>
>> ---
>>   drivers/vhost/vhost.c | 19 ++++++++++++-------
>>   1 file changed, 12 insertions(+), 7 deletions(-)
>>
>> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
>> index 3a5f81a66d34..c8be151bc897 100644
>> --- a/drivers/vhost/vhost.c
>> +++ b/drivers/vhost/vhost.c
>> @@ -2002,6 +2002,7 @@ int vhost_get_vq_desc(struct vhost_virtqueue *vq,
>>   	__virtio16 avail_idx;
>>   	__virtio16 ring_head;
>>   	int ret, access;
>> +	bool in_order = vhost_has_feature(vq, VIRTIO_F_IN_ORDER);
>>   
>>   	/* Check it isn't doing very strange things with descriptor numbers. */
>>   	last_avail_idx = vq->last_avail_idx;
>> @@ -2034,15 +2035,19 @@ int vhost_get_vq_desc(struct vhost_virtqueue *vq,
>>   
>>   	/* Grab the next descriptor number they're advertising, and increment
>>   	 * the index we've seen. */
>> -	if (unlikely(vhost_get_avail(vq, ring_head,
>> -		     &vq->avail->ring[last_avail_idx & (vq->num - 1)]))) {
>> -		vq_err(vq, "Failed to read head: idx %d address %p\n",
>> -		       last_avail_idx,
>> -		       &vq->avail->ring[last_avail_idx % vq->num]);
>> -		return -EFAULT;
>> +	if (!in_order) {
>> +		if (unlikely(vhost_get_avail(vq, ring_head,
>> +		    &vq->avail->ring[last_avail_idx & (vq->num - 1)]))) {
>> +			vq_err(vq, "Failed to read head: idx %d address %p\n",
>> +				last_avail_idx,
>> +				&vq->avail->ring[last_avail_idx % vq->num]);
>> +			return -EFAULT;
>> +		}
>> +		head = vhost16_to_cpu(vq, ring_head);
>> +	} else {
>> +		head = last_avail_idx & (vq->num - 1);
>>   	}
>>   
>> -	head = vhost16_to_cpu(vq, ring_head);
>>   
>>   	/* If their number is silly, that's an error. */
>>   	if (unlikely(head >= vq->num)) {
>> -- 
>> 2.17.1

Thread overview: 8 messages

2018-11-23  3:00 [PATCH net-next 0/3] basic in order support for vhost_net Jason Wang
2018-11-23  3:00 ` [PATCH net-next 1/3] virtio: introduce in order feature bit Jason Wang
2018-11-23  3:00 ` [PATCH net-next 2/3] vhost_net: support in order feature Jason Wang
2018-11-23 15:49   ` Michael S. Tsirkin
2018-11-26  3:52     ` Jason Wang
2018-11-23  3:00 ` [PATCH net-next 3/3] vhost: don't touch avail ring if in_order is negotiated Jason Wang
2018-11-23 15:41   ` Michael S. Tsirkin
2018-11-26  4:01     ` Jason Wang [this message]