linux-kernel.vger.kernel.org archive mirror
* [PATCH net-next 0/3] basic in order support for vhost_net
@ 2018-11-23  3:00 Jason Wang
  2018-11-23  3:00 ` [PATCH net-next 1/3] virtio: introduce in order feature bit Jason Wang
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Jason Wang @ 2018-11-23  3:00 UTC (permalink / raw)
  To: mst, jasowang, kvm, virtualization, netdev, linux-kernel

Hi:

This series implements basic in order feature support for
vhost_net. This feature requires both the driver and the device to use
descriptors in order, which simplifies the implementation and
optimization on both sides. The series also implements a simple
optimization that avoids reading the available ring. Tests show a 10%
performance improvement.

More optimizations could be done on top.

Jason Wang (3):
  virtio: introduce in order feature bit
  vhost_net: support in order feature
  vhost: don't touch avail ring if in_order is negotiated

 drivers/vhost/net.c                |  6 ++++--
 drivers/vhost/vhost.c              | 19 ++++++++++++-------
 include/uapi/linux/virtio_config.h |  6 ++++++
 3 files changed, 22 insertions(+), 9 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH net-next 1/3] virtio: introduce in order feature bit
  2018-11-23  3:00 [PATCH net-next 0/3] basic in order support for vhost_net Jason Wang
@ 2018-11-23  3:00 ` Jason Wang
  2018-11-23  3:00 ` [PATCH net-next 2/3] vhost_net: support in order feature Jason Wang
  2018-11-23  3:00 ` [PATCH net-next 3/3] vhost: don't touch avail ring if in_order is negotiated Jason Wang
  2 siblings, 0 replies; 8+ messages in thread
From: Jason Wang @ 2018-11-23  3:00 UTC (permalink / raw)
  To: mst, jasowang, kvm, virtualization, netdev, linux-kernel

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 include/uapi/linux/virtio_config.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/include/uapi/linux/virtio_config.h b/include/uapi/linux/virtio_config.h
index 449132c76b1c..64496afc016d 100644
--- a/include/uapi/linux/virtio_config.h
+++ b/include/uapi/linux/virtio_config.h
@@ -75,6 +75,12 @@
  */
 #define VIRTIO_F_IOMMU_PLATFORM		33
 
+/*
+ * Device uses buffers in the same order in which they have been
+ * made available.
+ */
+#define VIRTIO_F_IN_ORDER		35
+
 /*
  * Does the device support Single Root I/O Virtualization?
  */
-- 
2.17.1



* [PATCH net-next 2/3] vhost_net: support in order feature
  2018-11-23  3:00 [PATCH net-next 0/3] basic in order support for vhost_net Jason Wang
  2018-11-23  3:00 ` [PATCH net-next 1/3] virtio: introduce in order feature bit Jason Wang
@ 2018-11-23  3:00 ` Jason Wang
  2018-11-23 15:49   ` Michael S. Tsirkin
  2018-11-23  3:00 ` [PATCH net-next 3/3] vhost: don't touch avail ring if in_order is negotiated Jason Wang
  2 siblings, 1 reply; 8+ messages in thread
From: Jason Wang @ 2018-11-23  3:00 UTC (permalink / raw)
  To: mst, jasowang, kvm, virtualization, netdev, linux-kernel

This makes vhost_net support the in order feature. This is as simple
as using the datacopy path when it is negotiated. An alternative is to
not advertise in order when zerocopy is enabled, which tends to be
suboptimal considering zerocopy may suffer from e.g. HOL issues.

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 drivers/vhost/net.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index d919284f103b..bdf5de5a7eb2 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -74,7 +74,8 @@ enum {
 	VHOST_NET_FEATURES = VHOST_FEATURES |
 			 (1ULL << VHOST_NET_F_VIRTIO_NET_HDR) |
 			 (1ULL << VIRTIO_NET_F_MRG_RXBUF) |
-			 (1ULL << VIRTIO_F_IOMMU_PLATFORM)
+			 (1ULL << VIRTIO_F_IOMMU_PLATFORM) |
+	                 (1ULL << VIRTIO_F_IN_ORDER)
 };
 
 enum {
@@ -971,7 +972,8 @@ static void handle_tx(struct vhost_net *net)
 	vhost_disable_notify(&net->dev, vq);
 	vhost_net_disable_vq(net, vq);
 
-	if (vhost_sock_zcopy(sock))
+	if (vhost_sock_zcopy(sock) &&
+	    !vhost_has_feature(vq, VIRTIO_F_IN_ORDER))
 		handle_tx_zerocopy(net, sock);
 	else
 		handle_tx_copy(net, sock);
-- 
2.17.1



* [PATCH net-next 3/3] vhost: don't touch avail ring if in_order is negotiated
  2018-11-23  3:00 [PATCH net-next 0/3] basic in order support for vhost_net Jason Wang
  2018-11-23  3:00 ` [PATCH net-next 1/3] virtio: introduce in order feature bit Jason Wang
  2018-11-23  3:00 ` [PATCH net-next 2/3] vhost_net: support in order feature Jason Wang
@ 2018-11-23  3:00 ` Jason Wang
  2018-11-23 15:41   ` Michael S. Tsirkin
  2 siblings, 1 reply; 8+ messages in thread
From: Jason Wang @ 2018-11-23  3:00 UTC (permalink / raw)
  To: mst, jasowang, kvm, virtualization, netdev, linux-kernel

The device uses the descriptor table in order, so there's no need to
read the head index from the available ring. This eliminates the cache
contention on the avail ring completely.

Virtio-user + vhost_kernel + XDP_DROP gives about a ~10% improvement on
TX, from 4.8Mpps to 5.3Mpps, on an Intel(R) Core(TM) i7-5600U CPU @
2.60GHz.

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 drivers/vhost/vhost.c | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 3a5f81a66d34..c8be151bc897 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -2002,6 +2002,7 @@ int vhost_get_vq_desc(struct vhost_virtqueue *vq,
 	__virtio16 avail_idx;
 	__virtio16 ring_head;
 	int ret, access;
+	bool in_order = vhost_has_feature(vq, VIRTIO_F_IN_ORDER);
 
 	/* Check it isn't doing very strange things with descriptor numbers. */
 	last_avail_idx = vq->last_avail_idx;
@@ -2034,15 +2035,19 @@ int vhost_get_vq_desc(struct vhost_virtqueue *vq,
 
 	/* Grab the next descriptor number they're advertising, and increment
 	 * the index we've seen. */
-	if (unlikely(vhost_get_avail(vq, ring_head,
-		     &vq->avail->ring[last_avail_idx & (vq->num - 1)]))) {
-		vq_err(vq, "Failed to read head: idx %d address %p\n",
-		       last_avail_idx,
-		       &vq->avail->ring[last_avail_idx % vq->num]);
-		return -EFAULT;
+	if (!in_order) {
+		if (unlikely(vhost_get_avail(vq, ring_head,
+		    &vq->avail->ring[last_avail_idx & (vq->num - 1)]))) {
+			vq_err(vq, "Failed to read head: idx %d address %p\n",
+				last_avail_idx,
+				&vq->avail->ring[last_avail_idx % vq->num]);
+			return -EFAULT;
+		}
+		head = vhost16_to_cpu(vq, ring_head);
+	} else {
+		head = last_avail_idx & (vq->num - 1);
 	}
 
-	head = vhost16_to_cpu(vq, ring_head);
 
 	/* If their number is silly, that's an error. */
 	if (unlikely(head >= vq->num)) {
-- 
2.17.1



* Re: [PATCH net-next 3/3] vhost: don't touch avail ring if in_order is negotiated
  2018-11-23  3:00 ` [PATCH net-next 3/3] vhost: don't touch avail ring if in_order is negotiated Jason Wang
@ 2018-11-23 15:41   ` Michael S. Tsirkin
  2018-11-26  4:01     ` Jason Wang
  0 siblings, 1 reply; 8+ messages in thread
From: Michael S. Tsirkin @ 2018-11-23 15:41 UTC (permalink / raw)
  To: Jason Wang; +Cc: kvm, virtualization, netdev, linux-kernel

On Fri, Nov 23, 2018 at 11:00:16AM +0800, Jason Wang wrote:
> The device uses the descriptor table in order, so there's no need to
> read the head index from the available ring. This eliminates the cache
> contention on the avail ring completely.

Well, this isn't what the in order feature says in the spec.

It forces the used ring to be in the same order as
the available ring. So I don't think you can skip
checking the available ring. And in fact, depending on
ring size and workload, using all of the descriptor buffer might
cause a slowdown.
Rather, you should be able to get
about the same speedup, but from skipping checking
the used ring in virtio.


> Virtio-user + vhost_kernel + XDP_DROP gives about a ~10% improvement on
> TX, from 4.8Mpps to 5.3Mpps, on an Intel(R) Core(TM) i7-5600U CPU @
> 2.60GHz.
> 
> Signed-off-by: Jason Wang <jasowang@redhat.com>
> ---
>  drivers/vhost/vhost.c | 19 ++++++++++++-------
>  1 file changed, 12 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index 3a5f81a66d34..c8be151bc897 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -2002,6 +2002,7 @@ int vhost_get_vq_desc(struct vhost_virtqueue *vq,
>  	__virtio16 avail_idx;
>  	__virtio16 ring_head;
>  	int ret, access;
> +	bool in_order = vhost_has_feature(vq, VIRTIO_F_IN_ORDER);
>  
>  	/* Check it isn't doing very strange things with descriptor numbers. */
>  	last_avail_idx = vq->last_avail_idx;
> @@ -2034,15 +2035,19 @@ int vhost_get_vq_desc(struct vhost_virtqueue *vq,
>  
>  	/* Grab the next descriptor number they're advertising, and increment
>  	 * the index we've seen. */
> -	if (unlikely(vhost_get_avail(vq, ring_head,
> -		     &vq->avail->ring[last_avail_idx & (vq->num - 1)]))) {
> -		vq_err(vq, "Failed to read head: idx %d address %p\n",
> -		       last_avail_idx,
> -		       &vq->avail->ring[last_avail_idx % vq->num]);
> -		return -EFAULT;
> +	if (!in_order) {
> +		if (unlikely(vhost_get_avail(vq, ring_head,
> +		    &vq->avail->ring[last_avail_idx & (vq->num - 1)]))) {
> +			vq_err(vq, "Failed to read head: idx %d address %p\n",
> +				last_avail_idx,
> +				&vq->avail->ring[last_avail_idx % vq->num]);
> +			return -EFAULT;
> +		}
> +		head = vhost16_to_cpu(vq, ring_head);
> +	} else {
> +		head = last_avail_idx & (vq->num - 1);
>  	}
>  
> -	head = vhost16_to_cpu(vq, ring_head);
>  
>  	/* If their number is silly, that's an error. */
>  	if (unlikely(head >= vq->num)) {
> -- 
> 2.17.1


* Re: [PATCH net-next 2/3] vhost_net: support in order feature
  2018-11-23  3:00 ` [PATCH net-next 2/3] vhost_net: support in order feature Jason Wang
@ 2018-11-23 15:49   ` Michael S. Tsirkin
  2018-11-26  3:52     ` Jason Wang
  0 siblings, 1 reply; 8+ messages in thread
From: Michael S. Tsirkin @ 2018-11-23 15:49 UTC (permalink / raw)
  To: Jason Wang; +Cc: kvm, virtualization, netdev, linux-kernel

On Fri, Nov 23, 2018 at 11:00:15AM +0800, Jason Wang wrote:
> This makes vhost_net support the in order feature. This is as simple
> as using the datacopy path when it is negotiated. An alternative is to
> not advertise in order when zerocopy is enabled, which tends to be
> suboptimal considering zerocopy may suffer from e.g. HOL issues.

Well, IIRC vhost_zerocopy_signal_used is used to
actually reorder the used ring to match the available ring.
So with a big comment explaining why that is so,
we could just enable IN_ORDER there too.

> 
> Signed-off-by: Jason Wang <jasowang@redhat.com>
> ---
>  drivers/vhost/net.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> index d919284f103b..bdf5de5a7eb2 100644
> --- a/drivers/vhost/net.c
> +++ b/drivers/vhost/net.c
> @@ -74,7 +74,8 @@ enum {
>  	VHOST_NET_FEATURES = VHOST_FEATURES |
>  			 (1ULL << VHOST_NET_F_VIRTIO_NET_HDR) |
>  			 (1ULL << VIRTIO_NET_F_MRG_RXBUF) |
> -			 (1ULL << VIRTIO_F_IOMMU_PLATFORM)
> +			 (1ULL << VIRTIO_F_IOMMU_PLATFORM) |
> +	                 (1ULL << VIRTIO_F_IN_ORDER)
>  };
>  
>  enum {
> @@ -971,7 +972,8 @@ static void handle_tx(struct vhost_net *net)
>  	vhost_disable_notify(&net->dev, vq);
>  	vhost_net_disable_vq(net, vq);
>  
> -	if (vhost_sock_zcopy(sock))
> +	if (vhost_sock_zcopy(sock) &&
> +	    !vhost_has_feature(vq, VIRTIO_F_IN_ORDER))
>  		handle_tx_zerocopy(net, sock);
>  	else
>  		handle_tx_copy(net, sock);
> -- 
> 2.17.1


* Re: [PATCH net-next 2/3] vhost_net: support in order feature
  2018-11-23 15:49   ` Michael S. Tsirkin
@ 2018-11-26  3:52     ` Jason Wang
  0 siblings, 0 replies; 8+ messages in thread
From: Jason Wang @ 2018-11-26  3:52 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: kvm, virtualization, netdev, linux-kernel


On 2018/11/23 11:49 PM, Michael S. Tsirkin wrote:
> On Fri, Nov 23, 2018 at 11:00:15AM +0800, Jason Wang wrote:
>> This makes vhost_net support the in order feature. This is as simple
>> as using the datacopy path when it is negotiated. An alternative is to
>> not advertise in order when zerocopy is enabled, which tends to be
>> suboptimal considering zerocopy may suffer from e.g. HOL issues.
> Well, IIRC vhost_zerocopy_signal_used is used to
> actually reorder the used ring to match the available ring.
> So with a big comment explaining why that is so,
> we could just enable IN_ORDER there too.
>

The problem is that we allow switching between zerocopy and datacopy.

And what's more important, if we allowed in order for zerocopy, a single
delayed packet could stall all the rest.

Thanks



* Re: [PATCH net-next 3/3] vhost: don't touch avail ring if in_order is negotiated
  2018-11-23 15:41   ` Michael S. Tsirkin
@ 2018-11-26  4:01     ` Jason Wang
  0 siblings, 0 replies; 8+ messages in thread
From: Jason Wang @ 2018-11-26  4:01 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: kvm, virtualization, netdev, linux-kernel


On 2018/11/23 11:41 PM, Michael S. Tsirkin wrote:
> On Fri, Nov 23, 2018 at 11:00:16AM +0800, Jason Wang wrote:
>> The device uses the descriptor table in order, so there's no need to
>> read the head index from the available ring. This eliminates the cache
>> contention on the avail ring completely.
> Well, this isn't what the in order feature says in the spec.
>
> It forces the used ring to be in the same order as
> the available ring. So I don't think you can skip
> checking the available ring.


Maybe I'm missing something. The spec
(https://github.com/oasis-tcs/virtio-spec master) says: "If
VIRTIO_F_IN_ORDER has been negotiated, driver uses descriptors in ring
order: starting from offset 0 in the table, and wrapping around at the
end of the table."

Even if I were wrong, maybe it's time to mandate this, considering the
obvious improvement it brings? And maybe what you said is the reason
that we allow the following optimization only for the packed ring?

"notify the use of a batch of buffers to the driver by only writing out 
a single used descriptor with the Buffer ID corresponding to the last 
descriptor in the batch. "

This seems like another good optimization for the packed ring as well.


> And in fact, depending on
> ring size and workload, using all of the descriptor buffer might
> cause a slowdown.


This is not the fault of in order but of the queue size, I believe?


> Rather you should be able to get
> about the same speedup, but from skipping checking
> the used ring in virtio.


Yes, I've made such changes in the virtio-net pmd. But since we're
testing it with vhost-kernel, the main contention was on the avail
ring, so the improvement was not obvious.

Thanks


>
>
>> Virtio-user + vhost_kernel + XDP_DROP gives about a ~10% improvement on
>> TX, from 4.8Mpps to 5.3Mpps, on an Intel(R) Core(TM) i7-5600U CPU @
>> 2.60GHz.
>>
>> Signed-off-by: Jason Wang <jasowang@redhat.com>
>> ---
>>   drivers/vhost/vhost.c | 19 ++++++++++++-------
>>   1 file changed, 12 insertions(+), 7 deletions(-)
>>
>> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
>> index 3a5f81a66d34..c8be151bc897 100644
>> --- a/drivers/vhost/vhost.c
>> +++ b/drivers/vhost/vhost.c
>> @@ -2002,6 +2002,7 @@ int vhost_get_vq_desc(struct vhost_virtqueue *vq,
>>   	__virtio16 avail_idx;
>>   	__virtio16 ring_head;
>>   	int ret, access;
>> +	bool in_order = vhost_has_feature(vq, VIRTIO_F_IN_ORDER);
>>   
>>   	/* Check it isn't doing very strange things with descriptor numbers. */
>>   	last_avail_idx = vq->last_avail_idx;
>> @@ -2034,15 +2035,19 @@ int vhost_get_vq_desc(struct vhost_virtqueue *vq,
>>   
>>   	/* Grab the next descriptor number they're advertising, and increment
>>   	 * the index we've seen. */
>> -	if (unlikely(vhost_get_avail(vq, ring_head,
>> -		     &vq->avail->ring[last_avail_idx & (vq->num - 1)]))) {
>> -		vq_err(vq, "Failed to read head: idx %d address %p\n",
>> -		       last_avail_idx,
>> -		       &vq->avail->ring[last_avail_idx % vq->num]);
>> -		return -EFAULT;
>> +	if (!in_order) {
>> +		if (unlikely(vhost_get_avail(vq, ring_head,
>> +		    &vq->avail->ring[last_avail_idx & (vq->num - 1)]))) {
>> +			vq_err(vq, "Failed to read head: idx %d address %p\n",
>> +				last_avail_idx,
>> +				&vq->avail->ring[last_avail_idx % vq->num]);
>> +			return -EFAULT;
>> +		}
>> +		head = vhost16_to_cpu(vq, ring_head);
>> +	} else {
>> +		head = last_avail_idx & (vq->num - 1);
>>   	}
>>   
>> -	head = vhost16_to_cpu(vq, ring_head);
>>   
>>   	/* If their number is silly, that's an error. */
>>   	if (unlikely(head >= vq->num)) {
>> -- 
>> 2.17.1

