linux-kernel.vger.kernel.org archive mirror
* [PATCH v2] virtio_net: fix PAGE_SIZE > 64k
@ 2017-01-23 19:37 Michael S. Tsirkin
  2017-01-24 19:42 ` David Miller
  0 siblings, 1 reply; 12+ messages in thread
From: Michael S. Tsirkin @ 2017-01-23 19:37 UTC (permalink / raw)
  To: linux-kernel; +Cc: Jason Wang, virtualization, netdev, John Fastabend

I don't have any guests with PAGE_SIZE > 64k, but the
code seems clearly broken in that case:
PAGE_SIZE / MERGEABLE_BUFFER_ALIGN then needs more than
8 bits, so the code in mergeable_ctx_to_buf_address
does not give us the actual true size.

Cc: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---

changes from v1:
	fix build warnings

 drivers/net/virtio_net.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 4a10500..4dc373b 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -48,8 +48,16 @@ module_param(gso, bool, 0444);
  */
 DECLARE_EWMA(pkt_len, 1, 64)
 
+/* With mergeable buffers we align buffer address and use the low bits to
+ * encode its true size. Buffer size is up to 1 page so we need to align to
+ * square root of page size to ensure we reserve enough bits to encode the true
+ * size.
+ */
+#define MERGEABLE_BUFFER_MIN_ALIGN_SHIFT ((PAGE_SHIFT + 1) / 2)
+
 /* Minimum alignment for mergeable packet buffers. */
-#define MERGEABLE_BUFFER_ALIGN max(L1_CACHE_BYTES, 256)
+#define MERGEABLE_BUFFER_ALIGN max(L1_CACHE_BYTES, \
+				   1 << MERGEABLE_BUFFER_MIN_ALIGN_SHIFT)
 
 #define VIRTNET_DRIVER_VERSION "1.0.0"
 
-- 
MST

* Re: [PATCH v2] virtio_net: fix PAGE_SIZE > 64k
@ 2017-01-25  4:07 Alexei Starovoitov
  2017-01-25 14:23 ` Michael S. Tsirkin
  0 siblings, 1 reply; 12+ messages in thread
From: Alexei Starovoitov @ 2017-01-25  4:07 UTC (permalink / raw)
  To: John Fastabend
  Cc: Michael S. Tsirkin, David Miller, linux-kernel, Jason Wang,
	virtualization, netdev

On Tue, Jan 24, 2017 at 7:48 PM, John Fastabend
<john.fastabend@gmail.com> wrote:
>
> It is a concern on my side. I want XDP and Linux stack to work
> reasonably well together.

btw the micro benchmarks showed that the page-per-packet approach
that xdp took in mlx4 is about 10% slower than normal operation
for the tcp/ip stack. We thought that for our LB use case
it would be an acceptable slowdown, but it turned out that overall
we got a performance boost: the xdp model simplified user space
and made the data path faster, so we magically got extra free cpu
that is used for other apps on the same host, and an overall
perf win despite the extra overhead in tcp/ip.
Not all use cases are the same and not everyone will be as lucky,
so I'd like to see the performance of xdp_pass improving too, though
it turned out to be not as high a priority as I initially estimated.


end of thread, other threads:[~2017-01-25 14:23 UTC | newest]

Thread overview: 12+ messages
2017-01-23 19:37 [PATCH v2] virtio_net: fix PAGE_SIZE > 64k Michael S. Tsirkin
2017-01-24 19:42 ` David Miller
2017-01-24 19:53   ` Michael S. Tsirkin
2017-01-24 20:09     ` David Miller
2017-01-24 20:45       ` Michael S. Tsirkin
2017-01-24 20:53         ` David Miller
2017-01-24 21:07           ` Michael S. Tsirkin
2017-01-24 21:10             ` David Miller
2017-01-24 21:56               ` Michael S. Tsirkin
2017-01-25  3:48                 ` John Fastabend
2017-01-25  4:07 Alexei Starovoitov
2017-01-25 14:23 ` Michael S. Tsirkin
