From: Lorenzo Bianconi <lorenzo@kernel.org>
To: netdev@vger.kernel.org
Cc: bpf@vger.kernel.org, lorenzo.bianconi@redhat.com,
davem@davemloft.net, kuba@kernel.org, brouer@redhat.com,
ilias.apalodimas@linaro.org
Subject: [PATCH v3 net-next 1/5] net: xdp: introduce bulking for xdp tx return path
Date: Wed, 4 Nov 2020 11:22:54 +0100
Message-ID: <5ef0c2886518d8ae1577c8b60ea6ef55d031673e.1604484917.git.lorenzo@kernel.org>
In-Reply-To: <cover.1604484917.git.lorenzo@kernel.org>

The XDP bulk APIs introduce a defer/flush mechanism to return
pages belonging to the same xdp_mem_allocator object (identified
via the mem.id field) in bulk, improving I-cache and D-cache usage,
since xdp_return_frame is usually run inside the driver NAPI tx
completion loop.
The bulk queue size is set to 16 to match the bulk size used by
the XDP_REDIRECT code path. The queue is flushed when it is full
or when mem.id changes.
The xdp_frame_bulk structure is usually stored/allocated on the
function call-stack to avoid locking penalties.
The current implementation considers only the page_pool memory
model; frames from other memory models fall back to the regular
single-frame return path.
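
A minimal usage sketch for a driver NAPI tx completion handler
follows. Only the xdp_frame_bulk APIs are introduced by this patch;
tx_ring, ndescs and the descriptor walk are hypothetical
placeholders:

	/* on the stack of the NAPI tx completion handler */
	struct xdp_frame_bulk bq;
	int i;

	bq.xa = NULL;	/* first xdp_return_frame_bulk() call fills it */

	for (i = 0; i < ndescs; i++) {
		/* hypothetical: fetch the frame from the tx ring */
		struct xdp_frame *xdpf = tx_ring[i].xdpf;

		/* defer the return; the queue is flushed internally
		 * when it is full or when mem.id changes
		 */
		xdp_return_frame_bulk(xdpf, &bq);
	}

	/* return any frames still pending in the bulk queue */
	xdp_flush_frame_bulk(&bq);

Since the bulk queue lives on the call-stack, each NAPI instance
owns its own queue and no locking is required to protect it.
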
Suggested-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 include/net/xdp.h |  9 +++++++
 net/core/xdp.c    | 61 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 70 insertions(+)
diff --git a/include/net/xdp.h b/include/net/xdp.h
index 3814fb631d52..a1f48a73e6df 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -104,6 +104,12 @@ struct xdp_frame {
 	struct net_device *dev_rx; /* used by cpumap */
 };
+#define XDP_BULK_QUEUE_SIZE	16
+struct xdp_frame_bulk {
+	int count;
+	void *xa;
+	void *q[XDP_BULK_QUEUE_SIZE];
+};
 static inline struct skb_shared_info *
 xdp_get_shared_info_from_frame(struct xdp_frame *frame)
@@ -194,6 +200,9 @@ struct xdp_frame *xdp_convert_buff_to_frame(struct xdp_buff *xdp)
 void xdp_return_frame(struct xdp_frame *xdpf);
 void xdp_return_frame_rx_napi(struct xdp_frame *xdpf);
 void xdp_return_buff(struct xdp_buff *xdp);
+void xdp_flush_frame_bulk(struct xdp_frame_bulk *bq);
+void xdp_return_frame_bulk(struct xdp_frame *xdpf,
+			   struct xdp_frame_bulk *bq);
 /* When sending xdp_frame into the network stack, then there is no
  * return point callback, which is needed to release e.g. DMA-mapping
diff --git a/net/core/xdp.c b/net/core/xdp.c
index 48aba933a5a8..66ac275a0360 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -380,6 +380,67 @@ void xdp_return_frame_rx_napi(struct xdp_frame *xdpf)
 }
 EXPORT_SYMBOL_GPL(xdp_return_frame_rx_napi);
+/* XDP bulk APIs introduce a defer/flush mechanism to return
+ * pages belonging to the same xdp_mem_allocator object
+ * (identified via the mem.id field) in bulk to optimize
+ * I-cache and D-cache.
+ * The bulk queue size is set to 16 to be aligned to how
+ * XDP_REDIRECT bulking works. The bulk is flushed when
+ * it is full or when mem.id changes.
+ * xdp_frame_bulk is usually stored/allocated on the function
+ * call-stack to avoid locking penalties.
+ */
+void xdp_flush_frame_bulk(struct xdp_frame_bulk *bq)
+{
+	struct xdp_mem_allocator *xa = bq->xa;
+	int i;
+
+	if (unlikely(!xa))
+		return;
+
+	for (i = 0; i < bq->count; i++) {
+		struct page *page = virt_to_head_page(bq->q[i]);
+
+		page_pool_put_full_page(xa->page_pool, page, false);
+	}
+	bq->count = 0;
+}
+EXPORT_SYMBOL_GPL(xdp_flush_frame_bulk);
+
+void xdp_return_frame_bulk(struct xdp_frame *xdpf,
+			   struct xdp_frame_bulk *bq)
+{
+	struct xdp_mem_info *mem = &xdpf->mem;
+	struct xdp_mem_allocator *xa;
+
+	if (mem->type != MEM_TYPE_PAGE_POOL) {
+		__xdp_return(xdpf->data, &xdpf->mem, false);
+		return;
+	}
+
+	rcu_read_lock();
+
+	xa = bq->xa;
+	if (unlikely(!xa)) {
+		xa = rhashtable_lookup(mem_id_ht, &mem->id, mem_id_rht_params);
+		bq->count = 0;
+		bq->xa = xa;
+	}
+
+	if (bq->count == XDP_BULK_QUEUE_SIZE)
+		xdp_flush_frame_bulk(bq);
+
+	if (mem->id != xa->mem.id) {
+		xdp_flush_frame_bulk(bq);
+		bq->xa = rhashtable_lookup(mem_id_ht, &mem->id, mem_id_rht_params);
+	}
+
+	bq->q[bq->count++] = xdpf->data;
+
+	rcu_read_unlock();
+}
+EXPORT_SYMBOL_GPL(xdp_return_frame_bulk);
+
 void xdp_return_buff(struct xdp_buff *xdp)
 {
 	__xdp_return(xdp->data, &xdp->rxq->mem, true);
--
2.26.2