From: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
To: bjorn.topel@intel.com
Cc: Magnus Karlsson, Jonathan Lemon, "David S. Miller", Jakub Kicinski, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, netdev@vger.kernel.org, bpf@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 3/3] xsk: set tx/rx the min entries
Date: Wed, 18 Nov 2020 16:25:10 +0800
X-Mailer: git-send-email 1.8.3.1
References: <3306b4d8-8689-b0e7-3f6d-c3ad873b7093@intel.com>

We expect the number of tx entries to be at least twice the number of
packets that the NIC can send in a single batch. That way, whenever
fewer than half of the tx ring's entries remain, it is guaranteed that
completed descriptors are already available in the cq for recycling.
Likewise, an rx ring of this size will not drop packets when the NIC
delivers a full batch at once. Of course, 1024 is only an estimate; the
batch size may differ from NIC to NIC.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 include/uapi/linux/if_xdp.h | 2 ++
 net/xdp/xsk.c               | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/include/uapi/linux/if_xdp.h b/include/uapi/linux/if_xdp.h
index a78a809..d55ba79 100644
--- a/include/uapi/linux/if_xdp.h
+++ b/include/uapi/linux/if_xdp.h
@@ -64,6 +64,8 @@ struct xdp_mmap_offsets {
 #define XDP_STATISTICS			7
 #define XDP_OPTIONS			8
 
+#define XDP_RXTX_RING_MIN_ENTRIES	1024
+
 struct xdp_umem_reg {
 	__u64 addr; /* Start of packet data area */
 	__u64 len; /* Length of packet data area */
diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index bc3d4ece..e62c795 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -831,6 +831,8 @@ static int xsk_setsockopt(struct socket *sock, int level, int optname,
 			return -EINVAL;
 		if (copy_from_sockptr(&entries, optval, sizeof(entries)))
 			return -EFAULT;
+		if (entries < XDP_RXTX_RING_MIN_ENTRIES)
+			return -EINVAL;
 
 		mutex_lock(&xs->mutex);
 		if (xs->state != XSK_READY) {
-- 
1.8.3.1