From mboxrd@z Thu Jan 1 00:00:00 1970
From: Kevin Laatz <kevin.laatz@intel.com>
To: netdev@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net,
	bjorn.topel@intel.com, magnus.karlsson@intel.com,
	jakub.kicinski@netronome.com, jonathan.lemon@gmail.com,
	saeedm@mellanox.com, maximmi@mellanox.com, stephen@networkplumber.org
Cc: bruce.richardson@intel.com, ciara.loftus@intel.com, bpf@vger.kernel.org,
	intel-wired-lan@lists.osuosl.org, Kevin Laatz <kevin.laatz@intel.com>
Subject: [PATCH bpf-next v4 10/11] samples/bpf: use hugepages in xdpsock app
Date: Tue, 30 Jul 2019 08:53:59 +0000
Message-Id: <20190730085400.10376-11-kevin.laatz@intel.com>
In-Reply-To: <20190730085400.10376-1-kevin.laatz@intel.com>
References: <20190724051043.14348-1-kevin.laatz@intel.com>
	<20190730085400.10376-1-kevin.laatz@intel.com>

This patch modifies xdpsock to use mmap instead of posix_memalign. With
this change, we can use hugepages when running the application in
unaligned chunk mode. Using hugepages makes it more likely that we have
physically contiguous memory, which supports the unaligned chunk mode
better.
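
For reference, the allocation pattern the sample switches to can be
exercised on its own; a minimal sketch follows. The frame count, frame
size and error handling here are illustrative only (not taken verbatim
from xdpsock_user.c), and MAP_HUGETLB assumes hugepages have been
reserved on the system, e.g. via /proc/sys/vm/nr_hugepages.

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

#define NUM_FRAMES 4096
#define FRAME_SIZE 4096	/* 16 MB total, a multiple of the 2 MB hugepage size */

int main(void)
{
	size_t len = (size_t)NUM_FRAMES * FRAME_SIZE;

	/* MAP_HUGETLB requests hugetlbfs backing; drop it to fall back
	 * to regular 4 KB pages. */
	void *bufs = mmap(NULL, len, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (bufs == MAP_FAILED) {
		perror("mmap");
		exit(EXIT_FAILURE);
	}

	/* ... register bufs as the UMEM area and run the socket(s) ... */

	munmap(bufs, len);
	return 0;
}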
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
---
 samples/bpf/xdpsock_user.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/samples/bpf/xdpsock_user.c b/samples/bpf/xdpsock_user.c
index 62b2059cd0e3..d1c61ec0e697 100644
--- a/samples/bpf/xdpsock_user.c
+++ b/samples/bpf/xdpsock_user.c
@@ -70,6 +70,7 @@ static int opt_poll;
 static int opt_interval = 1;
 static u16 opt_umem_flags;
 static int opt_unaligned_chunks;
+static int opt_mmap_flags;
 static u32 opt_xdp_bind_flags;
 static int opt_xsk_frame_size = XSK_UMEM__DEFAULT_FRAME_SIZE;
 static __u32 prog_id;
@@ -435,6 +436,7 @@ static void parse_command_line(int argc, char **argv)
 	case 'u':
 		opt_umem_flags |= XDP_UMEM_UNALIGNED_CHUNK_FLAG;
 		opt_unaligned_chunks = 1;
+		opt_mmap_flags = MAP_HUGETLB;
 		break;
 	case 'F':
 		opt_xdp_flags &= ~XDP_FLAGS_UPDATE_IF_NOEXIST;
@@ -694,11 +696,14 @@ int main(int argc, char **argv)
 		exit(EXIT_FAILURE);
 	}
 
-	ret = posix_memalign(&bufs, getpagesize(), /* PAGE_SIZE aligned */
-			     NUM_FRAMES * opt_xsk_frame_size);
-	if (ret)
-		exit_with_error(ret);
-
+	/* Reserve memory for the umem. Use hugepages if unaligned chunk mode */
+	bufs = mmap(NULL, NUM_FRAMES * opt_xsk_frame_size,
+		    PROT_READ | PROT_WRITE,
+		    MAP_PRIVATE | MAP_ANONYMOUS | opt_mmap_flags, -1, 0);
+	if (bufs == MAP_FAILED) {
+		printf("ERROR: mmap failed\n");
+		exit(EXIT_FAILURE);
+	}
 	/* Create sockets... */
 	umem = xsk_configure_umem(bufs, NUM_FRAMES * opt_xsk_frame_size);
 	xsks[num_socks++] = xsk_configure_socket(umem);
-- 
2.17.1