From: Kevin Laatz
To: netdev@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net, bjorn.topel@intel.com, magnus.karlsson@intel.com, jakub.kicinski@netronome.com, jonathan.lemon@gmail.com
Cc: bruce.richardson@intel.com, ciara.loftus@intel.com, bpf@vger.kernel.org, intel-wired-lan@lists.osuosl.org, Kevin Laatz
Subject: [PATCH v2 10/10] doc/af_xdp: include unaligned chunk case
Date: Tue, 16 Jul 2019 03:06:37 +0000
Message-Id: <20190716030637.5634-11-kevin.laatz@intel.com>
In-Reply-To: <20190716030637.5634-1-kevin.laatz@intel.com>
References: <20190620090958.2135-1-kevin.laatz@intel.com> <20190716030637.5634-1-kevin.laatz@intel.com>

With the addition of the unaligned chunks mode, the documentation needs
to be updated to indicate that the incoming addr to the fill ring will
only be masked if the user application is run in the aligned chunk mode.

This patch also adds a line to explicitly indicate that the incoming
addr will not be masked if running the user application in the
unaligned chunk mode.

Signed-off-by: Kevin Laatz
---
 Documentation/networking/af_xdp.rst | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/Documentation/networking/af_xdp.rst b/Documentation/networking/af_xdp.rst
index eeedc2e826aa..83f7ae5fc045 100644
--- a/Documentation/networking/af_xdp.rst
+++ b/Documentation/networking/af_xdp.rst
@@ -153,10 +153,12 @@ an example, if the UMEM is 64k and each chunk is 4k, then the UMEM has

 Frames passed to the kernel are used for the ingress path (RX rings).

-The user application produces UMEM addrs to this ring. Note that the
-kernel will mask the incoming addr. E.g. for a chunk size of 2k, the
-log2(2048) LSB of the addr will be masked off, meaning that 2048, 2050
-and 3000 refers to the same chunk.
+The user application produces UMEM addrs to this ring. Note that, if
+running the application with aligned chunk mode, the kernel will mask
+the incoming addr. E.g. for a chunk size of 2k, the log2(2048) LSB of
+the addr will be masked off, meaning that 2048, 2050 and 3000 refers
+to the same chunk. If the user application is run in the unaligned
+chunks mode, then the incoming addr will be left untouched.

 UMEM Completion Ring

-- 
2.17.1
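
For illustration, below is a minimal user-space sketch of the masking
behaviour the new text describes. The mask_addr() helper is hypothetical
(it is not part of the kernel or libbpf API); the 2k chunk size and the
sample addrs 2048, 2050 and 3000 are taken from the documentation above.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper: reproduces the masking described for the aligned
 * chunk mode. For a 2k chunk, the log2(2048) = 11 LSBs are cleared.
 */
static uint64_t mask_addr(uint64_t addr, uint64_t chunk_size)
{
	return addr & ~(chunk_size - 1);	/* chunk_size is a power of two */
}

int main(void)
{
	const uint64_t chunk_size = 2048;
	const uint64_t addrs[] = { 2048, 2050, 3000 };
	size_t i;

	for (i = 0; i < sizeof(addrs) / sizeof(addrs[0]); i++) {
		/* Aligned chunk mode: all three addrs refer to the same chunk. */
		printf("aligned:   addr %llu -> chunk %llu\n",
		       (unsigned long long)addrs[i],
		       (unsigned long long)mask_addr(addrs[i], chunk_size));

		/* Unaligned chunks mode: the kernel leaves the addr untouched. */
		printf("unaligned: addr %llu -> addr  %llu\n",
		       (unsigned long long)addrs[i],
		       (unsigned long long)addrs[i]);
	}
	return 0;
}

Running this prints the same chunk base (2048) for all three aligned-mode
addrs, matching the example in the documentation, while the unaligned-mode
lines echo the addrs unchanged.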