From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jerin Jacob <jerinj@marvell.com>
To: dev@dpdk.org
Date: Wed, 15 Jan 2020 02:36:03 +0530
Message-ID: <20200114210603.1836160-1-jerinj@marvell.com>
X-Mailer: git-send-email 2.24.1
In-Reply-To: <20200113064941.2749356-1-jerinj@marvell.com>
References: <20200113064941.2749356-1-jerinj@marvell.com>
Subject: [dpdk-dev] [PATCH v4] mempool: remove memory wastage on non x86

From: Jerin Jacob <jerinj@marvell.com>

The existing optimize_object_size() function addresses the memory object
alignment constraint on x86 for better performance.

Different (micro) architectures may have different memory alignment
constraints for better performance, and these are not the same as what
the existing optimize_object_size() computes. Some use an XOR (kind of
CRC) scheme to enable DRAM channel distribution based on the address,
and some may have a different formula.

Introduce an arch_mem_object_align() function to abstract the
differences between (micro) architectures and avoid wasting memory on
mempool object alignment for architectures that do not require it.

Details on the amount of memory saved:

Currently, arm64 based architectures use the default (nchan=4, nrank=1).
The worst case is for an object whose size (including the mempool
header) is 2 cache lines, where it is padded to 3 cache lines (+50%).

Examples for cache line size = 64:

  orig        optimized
  64    ->    64      +0%
  128   ->    192     +50%
  192   ->    192     +0%
  256   ->    320     +25%
  320   ->    320     +0%
  384   ->    448     +16%
  ...
  2304  ->    2368    +2.7%  (~mbuf size)

Additional details:
https://www.mail-archive.com/dev@dpdk.org/msg149157.html

Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Reviewed-by: Gavin Hu
---
v4:
- s/X86/x86 in the doc/guides/prog_guide/mempool_lib.rst file (David)
- Remove Fixes and CC: stable@dpdk.org as well, as this is just an
  improvement. (David)
- Update the patch heading to remove a checkpatch warning caused by the
  above change.

v3:
- Change the comment for the MEMPOOL_F_NO_SPREAD flag to "Spreading
  among memory channels not required." (Stephen Hemminger)

v2:
- Changed the return type of arch_mem_object_align() to "unsigned int"
  from "unsigned" to fix the checkpatch issues (Olivier Matz)
- Updated the comments for MEMPOOL_F_NO_SPREAD (Olivier Matz)
- Updated the git comments to share the memory saving details.
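
For reference, the x86 spreading math that this patch keeps under
RTE_ARCH_X86 can be reproduced with the standalone sketch below. It is
only an illustration: nchan/nrank are hardcoded to the 4/1 defaults
quoted above, whereas the real code obtains them from the EAL via
rte_memory_get_nchannel()/rte_memory_get_nrank(). Built with a plain C
compiler, it prints the first six rows of the table in the commit log:

#include <stdio.h>

#define ALIGN 64 /* cache line size, standing in for RTE_MEMPOOL_ALIGN */

/* greatest common divisor (Euclid), standing in for get_gcd() */
static unsigned int gcd(unsigned int a, unsigned int b)
{
	while (b != 0) {
		unsigned int r = a % b;
		a = b;
		b = r;
	}
	return a;
}

int main(void)
{
	const unsigned int nchan = 4, nrank = 1; /* arm64 defaults */
	unsigned int obj_size;

	for (obj_size = ALIGN; obj_size <= 6 * ALIGN; obj_size += ALIGN) {
		/* round the object size up to whole cache lines... */
		unsigned int n = (obj_size + ALIGN - 1) / ALIGN;
		/* ...then pad until the line count is co-prime with
		 * nchan * nrank, so consecutive objects start on
		 * different channels/ranks */
		while (gcd(n, nchan * nrank) != 1)
			n++;
		printf("%4u -> %4u\n", obj_size, n * ALIGN);
	}
	return 0;
}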
 doc/guides/prog_guide/mempool_lib.rst |  6 +++---
 lib/librte_mempool/rte_mempool.c      | 17 +++++++++++++----
 lib/librte_mempool/rte_mempool.h      |  3 ++-
 3 files changed, 18 insertions(+), 8 deletions(-)

diff --git a/doc/guides/prog_guide/mempool_lib.rst b/doc/guides/prog_guide/mempool_lib.rst
index 3bb84b0a6..f8b430d65 100644
--- a/doc/guides/prog_guide/mempool_lib.rst
+++ b/doc/guides/prog_guide/mempool_lib.rst
@@ -27,10 +27,10 @@ In debug mode (CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG is enabled),
 statistics about get from/put in the pool are stored in the mempool structure.
 Statistics are per-lcore to avoid concurrent access to statistics counters.
 
-Memory Alignment Constraints
-----------------------------
+Memory Alignment Constraints on x86 architecture
+------------------------------------------------
 
-Depending on hardware memory configuration, performance can be greatly improved by adding a specific padding between objects.
+Depending on hardware memory configuration on X86 architecture, performance can be greatly improved by adding a specific padding between objects.
 The objective is to ensure that the beginning of each object starts on a different channel and rank in memory so that all channels are equally loaded.
 
 This is particularly true for packet buffers when doing L3 forwarding or flow classification.
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 78d8eb941..1909998e8 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -45,6 +45,7 @@ EAL_REGISTER_TAILQ(rte_mempool_tailq)
 #define CALC_CACHE_FLUSHTHRESH(c)	\
 	((typeof(c))((c) * CACHE_FLUSHTHRESH_MULTIPLIER))
 
+#if defined(RTE_ARCH_X86)
 /*
  * return the greatest common divisor between a and b (fast algorithm)
  *
@@ -74,12 +75,13 @@ static unsigned get_gcd(unsigned a, unsigned b)
 }
 
 /*
- * Depending on memory configuration, objects addresses are spread
+ * Depending on memory configuration on x86 arch, objects addresses are spread
  * between channels and ranks in RAM: the pool allocator will add
  * padding between objects. This function return the new size of the
  * object.
  */
-static unsigned optimize_object_size(unsigned obj_size)
+static unsigned int
+arch_mem_object_align(unsigned int obj_size)
 {
 	unsigned nrank, nchan;
 	unsigned new_obj_size;
@@ -99,6 +101,13 @@ static unsigned optimize_object_size(unsigned obj_size)
 		new_obj_size++;
 	return new_obj_size * RTE_MEMPOOL_ALIGN;
 }
+#else
+static unsigned int
+arch_mem_object_align(unsigned int obj_size)
+{
+	return obj_size;
+}
+#endif
 
 struct pagesz_walk_arg {
 	int socket_id;
@@ -234,8 +243,8 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
 	 */
 	if ((flags & MEMPOOL_F_NO_SPREAD) == 0) {
 		unsigned new_size;
-		new_size = optimize_object_size(sz->header_size + sz->elt_size +
-			sz->trailer_size);
+		new_size = arch_mem_object_align
+			(sz->header_size + sz->elt_size + sz->trailer_size);
 		sz->trailer_size = new_size - sz->header_size - sz->elt_size;
 	}
 
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index f81152af9..f48c94def 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -260,7 +260,8 @@ struct rte_mempool {
 #endif
 }  __rte_cache_aligned;
 
-#define MEMPOOL_F_NO_SPREAD 0x0001 /**< Do not spread among memory channels. */
+#define MEMPOOL_F_NO_SPREAD 0x0001
+/**< Spreading among memory channels not required. */
 #define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 /**< Do not align objs on cache lines.*/
 #define MEMPOOL_F_SP_PUT 0x0004 /**< Default put is "single-producer".*/
 #define MEMPOOL_F_SC_GET 0x0008 /**< Default get is "single-consumer".*/
-- 
2.24.1
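
P.S. For completeness, an application that does not want this spreading
at all can already opt out per pool with MEMPOOL_F_NO_SPREAD, which
makes rte_mempool_calc_obj_size() skip arch_mem_object_align() entirely,
on any architecture. A minimal sketch (the pool name, element count and
sizes below are made-up illustration values):

#include <rte_lcore.h>
#include <rte_mempool.h>

static struct rte_mempool *
create_unspread_pool(void)
{
	/* No channel/rank padding is added between objects here,
	 * whatever the architecture. */
	return rte_mempool_create("no_spread_pool",
			8191,              /* n: number of elements */
			2048,              /* elt_size */
			256,               /* per-lcore cache size */
			0,                 /* private data size */
			NULL, NULL,        /* mp_init, mp_init_arg */
			NULL, NULL,        /* obj_init, obj_init_arg */
			rte_socket_id(),   /* socket_id */
			MEMPOOL_F_NO_SPREAD);
}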