From mboxrd@z Thu Jan  1 00:00:00 1970
From: Anatoly Burakov <anatoly.burakov@intel.com>
To: dev@dpdk.org
Cc: keith.wiles@intel.com, jianfeng.tan@intel.com, andras.kovacs@ericsson.com,
 laszlo.vadkeri@ericsson.com, benjamin.walker@intel.com,
 bruce.richardson@intel.com, thomas@monjalon.net, konstantin.ananyev@intel.com,
 kuralamudhan.ramakrishnan@intel.com, louise.m.daly@intel.com,
 nelio.laranjeiro@6wind.com, yskoh@mellanox.com, pepperjo@japf.ch,
 jerin.jacob@caviumnetworks.com, hemant.agrawal@nxp.com, olivier.matz@6wind.com,
 shreyansh.jain@nxp.com, gowrishankar.m@linux.vnet.ibm.com
Subject: [PATCH v3 25/68] eal: add function to walk all memsegs
Date: Wed, 4 Apr 2018 00:21:37 +0100
Message-ID: <89c78819570fc28e4240241bd513a6ef64f8a7db.1522797505.git.anatoly.burakov@intel.com>
List-Id: DPDK patches and discussions

For code that might need to iterate over the list of allocated memory
segments, using this API will make it more resilient to internal API
changes and will prevent the same iteration code from being copied over
and over again. Additionally, locking will be implemented down the line,
so users of this API will not need to care about locking either.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
 lib/librte_eal/common/eal_common_memory.c  | 21 +++++++++++++++++++++
 lib/librte_eal/common/include/rte_memory.h | 25 +++++++++++++++++++++++++
 lib/librte_eal/rte_eal_version.map         |  1 +
 3 files changed, 47 insertions(+)

diff --git a/lib/librte_eal/common/eal_common_memory.c b/lib/librte_eal/common/eal_common_memory.c
index 5b8ced4..947db1f 100644
--- a/lib/librte_eal/common/eal_common_memory.c
+++ b/lib/librte_eal/common/eal_common_memory.c
@@ -218,6 +218,27 @@ rte_mem_lock_page(const void *virt)
 	return mlock((void *)aligned, page_size);
 }
 
+int __rte_experimental
+rte_memseg_walk(rte_memseg_walk_t func, void *arg)
+{
+	struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+	int i, ret;
+
+	for (i = 0; i < RTE_MAX_MEMSEG; i++) {
+		const struct rte_memseg *ms = &mcfg->memseg[i];
+
+		if (ms->addr == NULL)
+			continue;
+
+		ret = func(ms, arg);
+		if (ret < 0)
+			return -1;
+		if (ret > 0)
+			return 1;
+	}
+	return 0;
+}
+
 /* init memory subsystem */
 int
 rte_eal_memory_init(void)
diff --git a/lib/librte_eal/common/include/rte_memory.h b/lib/librte_eal/common/include/rte_memory.h
index 302f865..93eadaa 100644
--- a/lib/librte_eal/common/include/rte_memory.h
+++ b/lib/librte_eal/common/include/rte_memory.h
@@ -20,6 +20,7 @@ extern "C" {
 #endif
 
 #include <stdint.h>
+#include <stddef.h>
 #include <stdio.h>
 
 __extension__
@@ -130,6 +131,30 @@ phys_addr_t rte_mem_virt2phy(const void *virt);
 rte_iova_t rte_mem_virt2iova(const void *virt);
 
 /**
+ * Memseg walk function prototype.
+ *
+ * Returning 0 will continue walk
+ * Returning 1 will stop the walk
+ * Returning -1 will stop the walk and report error
+ */
+typedef int (*rte_memseg_walk_t)(const struct rte_memseg *ms, void *arg);
+
+/**
+ * Walk list of all memsegs.
+ *
+ * @param func
+ *   Iterator function
+ * @param arg
+ *   Argument passed to iterator
+ * @return
+ *   0 if walked over the entire list
+ *   1 if stopped by the user
+ *   -1 if user function reported error
+ */
+int __rte_experimental
+rte_memseg_walk(rte_memseg_walk_t func, void *arg);
+
+/**
  * Get the layout of the available physical memory.
  *
  * It can be useful for an application to have the full physical
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index 25e00de..7e9900d 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -223,6 +223,7 @@ EXPERIMENTAL {
 	rte_eal_mbuf_user_pool_ops;
 	rte_log_register_type_and_pick_level;
 	rte_malloc_dump_heaps;
+	rte_memseg_walk;
 	rte_memzone_reserve_contig;
 	rte_memzone_reserve_aligned_contig;
 	rte_memzone_reserve_bounded_contig;
-- 
2.7.4