From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 23 Jul 2019 11:08:17 +0530
Subject: [dpdk-dev] [PATCH v8 1/5] mempool: populate mempool with page sized chunks of memory
Message-ID: <20190723053821.30227-2-vattunuru@marvell.com>
In-Reply-To: <20190723053821.30227-1-vattunuru@marvell.com>
References: <20190717090408.13717-1-vattunuru@marvell.com>
 <20190723053821.30227-1-vattunuru@marvell.com>
List-Id: DPDK patches and discussions
X-Mailer: git-send-email 2.8.4
MIME-Version: 1.0
Content-Type: text/plain

From: Vamsi Attunuru

This patch adds a routine that populates a mempool from page-aligned, page-sized chunks of memory, ensuring that mempool objects do not fall across page boundaries.
It is useful for applications that require physically contiguous mbuf memory while running in IOVA=VA mode.

Signed-off-by: Vamsi Attunuru
Signed-off-by: Kiran Kumar K
---
 lib/librte_mempool/rte_mempool.c           | 65 ++++++++++++++++++++++++++++++
 lib/librte_mempool/rte_mempool.h           | 17 +++++++++
 lib/librte_mempool/rte_mempool_version.map |  1 +
 3 files changed, 83 insertions(+)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 7260ce0..5312c8f 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -414,6 +414,71 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 	return ret;
 }
 
+/* Populate the mempool from page-sized memory chunks: reserve page-sized
+ * memzones and populate the mempool from them. Return the number of objects
+ * added, or a negative value on error.
+ */
+int
+rte_mempool_populate_from_pg_sz_chunks(struct rte_mempool *mp)
+{
+	char mz_name[RTE_MEMZONE_NAMESIZE];
+	size_t align, pg_sz, pg_shift;
+	const struct rte_memzone *mz;
+	unsigned int mz_id, n;
+	size_t chunk_size;
+	int ret;
+
+	ret = mempool_ops_alloc_once(mp);
+	if (ret != 0)
+		return ret;
+
+	if (mp->nb_mem_chunks != 0)
+		return -EEXIST;
+
+	pg_sz = get_min_page_size(mp->socket_id);
+	pg_shift = rte_bsf32(pg_sz);
+
+	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
+
+		rte_mempool_op_calc_mem_size_default(mp, n, pg_shift,
+				&chunk_size, &align);
+
+		if (chunk_size > pg_sz) {
+			ret = -EINVAL;
+			goto fail;
+		}
+
+		ret = snprintf(mz_name, sizeof(mz_name),
+			RTE_MEMPOOL_MZ_FORMAT "_%d", mp->name, mz_id);
+		if (ret < 0 || ret >= (int)sizeof(mz_name)) {
+			ret = -ENAMETOOLONG;
+			goto fail;
+		}
+
+		mz = rte_memzone_reserve_aligned(mz_name, chunk_size,
+				mp->socket_id, 0, align);
+		if (mz == NULL) {
+			ret = -rte_errno;
+			goto fail;
+		}
+
+		ret = rte_mempool_populate_iova(mp, mz->addr,
+				mz->iova, mz->len,
+				rte_mempool_memchunk_mz_free,
+				(void *)(uintptr_t)mz);
+		if (ret < 0) {
+			rte_memzone_free(mz);
+			goto fail;
+		}
+	}
+
+	return mp->size;
+
+fail:
+	rte_mempool_free_memchunks(mp);
+	return ret;
+}
+
 /* Default function to populate the mempool: allocate memory in memzones,
  * and populate them. Return the number of objects added, or a negative
  * value on error.
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 8053f7a..73d6ada 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -1064,6 +1064,23 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 /**
  * Add memory for objects in the pool at init
  *
+ * This function populates the mempool with page-aligned memzone memory. It
+ * ensures that no mempool object spans a page boundary by allocating
+ * page-sized memzones.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @return
+ *   The number of objects added on success.
+ *   On error, the chunk is not added to the memory list of the
+ *   mempool and a negative errno is returned.
+ */
+__rte_experimental
+int rte_mempool_populate_from_pg_sz_chunks(struct rte_mempool *mp);
+
+/**
+ * Add memory for objects in the pool at init
+ *
  * This is the default function used by rte_mempool_create() to populate
  * the mempool. It adds memory allocated using rte_memzone_reserve().
  *
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 17cbca4..9a6fe65 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -57,4 +57,5 @@ EXPERIMENTAL {
 	global:
 
 	rte_mempool_ops_get_info;
+	rte_mempool_populate_from_pg_sz_chunks;
 };
-- 
2.8.4