From: David Marchand
To: dev@dpdk.org
Cc: jerinjacobk@gmail.com, bruce.richardson@intel.com, mdr@ashroe.eu, ktraynor@redhat.com, ian.stokes@intel.com, i.maximets@ovn.org, "Artem V. Andreev", Andrew Rybchenko
Date: Fri, 19 Jun 2020 18:22:44 +0200
Message-Id: <20200619162244.8239-10-david.marchand@redhat.com>
In-Reply-To: <20200619162244.8239-1-david.marchand@redhat.com>
References: <20200610144506.30505-1-david.marchand@redhat.com> <20200619162244.8239-1-david.marchand@redhat.com>
Subject: [dpdk-dev] [PATCH v2 9/9] mempool/bucket: handle non-EAL lcores
List-Id: DPDK patches and discussions

Convert to new lcore API to support non-EAL lcores.
Signed-off-by: David Marchand
---
 drivers/mempool/bucket/rte_mempool_bucket.c | 131 ++++++++++++--------
 1 file changed, 82 insertions(+), 49 deletions(-)

diff --git a/drivers/mempool/bucket/rte_mempool_bucket.c b/drivers/mempool/bucket/rte_mempool_bucket.c
index 5ce1ef16fb..0b4f42d330 100644
--- a/drivers/mempool/bucket/rte_mempool_bucket.c
+++ b/drivers/mempool/bucket/rte_mempool_bucket.c
@@ -55,6 +55,7 @@ struct bucket_data {
 	struct rte_ring *shared_orphan_ring;
 	struct rte_mempool *pool;
 	unsigned int bucket_mem_size;
+	void *lcore_callback_handle;
 };
 
 static struct bucket_stack *
@@ -345,6 +346,22 @@ bucket_dequeue_contig_blocks(struct rte_mempool *mp, void **first_obj_table,
 	return 0;
 }
 
+struct bucket_per_lcore_ctx {
+	const struct bucket_data *bd;
+	unsigned int count;
+};
+
+static int
+count_per_lcore(unsigned int lcore_id, void *arg)
+{
+	struct bucket_per_lcore_ctx *ctx = arg;
+
+	ctx->count += ctx->bd->obj_per_bucket *
+		ctx->bd->buckets[lcore_id]->top;
+	ctx->count += rte_ring_count(ctx->bd->adoption_buffer_rings[lcore_id]);
+	return 0;
+}
+
 static void
 count_underfilled_buckets(struct rte_mempool *mp,
 			  void *opaque,
@@ -373,23 +390,66 @@ count_underfilled_buckets(struct rte_mempool *mp,
 static unsigned int
 bucket_get_count(const struct rte_mempool *mp)
 {
-	const struct bucket_data *bd = mp->pool_data;
-	unsigned int count =
-		bd->obj_per_bucket * rte_ring_count(bd->shared_bucket_ring) +
-		rte_ring_count(bd->shared_orphan_ring);
-	unsigned int i;
+	struct bucket_per_lcore_ctx ctx;
 
-	for (i = 0; i < RTE_MAX_LCORE; i++) {
-		if (!rte_lcore_is_enabled(i))
-			continue;
-		count += bd->obj_per_bucket * bd->buckets[i]->top +
-			rte_ring_count(bd->adoption_buffer_rings[i]);
-	}
+	ctx.bd = mp->pool_data;
+	ctx.count = ctx.bd->obj_per_bucket *
+		rte_ring_count(ctx.bd->shared_bucket_ring);
+	ctx.count += rte_ring_count(ctx.bd->shared_orphan_ring);
+	rte_lcore_iterate(count_per_lcore, &ctx);
 
 	rte_mempool_mem_iter((struct rte_mempool *)(uintptr_t)mp,
-			     count_underfilled_buckets, &count);
+			     count_underfilled_buckets, &ctx.count);
+
+	return ctx.count;
+}
+
+static int
+bucket_init_per_lcore(unsigned int lcore_id, void *arg)
+{
+	char rg_name[RTE_RING_NAMESIZE];
+	struct bucket_data *bd = arg;
+	struct rte_mempool *mp;
+	int rg_flags;
+	int rc;
+
+	mp = bd->pool;
+	bd->buckets[lcore_id] = bucket_stack_create(mp,
+		mp->size / bd->obj_per_bucket);
+	if (bd->buckets[lcore_id] == NULL)
+		goto error;
+
+	rc = snprintf(rg_name, sizeof(rg_name), RTE_MEMPOOL_MZ_FORMAT ".a%u",
+		mp->name, lcore_id);
+	if (rc < 0 || rc >= (int)sizeof(rg_name))
+		goto error;
+
+	rg_flags = RING_F_SC_DEQ;
+	if (mp->flags & MEMPOOL_F_SP_PUT)
+		rg_flags |= RING_F_SP_ENQ;
+	if (mp->flags & MEMPOOL_F_SC_GET)
+		rg_flags |= RING_F_SC_DEQ;
+	bd->adoption_buffer_rings[lcore_id] = rte_ring_create(rg_name,
+		rte_align32pow2(mp->size + 1), mp->socket_id, rg_flags);
+	if (bd->adoption_buffer_rings[lcore_id] == NULL)
+		goto error;
 
-	return count;
+	return 0;
+error:
+	rte_free(bd->buckets[lcore_id]);
+	bd->buckets[lcore_id] = NULL;
+	return -1;
+}
+
+static void
+bucket_uninit_per_lcore(unsigned int lcore_id, void *arg)
+{
+	struct bucket_data *bd = arg;
+
+	rte_ring_free(bd->adoption_buffer_rings[lcore_id]);
+	bd->adoption_buffer_rings[lcore_id] = NULL;
+	rte_free(bd->buckets[lcore_id]);
+	bd->buckets[lcore_id] = NULL;
 }
 
 static int
@@ -399,7 +459,6 @@ bucket_alloc(struct rte_mempool *mp)
 	int rc = 0;
 	char rg_name[RTE_RING_NAMESIZE];
 	struct bucket_data *bd;
-	unsigned int i;
 	unsigned int bucket_header_size;
 	size_t pg_sz;
 
@@ -429,36 +488,17 @@ bucket_alloc(struct rte_mempool *mp)
 	/* eventually this should be a tunable parameter */
 	bd->bucket_stack_thresh = (mp->size / bd->obj_per_bucket) * 4 / 3;
 
+	bd->lcore_callback_handle = rte_lcore_callback_register("bucket",
+		bucket_init_per_lcore, bucket_uninit_per_lcore, bd);
+	if (bd->lcore_callback_handle == NULL) {
+		rc = -ENOMEM;
+		goto no_mem_for_stacks;
+	}
+
 	if (mp->flags & MEMPOOL_F_SP_PUT)
 		rg_flags |= RING_F_SP_ENQ;
 	if (mp->flags & MEMPOOL_F_SC_GET)
 		rg_flags |= RING_F_SC_DEQ;
-
-	for (i = 0; i < RTE_MAX_LCORE; i++) {
-		if (!rte_lcore_is_enabled(i))
-			continue;
-		bd->buckets[i] =
-			bucket_stack_create(mp, mp->size / bd->obj_per_bucket);
-		if (bd->buckets[i] == NULL) {
-			rc = -ENOMEM;
-			goto no_mem_for_stacks;
-		}
-		rc = snprintf(rg_name, sizeof(rg_name),
-			      RTE_MEMPOOL_MZ_FORMAT ".a%u", mp->name, i);
-		if (rc < 0 || rc >= (int)sizeof(rg_name)) {
-			rc = -ENAMETOOLONG;
-			goto no_mem_for_stacks;
-		}
-		bd->adoption_buffer_rings[i] =
-			rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
-					mp->socket_id,
-					rg_flags | RING_F_SC_DEQ);
-		if (bd->adoption_buffer_rings[i] == NULL) {
-			rc = -rte_errno;
-			goto no_mem_for_stacks;
-		}
-	}
-
 	rc = snprintf(rg_name, sizeof(rg_name),
 		      RTE_MEMPOOL_MZ_FORMAT ".0", mp->name);
 	if (rc < 0 || rc >= (int)sizeof(rg_name)) {
@@ -498,11 +538,8 @@ bucket_alloc(struct rte_mempool *mp)
 	rte_ring_free(bd->shared_orphan_ring);
 cannot_create_shared_orphan_ring:
 invalid_shared_orphan_ring:
+	rte_lcore_callback_unregister(bd->lcore_callback_handle);
 no_mem_for_stacks:
-	for (i = 0; i < RTE_MAX_LCORE; i++) {
-		rte_free(bd->buckets[i]);
-		rte_ring_free(bd->adoption_buffer_rings[i]);
-	}
 	rte_free(bd);
 no_mem_for_data:
 	rte_errno = -rc;
@@ -512,16 +549,12 @@ bucket_alloc(struct rte_mempool *mp)
 static void
 bucket_free(struct rte_mempool *mp)
 {
-	unsigned int i;
 	struct bucket_data *bd = mp->pool_data;
 
 	if (bd == NULL)
 		return;
 
-	for (i = 0; i < RTE_MAX_LCORE; i++) {
-		rte_free(bd->buckets[i]);
-		rte_ring_free(bd->adoption_buffer_rings[i]);
-	}
+	rte_lcore_callback_unregister(bd->lcore_callback_handle);
 	rte_ring_free(bd->shared_orphan_ring);
 	rte_ring_free(bd->shared_bucket_ring);
-- 
2.23.0