From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 28 Jun 2022 11:58:32 +0200
In-Reply-To: <20220628095833.2579903-1-elver@google.com>
Message-Id: <20220628095833.2579903-13-elver@google.com>
Mime-Version: 1.0
References: <20220628095833.2579903-1-elver@google.com>
X-Mailer: git-send-email 2.37.0.rc0.161.g10f37bed90-goog
Subject: [PATCH v2 12/13] perf/hw_breakpoint: Optimize max_bp_pinned_slots() for CPU-independent task targets
From: Marco Elver <elver@google.com>
To: elver@google.com, Peter Zijlstra, Frederic Weisbecker, Ingo Molnar
Cc: Thomas Gleixner, Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, Dmitry Vyukov, Michael Ellerman, linuxppc-dev@lists.ozlabs.org, linux-perf-users@vger.kernel.org, x86@kernel.org, linux-sh@vger.kernel.org, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"
X-Mailing-List: linux-sh@vger.kernel.org

Running the perf benchmark with (note: more aggressive parameters vs.
preceding changes, but same 256 CPUs host):

 | $> perf bench -r 100 breakpoint thread -b 4 -p 128 -t 512
 | # Running 'breakpoint/thread' benchmark:
 | # Created/joined 100 threads with 4 breakpoints and 128 parallelism
 | Total time: 1.989 [sec]
 |
 |       38.854160 usecs/op
 |     4973.332500 usecs/op/cpu

    20.43%  [kernel]  [k] queued_spin_lock_slowpath
    18.75%  [kernel]  [k] osq_lock
    16.98%  [kernel]  [k] rhashtable_jhash2
     8.34%  [kernel]  [k] task_bp_pinned
     4.23%  [kernel]  [k] smp_cfm_core_cond
     3.65%  [kernel]  [k] bcmp
     2.83%  [kernel]  [k] toggle_bp_slot
     1.87%  [kernel]  [k] find_next_bit
     1.49%  [kernel]  [k] __reserve_bp_slot

We can see that the majority of the time is now spent hashing task
pointers to index into task_bps_ht in task_bp_pinned().

Computing max_bp_pinned_slots() for CPU-independent task targets is
currently O(#cpus): it calls task_bp_pinned() once per CPU, even when
the result of task_bp_pinned() is CPU-independent.

The loop in max_bp_pinned_slots() wants the maximum number of pinned
slots across all CPUs. If task_bp_pinned() is CPU-independent, we can
obtain this by taking the maximum number of CPU-pinned slots across
all CPUs and adding task_bp_pinned() on top. To make that lookup O(1),
use a bp_slots_histogram for CPU-pinned slots, as sketched below.
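
For illustration, here is a simplified userspace model of such a
histogram. This is only a sketch: the real bp_slots_histogram (added
earlier in this series) uses atomics and sizes its buckets per
breakpoint type, but the O(1)-in-#cpus max lookup works the same way.
BP_MAX_SLOTS and the histogram_* names below are hypothetical.

  /*
   * Simplified model of a slots histogram -- illustration only, not
   * the kernel implementation. count[i] is how many CPU-pinned
   * breakpoint targets currently occupy exactly i + 1 slots.
   */
  #include <assert.h>

  #define BP_MAX_SLOTS 4 /* assumption: 4 breakpoint slots per CPU */

  struct bp_slots_histogram_model {
          int count[BP_MAX_SLOTS];
  };

  /*
   * Maximum pinned slots on any CPU: the highest non-empty bucket.
   * O(BP_MAX_SLOTS), i.e. independent of the number of CPUs.
   */
  static int histogram_max(const struct bp_slots_histogram_model *hist)
  {
          for (int i = BP_MAX_SLOTS - 1; i >= 0; i--) {
                  if (hist->count[i] > 0)
                          return i + 1;
          }
          return 0;
  }

  /*
   * A CPU's pinned count changes from @old to @old + @val: move it
   * from its old bucket to its new one (mirrors what
   * bp_slots_histogram_add() does for the real histogram).
   */
  static void histogram_add(struct bp_slots_histogram_model *hist,
                            int old, int val)
  {
          int old_idx = old - 1;
          int new_idx = old_idx + val;

          if (old_idx >= 0) {
                  hist->count[old_idx]--;
                  assert(hist->count[old_idx] >= 0);
          }
          if (new_idx >= 0)
                  hist->count[new_idx]++;
  }

With cpu_pinned[type] kept up to date this way by toggle_bp_slot(),
the fast path in max_bp_pinned_slots() reduces to
task_bp_pinned(-1, bp, type) plus the histogram maximum.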
After this optimization:

 | $> perf bench -r 100 breakpoint thread -b 4 -p 128 -t 512
 | # Running 'breakpoint/thread' benchmark:
 | # Created/joined 100 threads with 4 breakpoints and 128 parallelism
 | Total time: 1.930 [sec]
 |
 |       37.697832 usecs/op
 |     4825.322500 usecs/op/cpu

    19.13%  [kernel]  [k] queued_spin_lock_slowpath
    18.21%  [kernel]  [k] rhashtable_jhash2
    15.46%  [kernel]  [k] osq_lock
     6.27%  [kernel]  [k] toggle_bp_slot
     5.91%  [kernel]  [k] task_bp_pinned
     5.05%  [kernel]  [k] smp_cfm_core_cond
     1.78%  [kernel]  [k] update_sg_lb_stats
     1.36%  [kernel]  [k] llist_reverse_order
     1.34%  [kernel]  [k] find_next_bit
     1.19%  [kernel]  [k] bcmp

This suggests that the time spent in task_bp_pinned() has been
reduced. However, we're still hashing too much; that will be addressed
in the subsequent change.

Signed-off-by: Marco Elver <elver@google.com>
---
v2:
* New patch.
---
 kernel/events/hw_breakpoint.c | 45 +++++++++++++++++++++++++++++++----
 1 file changed, 41 insertions(+), 4 deletions(-)

diff --git a/kernel/events/hw_breakpoint.c b/kernel/events/hw_breakpoint.c
index 18886f115abc..b5180a2ccfbf 100644
--- a/kernel/events/hw_breakpoint.c
+++ b/kernel/events/hw_breakpoint.c
@@ -64,6 +64,9 @@ static struct bp_cpuinfo *get_bp_info(int cpu, enum bp_type_idx type)
 	return per_cpu_ptr(bp_cpuinfo + type, cpu);
 }
 
+/* Number of pinned CPU breakpoints globally. */
+static struct bp_slots_histogram cpu_pinned[TYPE_MAX];
+
 /* Keep track of the breakpoints attached to tasks */
 static struct rhltable task_bps_ht;
 static const struct rhashtable_params task_bps_ht_params = {
@@ -194,6 +197,10 @@ static __init int init_breakpoint_slots(void)
 			goto err;
 		}
 	}
+	for (i = 0; i < TYPE_MAX; i++) {
+		if (!bp_slots_histogram_alloc(&cpu_pinned[i], i))
+			goto err;
+	}
 
 	return 0;
 err:
@@ -203,6 +210,8 @@ static __init int init_breakpoint_slots(void)
 		if (err_cpu == cpu)
 			break;
 	}
+	for (i = 0; i < TYPE_MAX; i++)
+		bp_slots_histogram_free(&cpu_pinned[i]);
 
 	return -ENOMEM;
 }
@@ -270,6 +279,9 @@ static unsigned int max_task_bp_pinned(int cpu, enum bp_type_idx type)
 /*
  * Count the number of breakpoints of the same type and same task.
  * The given event must be not on the list.
+ *
+ * If @cpu is -1, but the result of task_bp_pinned() is not CPU-independent,
+ * returns a negative value.
  */
 static int task_bp_pinned(int cpu, struct perf_event *bp, enum bp_type_idx type)
 {
@@ -288,9 +300,18 @@ static int task_bp_pinned(int cpu, struct perf_event *bp, enum bp_type_idx type)
 		goto out;
 
 	rhl_for_each_entry_rcu(iter, pos, head, hw.bp_list) {
-		if (find_slot_idx(iter->attr.bp_type) == type &&
-		    (iter->cpu < 0 || cpu == iter->cpu))
-			count += hw_breakpoint_weight(iter);
+		if (find_slot_idx(iter->attr.bp_type) != type)
+			continue;
+
+		if (iter->cpu >= 0) {
+			if (cpu == -1) {
+				count = -1;
+				goto out;
+			} else if (cpu != iter->cpu)
+				continue;
+		}
+
+		count += hw_breakpoint_weight(iter);
 	}
 
 out:
@@ -316,6 +337,19 @@ max_bp_pinned_slots(struct perf_event *bp, enum bp_type_idx type)
 	int pinned_slots = 0;
 	int cpu;
 
+	if (bp->hw.target && bp->cpu < 0) {
+		int max_pinned = task_bp_pinned(-1, bp, type);
+
+		if (max_pinned >= 0) {
+			/*
+			 * Fast path: task_bp_pinned() is CPU-independent and
+			 * returns the same value for any CPU.
+			 */
+			max_pinned += bp_slots_histogram_max(&cpu_pinned[type], type);
+			return max_pinned;
+		}
+	}
+
 	for_each_cpu(cpu, cpumask) {
 		struct bp_cpuinfo *info = get_bp_info(cpu, type);
 		int nr;
@@ -366,8 +400,11 @@ toggle_bp_slot(struct perf_event *bp, bool enable, enum bp_type_idx type,
 
 	/* Pinned counter cpu profiling */
 	if (!bp->hw.target) {
+		struct bp_cpuinfo *info = get_bp_info(bp->cpu, type);
+
 		lockdep_assert_held_write(&bp_cpuinfo_sem);
-		get_bp_info(bp->cpu, type)->cpu_pinned += weight;
+		bp_slots_histogram_add(&cpu_pinned[type], info->cpu_pinned, weight);
+		info->cpu_pinned += weight;
 		return 0;
 	}
 
-- 
2.37.0.rc0.161.g10f37bed90-goog
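
As a usage sketch, a small driver for the hypothetical model from the
commit message above (values invented for illustration) demonstrates
the bookkeeping that toggle_bp_slot() performs on the real histogram:

  #include <stdio.h>

  /* Builds on the histogram model sketched in the commit message. */
  int main(void)
  {
          struct bp_slots_histogram_model hist = { { 0 } };

          histogram_add(&hist, 0, 2); /* CPU 0: 0 -> 2 pinned slots */
          histogram_add(&hist, 0, 1); /* CPU 1: 0 -> 1 pinned slot  */
          printf("max = %d\n", histogram_max(&hist)); /* max = 2 */

          histogram_add(&hist, 2, 1); /* CPU 0: 2 -> 3 pinned slots */
          printf("max = %d\n", histogram_max(&hist)); /* max = 3 */

          return 0;
  }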