Subject: Re: [PATCH v4 RESEND 3/5] perf/x86/lbr: Move cpuc->lbr_xsave allocation out of sleeping region
To: Namhyung Kim
Cc: Kan Liang, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
 Mark Rutland, Alexander Shishkin, Jiri Olsa, Thomas Gleixner,
 Borislav Petkov, x86@kernel.org, linux-kernel
References: <20210322060635.821531-1-like.xu@linux.intel.com>
 <20210322060635.821531-4-like.xu@linux.intel.com>
 <5fda3599-1b51-5f58-fdcc-2afcf6d4968b@linux.intel.com>
From: Like Xu
Organization: Intel OTC
Message-ID: <103ad691-4ea8-6fcb-afcc-c69e3abcd1f6@linux.intel.com>
Date: Wed, 24 Mar 2021 13:42:24 +0800
X-Mailing-List: linux-kernel@vger.kernel.org

On 2021/3/24 12:04, Namhyung Kim wrote:
> On Wed, Mar 24, 2021 at 12:47 PM Like Xu wrote:
>>
>> Hi Namhyung,
>>
>> On 2021/3/24 9:32, Namhyung Kim wrote:
>>> Hello,
>>>
>>> On Mon, Mar 22, 2021 at 3:14 PM Like Xu wrote:
>>>> +void reserve_lbr_buffers(struct perf_event *event)
>>>> +{
>>>> +	struct kmem_cache *kmem_cache = x86_get_pmu()->task_ctx_cache;
>>>> +	struct cpu_hw_events *cpuc;
>>>> +	int cpu;
>>>> +
>>>> +	if (!static_cpu_has(X86_FEATURE_ARCH_LBR))
>>>> +		return;
>>>> +
>>>> +	for_each_possible_cpu(cpu) {
>>>> +		cpuc = per_cpu_ptr(&cpu_hw_events, cpu);
>>>> +		if (kmem_cache && !cpuc->lbr_xsave && !event->attr.precise_ip)
>>>> +			cpuc->lbr_xsave = kmem_cache_alloc(kmem_cache, GFP_KERNEL);
>>>> +	}
>>>> +}
>>>
>>> I think we should use kmem_cache_alloc_node().
>>
>> "kmem_cache_alloc_node - Allocate an object on the specified node"
>>
>> reserve_lbr_buffers() is called in __x86_pmu_event_init().
>> When the LBR perf_event is scheduled to another node, it seems
>> that we will not call init() and allocate again.
>>
>> Do you mean use kmem_cache_alloc_node() for each numa_nodes_parsed?
>
> I assume cpuc->lbr_xsave will be accessed for that cpu only.
> Then it needs to allocate it in the node that cpu belongs to.
> Something like below..
>
> cpuc->lbr_xsave = kmem_cache_alloc_node(kmem_cache, GFP_KERNEL,
>                                         cpu_to_node(cpu));

Thanks, it helps and I will apply it in the next version.

> Thanks,
> Namhyung
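
For the record, with your suggestion folded in, the allocation loop in
reserve_lbr_buffers() would look roughly like this (just a sketch against
the v4 hunk above, not yet compiled or tested):

	for_each_possible_cpu(cpu) {
		cpuc = per_cpu_ptr(&cpu_hw_events, cpu);
		if (kmem_cache && !cpuc->lbr_xsave && !event->attr.precise_ip)
			/* allocate on the node this CPU belongs to */
			cpuc->lbr_xsave =
				kmem_cache_alloc_node(kmem_cache, GFP_KERNEL,
						      cpu_to_node(cpu));
	}

That way each CPU's lbr_xsave buffer lands on its own node, rather than on
whichever node the event happened to be initialized.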