Subject: Re: [PATCH v6 1/2] mm/tlbbatch: Introduce arch_tlbbatch_should_defer()
From: haoxin <xhao@linux.alibaba.com>
Date: Tue, 15 Nov 2022 14:44:44 +0800
To: Yicong Yang, akpm@linux-foundation.org, linux-mm@kvack.org,
 linux-arm-kernel@lists.infradead.org, x86@kernel.org,
 catalin.marinas@arm.com, will@kernel.org, anshuman.khandual@arm.com,
 linux-doc@vger.kernel.org
Cc: corbet@lwn.net, peterz@infradead.org, arnd@arndb.de,
 punit.agrawal@bytedance.com, linux-kernel@vger.kernel.org,
 darren@os.amperecomputing.com, yangyicong@hisilicon.com,
 huzhanyuan@oppo.com, lipeifeng@oppo.com, zhangshiming@oppo.com,
 guojian@oppo.com, realmz6@gmail.com, linux-mips@vger.kernel.org,
 openrisc@lists.librecores.org, linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
 Barry Song <21cnbao@gmail.com>, wangkefeng.wang@huawei.com,
 prime.zeng@hisilicon.com, Anshuman Khandual, Barry Song
In-Reply-To: <20221115031425.44640-2-yangyicong@huawei.com>
References: <20221115031425.44640-1-yangyicong@huawei.com>
 <20221115031425.44640-2-yangyicong@huawei.com>

On 2022/11/15 at 11:14 AM, Yicong Yang wrote:
> From: Anshuman Khandual
>
> The entire scheme of deferred TLB flush in the reclaim path rests on
> the fact that the cost of refilling TLB entries is less than that of
> flushing out individual entries by sending IPIs to remote CPUs. But
> architectures can have different ways to evaluate that. Hence, apart
> from checking TTU_BATCH_FLUSH in the TTU flags, the rest of the
> decision should be architecture specific.
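
The per-arch split makes sense to me. To illustrate the point about
architectures evaluating this differently: on hardware that broadcasts
TLB invalidation (so no IPIs are ever sent), the hook could presumably
collapse to "always defer". A minimal sketch, assuming such hardware;
illustrative only, not taken from this series:

static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
{
	/*
	 * Hypothetical sketch: hardware TLB broadcast means no remote
	 * IPIs are needed, so batching the flush is always a win.
	 */
	return true;
}
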
>
> Signed-off-by: Anshuman Khandual
> [https://lore.kernel.org/linuxppc-dev/20171101101735.2318-2-khandual@linux.vnet.ibm.com/]
> Signed-off-by: Yicong Yang
> [Rebase and fix incorrect return value type]
> Reviewed-by: Kefeng Wang
> Reviewed-by: Anshuman Khandual
> Reviewed-by: Barry Song
> Tested-by: Punit Agrawal
> ---
>  arch/x86/include/asm/tlbflush.h | 12 ++++++++++++
>  mm/rmap.c                       |  9 +--------
>  2 files changed, 13 insertions(+), 8 deletions(-)
>
> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
> index cda3118f3b27..8a497d902c16 100644
> --- a/arch/x86/include/asm/tlbflush.h
> +++ b/arch/x86/include/asm/tlbflush.h
> @@ -240,6 +240,18 @@ static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long a)
>  	flush_tlb_mm_range(vma->vm_mm, a, a + PAGE_SIZE, PAGE_SHIFT, false);
>  }
>
> +static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
> +{
> +	bool should_defer = false;
> +
> +	/* If remote CPUs need to be flushed then defer batch the flush */
> +	if (cpumask_any_but(mm_cpumask(mm), get_cpu()) < nr_cpu_ids)
> +		should_defer = true;
> +	put_cpu();
> +
> +	return should_defer;
> +}
> +
>  static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
>  {
>  	/*
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 2ec925e5fa6a..a9ab10bc0144 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -685,17 +685,10 @@ static void set_tlb_ubc_flush_pending(struct mm_struct *mm, bool writable)
>   */
>  static bool should_defer_flush(struct mm_struct *mm, enum ttu_flags flags)
>  {
> -	bool should_defer = false;
> -
>  	if (!(flags & TTU_BATCH_FLUSH))
>  		return false;
>
> -	/* If remote CPUs need to be flushed then defer batch the flush */
> -	if (cpumask_any_but(mm_cpumask(mm), get_cpu()) < nr_cpu_ids)
> -		should_defer = true;
> -	put_cpu();
> -
> -	return should_defer;
> +	return arch_tlbbatch_should_defer(mm);
>  }
>

LGTM, thanks

Reviewed-by: Xin Hao

> /*