From: Zhenyu Ye
To: Peter Zijlstra
Subject: Re: [RFC PATCH v5 4/8] mm: tlb: Pass struct mmu_gather to flush_pmd_tlb_range
Date: Fri, 3 Apr 2020 13:14:21 +0800
In-Reply-To: <20200402163849.GM20713@hirez.programming.kicks-ass.net>
References: <20200331142927.1237-1-yezhenyu2@huawei.com>
 <20200331142927.1237-5-yezhenyu2@huawei.com>
 <20200331151331.GS20730@hirez.programming.kicks-ass.net>
 <20200401122004.GE20713@hirez.programming.kicks-ass.net>
 <53675fb9-21c7-5309-07b8-1bbc1e775f9b@huawei.com>
 <20200402163849.GM20713@hirez.programming.kicks-ass.net>

Hi Peter,

On 2020/4/3 0:38, Peter Zijlstra wrote:
> On Thu, Apr 02, 2020 at 07:24:04PM +0800, Zhenyu Ye wrote:
>> Thanks for your detailed explanation.  I notice that you used
>> `tlb_end_vma` to replace `flush_tlb_range`; tlb_end_vma() will call
>> `tlb_flush`, which in the generic code finally calls `flush_tlb_range`.
>> However, some architectures define tlb_end_vma|tlb_flush|flush_tlb_range
>> themselves, so this may cause problems.
>>
>> For example, s390 defines:
>>
>> #define tlb_end_vma(tlb, vma) do { } while (0)
>>
>> and it doesn't define its own flush_pmd_tlb_range().  So there would be
>> a mistake if we changed flush_pmd_tlb_range() to use tlb_end_vma().
>>
>> Is this really a problem, or is it something I have misunderstood?
>
> If tlb_end_vma() is a no-op, then tlb_finish_mmu() will do:
>   tlb_flush_mmu() -> tlb_flush_mmu_tlbonly() -> tlb_flush()
>
> And s390 has tlb_flush().
>
> If tlb_end_vma() is not a no-op and it calls tlb_flush_mmu_tlbonly(),
> then tlb_finish_mmu()'s invocation of tlb_flush_mmu_tlbonly() will
> terminate early because no flags are set.
>
> IOW, it should all just work.
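
Just to confirm I follow that flow, I wrote the little userspace model
below for myself (simplified and purely illustrative; the names only
mimic the real mmu_gather code).  Once the pending flags and range have
been flushed and reset, a second tlb_flush_mmu_tlbonly() finds nothing
pending and returns early:

#include <stdbool.h>
#include <stdio.h>

struct toy_mmu_gather {
	bool cleared_ptes;
	bool cleared_pmds;
	unsigned long start, end;
};

/* Stand-in for the architecture's tlb_flush(). */
static void toy_tlb_flush(struct toy_mmu_gather *tlb)
{
	printf("arch tlb_flush() for [%#lx, %#lx)\n", tlb->start, tlb->end);
}

static void toy_tlb_flush_mmu_tlbonly(struct toy_mmu_gather *tlb)
{
	/* Nothing accumulated since the last flush: terminate early. */
	if (!tlb->cleared_ptes && !tlb->cleared_pmds)
		return;

	toy_tlb_flush(tlb);

	/* Reset the accumulated state, like a __tlb_reset_range() would. */
	tlb->cleared_ptes = false;
	tlb->cleared_pmds = false;
	tlb->start = tlb->end = 0;
}

int main(void)
{
	struct toy_mmu_gather tlb = {
		.cleared_pmds = true,
		.start = 0x200000,
		.end   = 0x400000,
	};

	toy_tlb_flush_mmu_tlbonly(&tlb);  /* e.g. from a non-no-op tlb_end_vma() */
	toy_tlb_flush_mmu_tlbonly(&tlb);  /* later, from tlb_finish_mmu(): no-op */
	return 0;
}

So whichever of tlb_end_vma() or tlb_finish_mmu() flushes first, the
later call sees nothing pending and does no extra work -- that matches
what you describe.
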
> FYI the whole tlb_{start,end}_vma() thing is only needed when the
> architecture doesn't implement tlb_flush() and instead defaults to using
> flush_tlb_range(), at which point we need to provide a 'fake' vma.
>
> At the time I audited all architectures and they only look at VM_EXEC
> (to do $I invalidation) and VM_HUGETLB (for pmd level invalidations),
> but I forgot which architectures those were.

Many architectures do, such as alpha, arc, arm and so on.  I can really
understand why you hate making vma->vm_flags more important for tlbi :).

> But that is all legacy code; eventually we'll get all archs a native
> tlb_flush() and this can go away.

Thanks for your reply.  For now, to enable the TTL feature, extending the
flush_*tlb_range() interfaces may be more convenient.  I will send a
formal PATCH soon.

Thanks,
Zhenyu
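
P.S.  To make "extending the flush_*tlb_range() interfaces" a bit more
concrete, below is a rough userspace sketch of one possible shape: pass a
page-table level hint down with the range so the arch code could fill in
the ARMv8.4-TTL field.  All names and signatures here are hypothetical
and simplified; the actual patch will differ.

#include <stdio.h>

/* Hypothetical level hint carried alongside the range to be flushed. */
enum tlb_level {
	TLB_LEVEL_UNKNOWN = 0,	/* no hint: behave exactly as today */
	TLB_LEVEL_PTE,
	TLB_LEVEL_PMD,
	TLB_LEVEL_PUD,
};

/* Stand-in for the arch back end; a real arm64 version would encode the
 * level into the TLBI instruction's TTL field instead of printing it. */
static void arch_flush_tlb_range_level(unsigned long start, unsigned long end,
				       enum tlb_level level)
{
	printf("flush [%#lx, %#lx), level hint %d\n", start, end, (int)level);
}

/* Hypothetical extended generic range-flush helper. */
static void flush_tlb_range_level(unsigned long start, unsigned long end,
				  enum tlb_level level)
{
	arch_flush_tlb_range_level(start, end, level);
}

/* flush_pmd_tlb_range() would then simply pass the PMD level hint. */
static void flush_pmd_tlb_range(unsigned long start, unsigned long end)
{
	flush_tlb_range_level(start, end, TLB_LEVEL_PMD);
}

int main(void)
{
	flush_pmd_tlb_range(0x200000UL, 0x400000UL);
	return 0;
}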