Subject: Re: [PATCH 4/6] mm: hugetlb_vmemmap: add missing smp_wmb() before set_pte_at()
From: Miaohe Lin
To: Muchun Song, Yin Fengwei
Cc: Andrew Morton, Mike Kravetz, Muchun Song, Linux MM
Date: Fri, 19 Aug 2022 15:26:35 +0800
On 2022/8/19 11:19, Muchun Song wrote:
>
>> On Aug 18, 2022, at 20:58, Miaohe Lin wrote:
>>
>> On 2022/8/18 17:18, Muchun Song wrote:
>>>
>>>> On Aug 18, 2022, at 16:54, Yin, Fengwei wrote:
>>>>
>>>> On 8/18/2022 4:40 PM, Muchun Song wrote:
>>>>>
>>>>>> On Aug 18, 2022, at 16:32, Yin, Fengwei wrote:
>>>>>>
>>>>>> On 8/18/2022 3:59 PM, Muchun Song wrote:
>>>>>>>
>>>>>>>> On Aug 18, 2022, at 15:52, Miaohe Lin wrote:
>>>>>>>>
>>>>>>>> On 2022/8/18 10:47, Muchun Song wrote:
>>>>>>>>>
>>>>>>>>>> On Aug 18, 2022, at 10:00, Yin, Fengwei wrote:
>>>>>>>>>>
>>>>>>>>>> On 8/18/2022 9:55 AM, Miaohe Lin wrote:
>>>>>>>>>>>>>> /*
>>>>>>>>>>>>>>  * The memory barrier inside __SetPageUptodate makes sure that
>>>>>>>>>>>>>>  * preceding stores to the page contents become visible before
>>>>>>>>>>>>>>  * the set_pte_at() write.
>>>>>>>>>>>>>>  */
>>>>>>>>>>>>>> __SetPageUptodate(page);
>>>>>>>>>>>>> IIUC, in this case we should make sure that other CPUs can see the new
>>>>>>>>>>>>> page's contents after they have seen that PG_uptodate is set. I think
>>>>>>>>>>>>> commit 0ed361dec369 can tell us more details.
>>>>>>>>>>>>>
>>>>>>>>>>>>> I also looked at commit 52f37629fd3c to see why we need a barrier before
>>>>>>>>>>>>> set_pte_at(), but I didn't find any info to explain why. I guess we want
>>>>>>>>>>>>> to enforce the ordering between the page's contents and subsequent memory
>>>>>>>>>>>>> accesses through the corresponding virtual address. Do you agree with this?
>>>>>>>>>>>> This is my understanding also. Thanks.
>>>>>>>>>>> That's also my understanding. Thanks both.
>>>>>>>>>> I have an unclear point (not related to this patch directly): who is
>>>>>>>>>> responsible for the read barrier on the read side in this case?
>>>>>>>>>>
>>>>>>>>>> For SetPageUptodate, there are pairing write/read memory barriers.
>>>>>>>>>
>>>>>>>>> I have the same question. So I think the example proposed by Miaohe is a
>>>>>>>>> little different from the case (hugetlb_vmemmap) here.
>>>>>>>>
>>>>>>>> Per my understanding, the memory barrier in PageUptodate() is needed because
>>>>>>>> the user might access the page contents via page_address() (the corresponding
>>>>>>>> page table entry already exists) soon afterwards. But in the case above, if
>>>>>>>> the user wants to access the page contents, the corresponding page table
>>>>>>>> entry has to become visible first, or the page contents cannot be accessed
>>>>>>>> at all. So there should be a data dependency acting as a memory barrier
>>>>>>>> between loading the page table entry and accessing the page contents.
>>>>>>>> Or am I missing something?
>>>>>>>
>>>>>>> Yep, it is a data dependency. The difference between hugetlb_vmemmap and
>>>>>>> PageUptodate() is that the page table (a pointer to the mapped page frame) is
>>>>>>> loaded by the MMU, while PageUptodate() is loaded by the CPU. It seems the
>>>>>>> data dependency should be inserted between the MMU access and the CPU access.
>>>>>>> Maybe it is the hardware's guarantee?
>>>>>> I just found the comment in pmd_install() that explains why most architectures
>>>>>> have no read barrier.
>>>>>
>>>>> I think pmd_install() is a little different as well. We should make sure the
>>>>> page table walker (like GUP) sees the correct PTE entry after it sees the pmd
>>>>> entry.
>>>>
>>>> The difference I can see is that the pmd/pte case has both a hardware page table
>>>> walker and a software page table walker (like GUP) on the read side, while the
>>>> case here only has the hardware page table walker on the read side. But I
>>>> suppose the memory barrier requirement still applies here.
>>>
>>> I am not against this change. This is just so I can get a better understanding
>>> of the hardware behavior.
>>>
>>>> Maybe we could do a test: add a large delay between reset_struct_pages() and
>>>> set_pte_at()?
>>>
>>> Hi Miaohe,
>>>
>>> Would you mind doing this test? One thread does vmemmap_restore_pte(), and
>>> another thread detects whether it can see a tail page with PG_head after the
>>> previous thread has executed set_pte_at().
>>
>> Would it be easier to construct the memory reordering manually, like below?
>>
>> vmemmap_restore_pte()
>>     ...
>>     set_pte_at(&init_mm, addr, pte, mk_pte(page, pgprot));
>>     /* might be a delay. */
>>     copy_page(to, (void *)walk->reuse_addr);
>>     reset_struct_pages(to);
>
> Well, you have changed the code ordering. I thought we would not change the code
> ordering and would just let the hardware do the reordering. The ideal scenario
> would be as follows:
>
> CPU0:                                                   CPU1:
>
> vmemmap_restore_pte()
>     copy_page(to, (void *)walk->reuse_addr);
>     reset_struct_pages(to);  // clear the tail page's PG_head
>     set_pte_at(&init_mm, addr, pte, mk_pte(page, pgprot));
>                                                         // Detect if it can see a
>                                                         // tail page with PG_head.
>
> I should admit it is a little difficult to construct this scenario. After more
> thought, I think a barrier should be inserted here. So:
>
> Reviewed-by: Muchun Song

Many thanks both for review and discussion. :)

Thanks,
Miaohe Lin