From: Baolin Wang <baolin.wang@linux.alibaba.com>
Date: Tue, 23 Aug 2022 18:02:16 +0800
Subject: Re: [PATCH v2 1/5] mm/hugetlb: fix races when looking up a CONT-PTE size hugetlb page
To: David Hildenbrand, akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Message-ID: <376d2e0a-d28a-984b-903c-1f6451b04a15@linux.alibaba.com>
In-Reply-To: <4c24b891-04ce-2608-79d2-a75dc236533f@redhat.com>
References: <0e5d92da043d147a867f634b17acbcc97a7f0e64.1661240170.git.baolin.wang@linux.alibaba.com>
 <4c24b891-04ce-2608-79d2-a75dc236533f@redhat.com>

On 8/23/2022 4:29 PM, David Hildenbrand wrote:
> On 23.08.22 09:50, Baolin Wang wrote:
>> On some architectures (like ARM64), the kernel can support CONT-PTE/PMD
>> size hugetlb, which means it can support not only PMD/PUD size hugetlb
>> pages (2M and 1G), but also CONT-PTE/PMD size pages (64K and 32M) if a
>> 4K base page size is specified.
>>
>> So when looking up a CONT-PTE size hugetlb page by follow_page(), it
>> will use pte_offset_map_lock() to get the pte entry lock for the
>> CONT-PTE size hugetlb in follow_page_pte(). However, this pte entry
>> lock is incorrect for the CONT-PTE size hugetlb, since we should use
>> huge_pte_lock() to get the correct lock, which is mm->page_table_lock.
>>
>> That means the pte entry of the CONT-PTE size hugetlb under the current
>> pte lock is unstable in follow_page_pte(): we can continue to migrate
>> or poison the pte entry of the CONT-PTE size hugetlb, which can cause
>> some potential race issues, and the following pte_xxx() validation is
>> also unstable in follow_page_pte(), even though it is done under the
>> 'pte lock'.
>>
>> Moreover, we should use huge_ptep_get() to get the pte entry value of
>> the CONT-PTE size hugetlb, which already takes into account the
>> subpages' dirty or young bits in case we missed the dirty or young
>> state of the CONT-PTE size hugetlb.
>>
>> To fix the above issues, introduce a new helper follow_huge_pte() to
>> look up a CONT-PTE size hugetlb page, which uses huge_pte_lock() to get
>> the correct pte entry lock to make the pte entry stable, as well as
>> supporting non-present pte handling.
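(For readers not deep in the hugetlb locking code: the lock hugetlb
serializes on is chosen by huge_pte_lockptr() in include/linux/hugetlb.h.
The sketch below is modelled on that helper as it looked around this kernel
version and is slightly simplified, so treat it as an illustration of the
mismatch rather than a verbatim quote.)

static inline spinlock_t *huge_pte_lockptr(struct hstate *h,
                                           struct mm_struct *mm, pte_t *pte)
{
        /* PMD-sized hugetlb pages use the split PMD lock... */
        if (huge_page_size(h) == PMD_SIZE)
                return pmd_lockptr(mm, (pmd_t *) pte);
        /*
         * ...but CONT-PTE (and other non-PMD) sizes fall back to the coarse
         * mm->page_table_lock.  follow_page_pte() instead takes the split
         * PTE-page lock via pte_offset_map_lock(), so the two paths do not
         * exclude each other, which is the race described above.
         */
        return &mm->page_table_lock;
}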
>>
>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>> ---
>>  include/linux/hugetlb.h |  8 ++++++++
>>  mm/gup.c                | 11 ++++++++++
>>  mm/hugetlb.c            | 53 +++++++++++++++++++++++++++++++++++++++++++++++++
>>  3 files changed, 72 insertions(+)
>>
>> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
>> index 3ec981a..d491138 100644
>> --- a/include/linux/hugetlb.h
>> +++ b/include/linux/hugetlb.h
>> @@ -207,6 +207,8 @@ struct page *follow_huge_addr(struct mm_struct *mm, unsigned long address,
>>  struct page *follow_huge_pd(struct vm_area_struct *vma,
>>                              unsigned long address, hugepd_t hpd,
>>                              int flags, int pdshift);
>> +struct page *follow_huge_pte(struct vm_area_struct *vma, unsigned long address,
>> +                             pmd_t *pmd, int flags);
>>  struct page *follow_huge_pmd(struct mm_struct *mm, unsigned long address,
>>                              pmd_t *pmd, int flags);
>>  struct page *follow_huge_pud(struct mm_struct *mm, unsigned long address,
>> @@ -312,6 +314,12 @@ static inline struct page *follow_huge_pd(struct vm_area_struct *vma,
>>          return NULL;
>>  }
>>
>> +static inline struct page *follow_huge_pte(struct vm_area_struct *vma,
>> +                unsigned long address, pmd_t *pmd, int flags)
>> +{
>> +        return NULL;
>> +}
>> +
>>  static inline struct page *follow_huge_pmd(struct mm_struct *mm,
>>                  unsigned long address, pmd_t *pmd, int flags)
>>  {
>> diff --git a/mm/gup.c b/mm/gup.c
>> index 3b656b7..87a94f5 100644
>> --- a/mm/gup.c
>> +++ b/mm/gup.c
>> @@ -534,6 +534,17 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
>>          if (unlikely(pmd_bad(*pmd)))
>>                  return no_page_table(vma, flags);
>>
>> +        /*
>> +         * Considering PTE level hugetlb, like continuous-PTE hugetlb on
>> +         * ARM64 architecture.
>> +         */
>> +        if (is_vm_hugetlb_page(vma)) {
>> +                page = follow_huge_pte(vma, address, pmd, flags);
>> +                if (page)
>> +                        return page;
>> +                return no_page_table(vma, flags);
>> +        }
>> +
>>          ptep = pte_offset_map_lock(mm, pmd, address, &ptl);
>>          pte = *ptep;
>>          if (!pte_present(pte)) {
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index 6c00ba1..cf742d1 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -6981,6 +6981,59 @@ struct page * __weak
>>          return NULL;
>>  }
>>
>> +/* Support looking up a CONT-PTE size hugetlb page. */
>> +struct page * __weak
>> +follow_huge_pte(struct vm_area_struct *vma, unsigned long address,
>> +                pmd_t *pmd, int flags)
>> +{
>> +        struct mm_struct *mm = vma->vm_mm;
>> +        struct hstate *hstate = hstate_vma(vma);
>> +        unsigned long size = huge_page_size(hstate);
>> +        struct page *page = NULL;
>> +        spinlock_t *ptl;
>> +        pte_t *ptep, pte;
>> +
>> +        /*
>> +         * FOLL_PIN is not supported for follow_page(). Ordinary GUP goes via
>> +         * follow_hugetlb_page().
>> +         */
>> +        if (WARN_ON_ONCE(flags & FOLL_PIN))
>> +                return NULL;
>> +
>> +        ptep = huge_pte_offset(mm, address, size);
>> +        if (!ptep)
>> +                return NULL;
>> +
>> +retry:
>> +        ptl = huge_pte_lock(hstate, mm, ptep);
>> +        pte = huge_ptep_get(ptep);
>> +        if (pte_present(pte)) {
>> +                page = pte_page(pte);
>> +                if (WARN_ON_ONCE(!try_grab_page(page, flags))) {
>> +                        page = NULL;
>> +                        goto out;
>> +                }
>> +        } else {
>> +                if (!(flags & FOLL_MIGRATION)) {
>> +                        page = NULL;
>> +                        goto out;
>> +                }
>> +
>> +                if (is_hugetlb_entry_migration(pte)) {
>> +                        spin_unlock(ptl);
>> +                        __migration_entry_wait_huge(ptep, ptl);
>> +                        goto retry;
>> +                }
>> +                /*
>> +                 * hwpoisoned entry is treated as no_page_table in
>> +                 * follow_page_mask().
>> +                 */
>> +        }
>> +out:
>> +        spin_unlock(ptl);
>> +        return page;
>> +}
>> +
>>  struct page * __weak
>>  follow_huge_pmd(struct mm_struct *mm, unsigned long address,
>>                  pmd_t *pmd, int flags)
>
>
> Can someone explain why:
> * follow_page() goes via follow_page_mask() for hugetlb
> * __get_user_pages() goes via follow_hugetlb_page() and never via
>   follow_page_mask() for hugetlb?
>
> IOW, why can't we make follow_page_mask() just not handle hugetlb and
> route everything via follow_hugetlb_page() -- we primarily only have to
> teach it to not trigger faults.

IMHO, these follow_huge_xxx() functions were arch-specific at first and
were moved into the common hugetlb.c by commit 9e5fc74c3025 ("mm: hugetlb:
Copy general hugetlb code from x86 to mm"), and there are still some
arch-specific follow_huge_xxx() definitions today, for example:

ia64: follow_huge_addr
powerpc: follow_huge_pd
s390: follow_huge_pud

What I mean is that follow_hugetlb_page() is a common, non-arch-specific
function; is it suitable to change it to be arch-specific?

Thinking about it some more: can we rename follow_hugetlb_page() to
hugetlb_page_faultin() and simplify it to only handle hugetlb page faults,
like faultin_page() does for normal pages? That way we can make sure that
only follow_page_mask() handles hugetlb lookups (see the rough sketch at
the end of this mail).

Mike, Muchun, please correct me if I missed something. Thanks.

> What's the reason that this hugetlb code has to be overly complicated?
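To make the renaming idea above a little more concrete, below is a very
rough sketch. hugetlb_page_faultin() does not exist in the tree, and the
helper and the signatures here are simplified and hypothetical; it is only
meant to illustrate the intended control flow, not to be an actual patch.

/*
 * Hypothetical sketch: __get_user_pages() would always look pages up via
 * follow_page_mask(), and only the fault side would stay hugetlb specific,
 * mirroring what faultin_page() does for normal pages.
 */
static struct page *gup_lookup_or_fault(struct vm_area_struct *vma,
                                        unsigned long address,
                                        unsigned int flags,
                                        struct follow_page_context *ctx)
{
        struct page *page;

        for (;;) {
                /* One lookup path for both normal and hugetlb pages. */
                page = follow_page_mask(vma, address, flags, ctx);
                if (page)
                        return page;

                /* Only the fault handling remains hugetlb specific. */
                if (is_vm_hugetlb_page(vma)) {
                        /* Hypothetical fault-only variant of follow_hugetlb_page(). */
                        if (hugetlb_page_faultin(vma, address, flags))
                                return NULL;
                } else {
                        /* Signature simplified compared to the real faultin_page(). */
                        if (faultin_page(vma, address, &flags, NULL))
                                return NULL;
                }
        }
}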