From: yulei.kernel@gmail.com
X-Google-Original-From: yuleixzhang@tencent.com
To: linux-mm@kvack.org, akpm@linux-foundation.org,
	linux-fsdevel@vger.kernel.org, kvm@vger.kernel.org,
	linux-kernel@vger.kernel.org, naoya.horiguchi@nec.com,
	viro@zeniv.linux.org.uk, pbonzini@redhat.com
Cc: joao.m.martins@oracle.com, rdunlap@infradead.org,
	sean.j.christopherson@intel.com, xiaoguangrong.eric@gmail.com,
	kernellwp@gmail.com, lihaiwei.kernel@gmail.com,
	Yulei Zhang <yuleixzhang@tencent.com>, Chen Zhuo
Subject: [RFC V2 27/37] mm: add pud_special() check to support dmem huge pud
Date: Mon, 7 Dec 2020 19:31:20 +0800
X-Mailer: git-send-email 2.28.0

From: Yulei Zhang <yuleixzhang@tencent.com>

Add pud_special() and follow_special_pud() to support dmem huge pud as
we do for dmem huge pmd.

Signed-off-by: Chen Zhuo
Signed-off-by: Yulei Zhang <yuleixzhang@tencent.com>
---
 arch/x86/include/asm/pgtable.h |  2 +-
 include/linux/huge_mm.h        |  2 +-
 mm/gup.c                       | 46 ++++++++++++++++++++++++++++++++++++++++++++++
 mm/huge_memory.c               | 11 ++++++----
 mm/memory.c                    |  4 ++--
 mm/mprotect.c                  |  2 ++
 mm/pagewalk.c                  |  2 +-
 7 files changed, 60 insertions(+), 9 deletions(-)
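Note (not part of the patch): pud_special(), pud_mkdmem() and
pfn_t_dmem() used below are not defined in this patch; they are
expected to come from earlier patches in this series, together with
the software _PAGE_DMEM bit that the pud_trans_huge() change below
masks out. As a rough sketch only, assuming _PAGE_DMEM is a dedicated
software page-table bit reserved for dmem on x86, the helpers would
look something like:

/*
 * Sketch under the above assumption -- the real definitions come from
 * earlier patches in the series, mirroring the dmem huge pmd helpers.
 */
static inline pud_t pud_mkdmem(pud_t pud)
{
	/* Mark a huge pud as a dmem mapping (no struct page behind it). */
	return pud_set_flags(pud, _PAGE_DMEM);
}

static inline int pud_special(pud_t pud)
{
	/* A dmem pud is a PSE (huge) entry that also carries _PAGE_DMEM. */
	return (pud_val(pud) & (_PAGE_PSE | _PAGE_DMEM)) ==
	       (_PAGE_PSE | _PAGE_DMEM);
}

With _PAGE_DMEM added to the pud_trans_huge() mask, a dmem pud is never
seen as a transparent huge pud, so the existing THP paths reach it only
through the new pud_special() checks.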
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 9e36d42..2284387 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -265,7 +265,7 @@ static inline int pmd_trans_huge(pmd_t pmd)
 #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
 static inline int pud_trans_huge(pud_t pud)
 {
-	return (pud_val(pud) & (_PAGE_PSE|_PAGE_DEVMAP)) == _PAGE_PSE;
+	return (pud_val(pud) & (_PAGE_PSE|_PAGE_DEVMAP|_PAGE_DMEM)) == _PAGE_PSE;
 }
 #endif
 
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 2514b90..b69c940 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -251,7 +251,7 @@ static inline spinlock_t *pmd_trans_huge_lock(pmd_t *pmd,
 static inline spinlock_t *pud_trans_huge_lock(pud_t *pud,
 		struct vm_area_struct *vma)
 {
-	if (pud_trans_huge(*pud) || pud_devmap(*pud))
+	if (pud_trans_huge(*pud) || pud_devmap(*pud) || pud_special(*pud))
 		return __pud_trans_huge_lock(pud, vma);
 	else
 		return NULL;
diff --git a/mm/gup.c b/mm/gup.c
index 0ea9071..8eb85ba 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -423,6 +423,42 @@ static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
 	return ERR_PTR(-EEXIST);
 }
 
+static struct page *
+follow_special_pud(struct vm_area_struct *vma, unsigned long address,
+		   pud_t *pud, unsigned int flags)
+{
+	spinlock_t *ptl;
+
+	if ((flags & FOLL_DUMP) && is_huge_zero_pud(*pud))
+		/* Avoid special (like zero) pages in core dumps */
+		return ERR_PTR(-EFAULT);
+
+	/* No page to get reference */
+	if (flags & FOLL_GET)
+		return ERR_PTR(-EFAULT);
+
+	if (flags & FOLL_TOUCH) {
+		pud_t _pud;
+
+		ptl = pud_lock(vma->vm_mm, pud);
+		if (!pud_special(*pud)) {
+			spin_unlock(ptl);
+			return NULL;
+		}
+		_pud = pud_mkyoung(*pud);
+		if (flags & FOLL_WRITE)
+			_pud = pud_mkdirty(_pud);
+		if (pudp_set_access_flags(vma, address & HPAGE_PUD_MASK,
+					  pud, _pud,
+					  flags & FOLL_WRITE))
+			update_mmu_cache_pud(vma, address, pud);
+		spin_unlock(ptl);
+	}
+
+	/* Proper page table entry exists, but no corresponding struct page */
+	return ERR_PTR(-EEXIST);
+}
+
 /*
  * FOLL_FORCE can write to even unwritable pte's, but only
  * after we've gone through a COW cycle and they are dirty.
@@ -726,6 +762,12 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
 			return page;
 		return no_page_table(vma, flags);
 	}
+	if (pud_special(*pud)) {
+		page = follow_special_pud(vma, address, pud, flags);
+		if (page)
+			return page;
+		return no_page_table(vma, flags);
+	}
 	if (is_hugepd(__hugepd(pud_val(*pud)))) {
 		page = follow_huge_pd(vma, address,
 				      __hugepd(pud_val(*pud)), flags,
@@ -2511,6 +2553,10 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
 	if (!pud_access_permitted(orig, flags & FOLL_WRITE))
 		return 0;
 
+	/* Bypass dmem pud. It will be handled in outside routine. */
+	if (pud_special(orig))
+		return 0;
+
 	if (pud_devmap(orig)) {
 		if (unlikely(flags & FOLL_LONGTERM))
 			return 0;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 6e52d57..7c5385a 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -883,6 +883,8 @@ static void insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
 	entry = pud_mkhuge(pfn_t_pud(pfn, prot));
 	if (pfn_t_devmap(pfn))
 		entry = pud_mkdevmap(entry);
+	if (pfn_t_dmem(pfn))
+		entry = pud_mkdmem(entry);
 	if (write) {
 		entry = pud_mkyoung(pud_mkdirty(entry));
 		entry = maybe_pud_mkwrite(entry, vma);
@@ -919,7 +921,7 @@ vm_fault_t vmf_insert_pfn_pud_prot(struct vm_fault *vmf, pfn_t pfn,
 	 * can't support a 'special' bit.
 	 */
 	BUG_ON(!(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) &&
-			!pfn_t_devmap(pfn));
+			!pfn_t_devmap(pfn) && !pfn_t_dmem(pfn));
 	BUG_ON((vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) ==
 			(VM_PFNMAP|VM_MIXEDMAP));
 	BUG_ON((vma->vm_flags & VM_PFNMAP) && is_cow_mapping(vma->vm_flags));
@@ -1911,7 +1913,7 @@ spinlock_t *__pud_trans_huge_lock(pud_t *pud, struct vm_area_struct *vma)
 	spinlock_t *ptl;
 
 	ptl = pud_lock(vma->vm_mm, pud);
-	if (likely(pud_trans_huge(*pud) || pud_devmap(*pud)))
+	if (likely(pud_trans_huge(*pud) || pud_devmap(*pud) || pud_special(*pud)))
 		return ptl;
 	spin_unlock(ptl);
 	return NULL;
@@ -1922,6 +1924,7 @@ int zap_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		 pud_t *pud, unsigned long addr)
 {
 	spinlock_t *ptl;
+	pud_t orig_pud;
 
 	ptl = __pud_trans_huge_lock(pud, vma);
 	if (!ptl)
@@ -1932,9 +1935,9 @@ int zap_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	 * pgtable_trans_huge_withdraw after finishing pudp related
 	 * operations.
 	 */
-	pudp_huge_get_and_clear_full(tlb->mm, addr, pud, tlb->fullmm);
+	orig_pud = pudp_huge_get_and_clear_full(tlb->mm, addr, pud, tlb->fullmm);
 	tlb_remove_pud_tlb_entry(tlb, pud, addr);
-	if (vma_is_special_huge(vma)) {
+	if (vma_is_special_huge(vma) || pud_special(orig_pud)) {
 		spin_unlock(ptl);
 		/* No zero page support yet */
 	} else {
diff --git a/mm/memory.c b/mm/memory.c
index abb9148..01f3b05 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1078,7 +1078,7 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
 	src_pud = pud_offset(src_p4d, addr);
 	do {
 		next = pud_addr_end(addr, end);
-		if (pud_trans_huge(*src_pud) || pud_devmap(*src_pud)) {
+		if (pud_trans_huge(*src_pud) || pud_devmap(*src_pud) || pud_special(*src_pud)) {
 			int err;
 
 			VM_BUG_ON_VMA(next-addr != HPAGE_PUD_SIZE, src_vma);
@@ -1375,7 +1375,7 @@ static inline unsigned long zap_pud_range(struct mmu_gather *tlb,
 	pud = pud_offset(p4d, addr);
 	do {
 		next = pud_addr_end(addr, end);
-		if (pud_trans_huge(*pud) || pud_devmap(*pud)) {
+		if (pud_trans_huge(*pud) || pud_devmap(*pud) || pud_special(*pud)) {
 			if (next - addr != HPAGE_PUD_SIZE) {
 				mmap_assert_locked(tlb->mm);
 				split_huge_pud(vma, pud, addr);
diff --git a/mm/mprotect.c b/mm/mprotect.c
index b1650b5..05fa453 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -292,6 +292,8 @@ static inline unsigned long change_pud_range(struct vm_area_struct *vma,
 	pud = pud_offset(p4d, addr);
 	do {
 		next = pud_addr_end(addr, end);
+		if (pud_special(*pud))
+			continue;
 		if (pud_none_or_clear_bad(pud))
 			continue;
 		pages += change_pmd_range(vma, pud, addr, next, newprot,
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index e7c4575..afd8bca 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -129,7 +129,7 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
 	do {
 again:
 		next = pud_addr_end(addr, end);
-		if (pud_none(*pud) || (!walk->vma && !walk->no_vma)) {
+		if (pud_none(*pud) || (!walk->vma && !walk->no_vma) || pud_special(*pud)) {
 			if (ops->pte_hole)
 				err = ops->pte_hole(addr, next, depth, walk);
 			if (err)
-- 
1.8.3.1