From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 15 Jan 2021 11:04:44 -0800
In-Reply-To: <20210115190451.3135416-1-axelrasmussen@google.com>
Message-Id: <20210115190451.3135416-3-axelrasmussen@google.com>
Mime-Version: 1.0
References: <20210115190451.3135416-1-axelrasmussen@google.com>
X-Mailer: git-send-email 2.30.0.284.gd98b1dd5eaa7-goog
Subject: [PATCH 2/9] hugetlb/userfaultfd: Forbid huge pmd sharing when uffd
 enabled
From: Axel Rasmussen
To: Alexander Viro, Alexey Dobriyan, Andrea Arcangeli, Andrew Morton,
	Anshuman Khandual, Catalin Marinas, Chinwen Chang, Huang Ying,
	Ingo Molnar, Jann Horn, Jerome Glisse, Lokesh Gidra,
	"Matthew Wilcox (Oracle)", Michael Ellerman, Michal Koutný,
	Michel Lespinasse, Mike Kravetz, Mike Rapoport, Nicholas Piggin,
	Peter Xu, Shaohua Li, Shawn Anastasio, Steven Rostedt, Steven Price,
	Vlastimil Babka
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, Adam Ruprecht, Axel Rasmussen, Cannon Matthews,
	"Dr. David Alan Gilbert", David Rientjes, Oliver Upton
Content-Type: text/plain; charset="UTF-8"

From: Peter Xu

Huge pmd sharing can cause problems for userfaultfd. Userfaultfd runs its
logic based on special bits in the page table entries, but huge pmd sharing
can end up sharing page table entries across different address ranges. That
can cause issues in either of these cases:

- When sharing huge pmd page tables for an uffd write-protected range, the
  newly mapped huge pmd range will also be write protected unexpectedly, or,

- When we try to write protect a shared huge pmd range, we first do
  huge_pmd_unshare() in hugetlb_change_protection(); however, that also means
  the UFFDIO_WRITEPROTECT could be silently skipped for the shared region,
  which could lead to data loss.

While at it, a few other things are done as well:

- Move want_pmd_share() from mm/hugetlb.c into linux/hugetlb.h, because that
  is definitely something that arch code would like to use too.

- arm64 currently checks directly against CONFIG_ARCH_WANT_HUGE_PMD_SHARE
  when deciding whether to share huge pmds. Switch it to the want_pmd_share()
  helper.
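To illustrate the scenario being guarded against, here is a minimal userspace
sketch (hypothetical and not part of this patch; the hugetlbfs path, the 2M
length, and the minimal error handling are assumptions) that registers a
shared hugetlb mapping in uffd write-protect mode. Once that registration sets
VM_UFFD_WP on the vma, want_pmd_share() below returns false, so
huge_pte_alloc() no longer calls huge_pmd_share() for it:

    /* Hypothetical example, not part of this patch. */
    #include <fcntl.h>
    #include <linux/userfaultfd.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        size_t len = 2UL << 20;     /* assume one 2M huge page */
        /* Assumed hugetlbfs mount point; shared hugetlb mappings are the
         * ones eligible for huge pmd sharing. */
        int fd = open("/dev/hugepages/uffd-wp-test", O_CREAT | O_RDWR, 0600);
        struct uffdio_api api = { .api = UFFD_API };
        struct uffdio_register reg = { 0 };
        char *area;
        long uffd;

        ftruncate(fd, len);
        area = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

        uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
        ioctl(uffd, UFFDIO_API, &api);

        /* Register in write-protect mode: this sets VM_UFFD_WP on the vma. */
        reg.range.start = (unsigned long)area;
        reg.range.len = len;
        reg.mode = UFFDIO_REGISTER_MODE_WP;
        if (ioctl(uffd, UFFDIO_REGISTER, &reg))
            perror("UFFDIO_REGISTER");

        munmap(area, len);
        close(fd);
        return 0;
    }

Note that whether UFFDIO_REGISTER accepts UFFDIO_REGISTER_MODE_WP on hugetlbfs
at all depends on the kernel's hugetlb uffd-wp support; the sketch only shows
how a vma ends up carrying VM_UFFD_WP.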
Signed-off-by: Peter Xu
Signed-off-by: Axel Rasmussen
---
 arch/arm64/mm/hugetlbpage.c   |  3 +--
 include/linux/hugetlb.h       | 12 ++++++++++++
 include/linux/userfaultfd_k.h |  9 +++++++++
 mm/hugetlb.c                  |  5 ++---
 4 files changed, 24 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 5b32ec888698..1a8ce0facfe8 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -284,8 +284,7 @@ pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
 		 */
 		ptep = pte_alloc_map(mm, pmdp, addr);
 	} else if (sz == PMD_SIZE) {
-		if (IS_ENABLED(CONFIG_ARCH_WANT_HUGE_PMD_SHARE) &&
-		    pud_none(READ_ONCE(*pudp)))
+		if (want_pmd_share(vma) && pud_none(READ_ONCE(*pudp)))
 			ptep = huge_pmd_share(mm, addr, pudp);
 		else
 			ptep = (pte_t *)pmd_alloc(mm, pudp, addr);
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 1e0abb609976..4959e94e78b1 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include

 struct ctl_table;
 struct user_struct;
@@ -947,4 +948,15 @@ static inline __init void hugetlb_cma_check(void)
 }
 #endif

+static inline bool want_pmd_share(struct vm_area_struct *vma)
+{
+#ifdef CONFIG_ARCH_WANT_HUGE_PMD_SHARE
+	if (uffd_disable_huge_pmd_share(vma))
+		return false;
+	return true;
+#else
+	return false;
+#endif
+}
+
 #endif /* _LINUX_HUGETLB_H */
diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index a8e5f3ea9bb2..c63ccdae3eab 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -52,6 +52,15 @@ static inline bool is_mergeable_vm_userfaultfd_ctx(struct vm_area_struct *vma,
 	return vma->vm_userfaultfd_ctx.ctx == vm_ctx.ctx;
 }

+/*
+ * Never enable huge pmd sharing on uffd-wp registered vmas, because uffd-wp
+ * protect information is per pgtable entry.
+ */
+static inline bool uffd_disable_huge_pmd_share(struct vm_area_struct *vma)
+{
+	return vma->vm_flags & VM_UFFD_WP;
+}
+
 static inline bool userfaultfd_missing(struct vm_area_struct *vma)
 {
 	return vma->vm_flags & VM_UFFD_MISSING;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 16a8d5ac68c0..1ad91d94cbe2 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5371,7 +5371,7 @@ int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
 	*addr = ALIGN(*addr, HPAGE_SIZE * PTRS_PER_PTE) - HPAGE_SIZE;
 	return 1;
 }
-#define want_pmd_share() (1)
+
 #else /* !CONFIG_ARCH_WANT_HUGE_PMD_SHARE */
 pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
 {
@@ -5388,7 +5388,6 @@ void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
 				unsigned long *start, unsigned long *end)
 {
 }
-#define want_pmd_share() (0)
 #endif /* CONFIG_ARCH_WANT_HUGE_PMD_SHARE */

 #ifdef CONFIG_ARCH_WANT_GENERAL_HUGETLB
@@ -5410,7 +5409,7 @@ pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
 		pte = (pte_t *)pud;
 	} else {
 		BUG_ON(sz != PMD_SIZE);
-		if (want_pmd_share() && pud_none(*pud))
+		if (want_pmd_share(vma) && pud_none(*pud))
 			pte = huge_pmd_share(mm, addr, pud);
 		else
 			pte = (pte_t *)pmd_alloc(mm, pud, addr);
-- 
2.30.0.284.gd98b1dd5eaa7-goog