From mboxrd@z Thu Jan 1 00:00:00 1970
From: Yang Shi
Date: Thu, 3 Jun 2021 14:26:57 -0700
Subject: Re: [PATCH 1/7] mm/thp: fix __split_huge_pmd_locked() on shmem migration entry
To: Hugh Dickins
Cc: Andrew Morton, "Kirill A. Shutemov", Wang Yugui, Matthew Wilcox,
    Naoya Horiguchi, Alistair Popple, Ralph Campbell, Zi Yan, Miaohe Lin,
    Minchan Kim, Jue Wang, Peter Xu, Jan Kara, Linux MM,
    Linux Kernel Mailing List
Content-Type: text/plain; charset="UTF-8"
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Jun 1, 2021 at 2:05 PM Hugh Dickins wrote:
>
> Stressing huge tmpfs page migration racing hole punch often crashed on the
> VM_BUG_ON(!pmd_present) in pmdp_huge_clear_flush(), with DEBUG_VM=y kernel;
> or shortly afterwards, on a bad dereference in __split_huge_pmd_locked()
> when DEBUG_VM=n. They forgot to allow for pmd migration entries in the
> non-anonymous case.
>
> Full disclosure: those particular experiments were on a kernel with more
> relaxed mmap_lock and i_mmap_rwsem locking, and were not repeated on the
> vanilla kernel: it is conceivable that stricter locking happens to avoid
> those cases, or makes them less likely; but __split_huge_pmd_locked()
> already allowed for pmd migration entries when handling anonymous THPs,
> so this commit brings the shmem and file THP handling into line.
>
> Are there more places that need to be careful about pmd migration entries?
> None hit in practice, but several of those is_huge_zero_pmd() tests were
> done without checking pmd_present() first: I believe a pmd migration entry
> could end up satisfying that test. Ah, the inversion of swap offset, to
> protect against L1TF, makes that impossible on x86; but other arches need
> the pmd_present() check, and even x86 ought not to apply pmd_page() to a
> swap-like pmd. Fix those instances; __split_huge_pmd_locked() was not
> wrong to be checking with pmd_trans_huge() instead, but I think it's
> clearer to use pmd_present() in each instance.
>
> And while there: make it clearer to the eye that the !vma_is_anonymous()
> and is_huge_zero_pmd() blocks make early returns (and don't return void).
>
> Fixes: e71769ae5260 ("mm: enable thp migration for shmem thp")
> Signed-off-by: Hugh Dickins
> Cc:
> ---
>  mm/huge_memory.c     | 38 ++++++++++++++++++++++++--------------
>  mm/pgtable-generic.c |  5 ++---
>  2 files changed, 26 insertions(+), 17 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 63ed6b25deaa..9fb7b47da87e 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1587,9 +1587,6 @@ bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
>                  goto out_unlocked;
>
>          orig_pmd = *pmd;
> -        if (is_huge_zero_pmd(orig_pmd))
> -                goto out;
> -
>          if (unlikely(!pmd_present(orig_pmd))) {
>                  VM_BUG_ON(thp_migration_supported() &&
>                            !is_pmd_migration_entry(orig_pmd));
> @@ -1597,6 +1594,9 @@ bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
>          }
>
>          page = pmd_page(orig_pmd);
> +        if (is_huge_zero_page(page))
> +                goto out;
> +
>          /*
>           * If other processes are mapping this page, we couldn't discard
>           * the page unless they all do MADV_FREE so let's skip the page.
> @@ -1676,7 +1676,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
>                  spin_unlock(ptl);
>                  if (is_huge_zero_pmd(orig_pmd))
>                          tlb_remove_page_size(tlb, pmd_page(orig_pmd), HPAGE_PMD_SIZE);
> -        } else if (is_huge_zero_pmd(orig_pmd)) {
> +        } else if (pmd_present(orig_pmd) && is_huge_zero_pmd(orig_pmd)) {

If it is a huge zero migration entry, the code would fall back to the
"else" branch. But IIUC the "else" branch doesn't handle the huge zero
page correctly: it may mess up the rss counter.
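
For reference, my reading of zap_huge_pmd()'s final "else" branch after this
patch, abbreviated by hand (a sketch of the flow, not the exact code):

        } else {
                struct page *page = NULL;
                int flush_needed = 1;

                if (pmd_present(orig_pmd)) {
                        page = pmd_page(orig_pmd);
                        page_remove_rmap(page, true);
                } else if (thp_migration_supported()) {
                        swp_entry_t entry = pmd_to_swp_entry(orig_pmd);

                        page = migration_entry_to_page(entry);
                        flush_needed = 0;
                }

                if (PageAnon(page)) {
                        zap_deposited_table(tlb->mm, pmd);
                        add_mm_counter(tlb->mm, MM_ANONPAGES, -HPAGE_PMD_NR);
                } else {
                        if (arch_needs_pgtable_deposit())
                                zap_deposited_table(tlb->mm, pmd);
                        /* the huge zero page is never charged to rss, so this
                         * subtraction would be bogus if "page" were it */
                        add_mm_counter(tlb->mm, mm_counter_file(page), -HPAGE_PMD_NR);
                }
                /* ... spin_unlock(ptl) and, if flush_needed, hand the page
                 * to the tlb gather ... */
        }

So a non-present pmd that somehow mapped the huge zero page would land in the
file-rss accounting at the bottom, which is never applied to the zero page.
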
>                  zap_deposited_table(tlb->mm, pmd);
>                  spin_unlock(ptl);
>                  tlb_remove_page_size(tlb, pmd_page(orig_pmd), HPAGE_PMD_SIZE);
> @@ -2044,7 +2044,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>          count_vm_event(THP_SPLIT_PMD);
>
>          if (!vma_is_anonymous(vma)) {
> -                _pmd = pmdp_huge_clear_flush_notify(vma, haddr, pmd);
> +                old_pmd = pmdp_huge_clear_flush_notify(vma, haddr, pmd);
>                  /*
>                   * We are going to unmap this huge page. So
>                   * just go ahead and zap it
> @@ -2053,16 +2053,25 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>                  zap_deposited_table(mm, pmd);
>                  if (vma_is_special_huge(vma))
>                          return;
> -                page = pmd_page(_pmd);
> -                if (!PageDirty(page) && pmd_dirty(_pmd))
> -                        set_page_dirty(page);
> -                if (!PageReferenced(page) && pmd_young(_pmd))
> -                        SetPageReferenced(page);
> -                page_remove_rmap(page, true);
> -                put_page(page);
> +                if (unlikely(is_pmd_migration_entry(old_pmd))) {
> +                        swp_entry_t entry;
> +
> +                        entry = pmd_to_swp_entry(old_pmd);
> +                        page = migration_entry_to_page(entry);
> +                } else {
> +                        page = pmd_page(old_pmd);
> +                        if (!PageDirty(page) && pmd_dirty(old_pmd))
> +                                set_page_dirty(page);
> +                        if (!PageReferenced(page) && pmd_young(old_pmd))
> +                                SetPageReferenced(page);
> +                        page_remove_rmap(page, true);
> +                        put_page(page);
> +                }
>                  add_mm_counter(mm, mm_counter_file(page), -HPAGE_PMD_NR);
>                  return;
> -        } else if (pmd_trans_huge(*pmd) && is_huge_zero_pmd(*pmd)) {
> +        }
> +
> +        if (pmd_present(*pmd) && is_huge_zero_pmd(*pmd)) {
>                  /*
>                   * FIXME: Do we want to invalidate secondary mmu by calling
>                   * mmu_notifier_invalidate_range() see comments below inside
> @@ -2072,7 +2081,8 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>                   * small page also write protected so it does not seems useful
>                   * to invalidate secondary mmu at this time.
>                   */
> -                return __split_huge_zero_page_pmd(vma, haddr, pmd);
> +                __split_huge_zero_page_pmd(vma, haddr, pmd);
> +                return;
>          }
>
>          /*
> diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
> index c2210e1cdb51..4e640baf9794 100644
> --- a/mm/pgtable-generic.c
> +++ b/mm/pgtable-generic.c
> @@ -135,9 +135,8 @@ pmd_t pmdp_huge_clear_flush(struct vm_area_struct *vma, unsigned long address,
>  {
>          pmd_t pmd;
>          VM_BUG_ON(address & ~HPAGE_PMD_MASK);
> -        VM_BUG_ON(!pmd_present(*pmdp));
> -        /* Below assumes pmd_present() is true */
> -        VM_BUG_ON(!pmd_trans_huge(*pmdp) && !pmd_devmap(*pmdp));
> +        VM_BUG_ON(pmd_present(*pmdp) && !pmd_trans_huge(*pmdp) &&
> +                  !pmd_devmap(*pmdp));
>          pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);
>          flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
>          return pmd;
> --
> 2.32.0.rc0.204.g9fa02ecfa5-goog
>
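
One more note on the pmd_present() point above, for readers following along:
is_huge_zero_pmd() is currently just a pmd_page() comparison (quoting the
helpers from memory, so please double-check against the tree):

        static inline bool is_huge_zero_page(struct page *page)
        {
                return READ_ONCE(huge_zero_page) == page;
        }

        static inline bool is_huge_zero_pmd(pmd_t pmd)
        {
                return is_huge_zero_page(pmd_page(pmd));
        }

so calling it on a non-present, swap-like pmd means decoding a pfn from bits
that don't hold one, which is exactly what the added pmd_present() checks
avoid.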