Date: Thu, 3 Jun 2021 19:22:59 -0700 (PDT)
From: Hugh Dickins
To: Yang Shi
Cc: Hugh Dickins, Andrew Morton, "Kirill A. Shutemov", Wang Yugui,
    Matthew Wilcox, Naoya Horiguchi, Alistair Popple, Ralph Campbell,
    Zi Yan, Miaohe Lin, Minchan Kim, Jue Wang, Peter Xu, Jan Kara,
    Linux MM, Linux Kernel Mailing List
Subject: Re: [PATCH 1/7] mm/thp: fix __split_huge_pmd_locked() on shmem migration entry

On Thu, 3 Jun 2021, Yang Shi wrote:
> On Tue, Jun 1, 2021 at 2:05 PM Hugh Dickins wrote:
> >
> > Are there more places that need to be careful about pmd migration entries?
> > None hit in practice, but several of those is_huge_zero_pmd() tests were
> > done without checking pmd_present() first: I believe a pmd migration entry
> > could end up satisfying that test. Ah, the inversion of swap offset, to
> > protect against L1TF, makes that impossible on x86; but other arches need
> > the pmd_present() check, and even x86 ought not to apply pmd_page() to a
> > swap-like pmd.
> > Fix those instances; __split_huge_pmd_locked() was not
> > wrong to be checking with pmd_trans_huge() instead, but I think it's
> > clearer to use pmd_present() in each instance.
...
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index 63ed6b25deaa..9fb7b47da87e 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -1676,7 +1676,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
> >                 spin_unlock(ptl);
> >                 if (is_huge_zero_pmd(orig_pmd))
> >                         tlb_remove_page_size(tlb, pmd_page(orig_pmd), HPAGE_PMD_SIZE);
> > -       } else if (is_huge_zero_pmd(orig_pmd)) {
> > +       } else if (pmd_present(orig_pmd) && is_huge_zero_pmd(orig_pmd)) {
>
> If it is a huge zero migration entry, the code would fall back to the
> "else". But IIUC the "else" case doesn't handle the huge zero page
> correctly. It may mess up the rss counter.

A huge zero migration entry? I hope that's not something special that
I've missed. Do we ever migrate a huge zero page - and how do we find
where it's mapped, to insert the migration entries? But if we do, I
thought it would use the usual kind of pmd migration entry; and the
first check in is_pmd_migration_entry() is !pmd_present(pmd).

(I have to be rather careful to check such details, after getting burnt
once by pmd_present(): which includes the "huge" bit even when not
otherwise present, to permit races with pmdp_invalidate(). I mentioned
in private mail that I'd dropped one of my "fixes" because it was
harmless but mistaken: I had misunderstood pmd_present().)

The point here (see commit message above) is that some unrelated pmd
migration entry could pass the is_huge_zero_pmd() test, which rushes
off to use pmd_page() without even checking pmd_present() first. And
most of its users have, one way or another, checked pmd_present()
first; but this place and a couple of others had not.
I'm just verifying that it's really a huge zero pmd before handling its
case; the "else" still does not need to handle the huge zero page.

Hugh