From mboxrd@z Thu Jan  1 00:00:00 1970
References: <20200327170601.18563-1-kirill.shutemov@linux.intel.com>
 <20200327170601.18563-6-kirill.shutemov@linux.intel.com>
In-Reply-To:
 <20200327170601.18563-6-kirill.shutemov@linux.intel.com>
From: Yang Shi
Date: Fri, 27 Mar 2020 13:45:55 -0700
Subject: Re: [PATCH 5/7] khugepaged: Allow to collapse PTE-mapped compound pages
To: "Kirill A. Shutemov"
Cc: Andrew Morton, Andrea Arcangeli, Linux MM, Linux Kernel Mailing List,
 "Kirill A. Shutemov"

On Fri, Mar 27, 2020 at 10:06 AM Kirill A. Shutemov wrote:
>
> We can collapse PTE-mapped compound pages. We only need to avoid
> handling them more than once: lock/unlock the page only once if it is
> present in the PMD range multiple times, as it is handled at the
> compound level. The same goes for LRU isolation and putback.
>
> Signed-off-by: Kirill A. Shutemov
> ---
>  mm/khugepaged.c | 41 +++++++++++++++++++++++++++++++----------
>  1 file changed, 31 insertions(+), 10 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index b47edfe57f7b..c8c2c463095c 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -515,6 +515,17 @@ void __khugepaged_exit(struct mm_struct *mm)
>
>  static void release_pte_page(struct page *page)
>  {
> +	/*
> +	 * We need to unlock and put compound page on LRU only once.
> +	 * The rest of the pages have to be locked and not on LRU here.
> +	 */
> +	VM_BUG_ON_PAGE(!PageCompound(page) &&
> +			(!PageLocked(page) && PageLRU(page)), page);
> +
> +	if (!PageLocked(page))
> +		return;
> +
> +	page = compound_head(page);
>  	dec_node_page_state(page, NR_ISOLATED_ANON + page_is_file_cache(page));
>  	unlock_page(page);
>  	putback_lru_page(page);

BTW, wouldn't this unlock the whole THP and put it back on the LRU?
Then we may copy the subsequent PTE-mapped pages with the page unlocked
and on the LRU. I don't see a critical problem; the pages might just be
taken on and off the LRU by others, i.e. vmscan, compaction, migration,
etc.
But no one could actually take the page away, since try_to_unmap()
would fail; it is just not very productive.

> @@ -537,6 +548,7 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
>  	pte_t *_pte;
>  	int none_or_zero = 0, result = 0, referenced = 0;
>  	bool writable = false;
> +	LIST_HEAD(compound_pagelist);
>
>  	for (_pte = pte; _pte < pte+HPAGE_PMD_NR;
>  	     _pte++, address += PAGE_SIZE) {
> @@ -561,13 +573,23 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
>  			goto out;
>  		}
>
> -		/* TODO: teach khugepaged to collapse THP mapped with pte */
> +		VM_BUG_ON_PAGE(!PageAnon(page), page);
> +
>  		if (PageCompound(page)) {
> -			result = SCAN_PAGE_COMPOUND;
> -			goto out;
> -		}
> +			struct page *p;
> +			page = compound_head(page);
>
> -		VM_BUG_ON_PAGE(!PageAnon(page), page);
> +			/*
> +			 * Check if we have dealt with the compound page
> +			 * already
> +			 */
> +			list_for_each_entry(p, &compound_pagelist, lru) {
> +				if (page == p)
> +					break;
> +			}
> +			if (page == p)
> +				continue;
> +		}
>
>  		/*
>  		 * We can do it before isolate_lru_page because the
> @@ -640,6 +662,9 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
>  		    page_is_young(page) || PageReferenced(page) ||
>  		    mmu_notifier_test_young(vma->vm_mm, address))
>  			referenced++;
> +
> +		if (PageCompound(page))
> +			list_add_tail(&page->lru, &compound_pagelist);
>  	}
>  	if (likely(writable)) {
>  		if (likely(referenced)) {
> @@ -1185,11 +1210,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
>  			goto out_unmap;
>  		}
>
> -		/* TODO: teach khugepaged to collapse THP mapped with pte */
> -		if (PageCompound(page)) {
> -			result = SCAN_PAGE_COMPOUND;
> -			goto out_unmap;
> -		}
> +		page = compound_head(page);
>
>  		/*
>  		 * Record which node the original page is from and save this
> --
> 2.26.0
>