Date: Mon, 14 Feb 2022 18:40:55 -0800 (PST)
From: Hugh Dickins <hughd@google.com>
To: Andrew Morton
Cc: Michal Hocko, Vlastimil Babka, "Kirill A. Shutemov",
    Matthew Wilcox, David Hildenbrand, Alistair Popple,
    Johannes Weiner, Rik van Riel, Suren Baghdasaryan,
    Yu Zhao, Greg Thelen, Shakeel Butt,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v2 12/13] mm/thp: collapse_file() do try_to_unmap(TTU_BATCH_FLUSH)
In-Reply-To: <55a49083-37f9-3766-1de9-9feea7428ac@google.com>
References: <55a49083-37f9-3766-1de9-9feea7428ac@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

collapse_file() is using unmap_mapping_pages(1) on each small page found
mapped, unlike others (reclaim, migration, splitting, memory-failure)
which use try_to_unmap().  There are four advantages to try_to_unmap():
first, its TTU_IGNORE_MLOCK option now avoids leaving an mlocked page in
the pagevec; second, its vma lookup uses i_mmap_lock_read() not
i_mmap_lock_write(); third, it breaks out early if the page is not mapped
everywhere it might be; fourth, its TTU_BATCH_FLUSH option can be used,
as in page reclaim, to save up all the TLB flushing until all of the
pages have been unmapped.

Wild guess: perhaps it was originally written to use try_to_unmap(), but
hit the VM_BUG_ON_PAGE(page_mapped) after unmapping, because without
TTU_SYNC it may skip page table locks; unmap_mapping_pages() never skips
them, so switching to it fixed the apparent issue.
I did once hit that VM_BUG_ON_PAGE() since making this change: we could
pass TTU_SYNC here, but I think just delete the check - the race is very
rare, this is an ordinary small page so we don't need to be so paranoid
about mapcount surprises, and the page_ref_freeze() just below already
handles the case adequately.

Signed-off-by: Hugh Dickins <hughd@google.com>
---
v2: same as v1.

 mm/khugepaged.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index d5e387c58bde..e0883a33efd6 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1829,13 +1829,12 @@ static void collapse_file(struct mm_struct *mm,
 		}
 
 		if (page_mapped(page))
-			unmap_mapping_pages(mapping, index, 1, false);
+			try_to_unmap(page, TTU_IGNORE_MLOCK | TTU_BATCH_FLUSH);
 
 		xas_lock_irq(&xas);
 		xas_set(&xas, index);
 
 		VM_BUG_ON_PAGE(page != xas_load(&xas), page);
-		VM_BUG_ON_PAGE(page_mapped(page), page);
 
 		/*
 		 * The page is expected to have page_count() == 3:
@@ -1899,6 +1898,13 @@ static void collapse_file(struct mm_struct *mm,
 	}
 	xas_unlock_irq(&xas);
 xa_unlocked:
+	/*
+	 * If collapse is successful, flush must be done now before copying.
+	 * If collapse is unsuccessful, does flush actually need to be done?
+	 * Do it anyway, to clear the state.
+	 */
+	try_to_unmap_flush();
+
 	if (result == SCAN_SUCCEED) {
 		struct page *page, *tmp;
 
-- 
2.34.1