Date: Thu, 21 Feb 2019 13:17:34 -0500
From: Jerome Glisse
To: Peter Xu
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, David Hildenbrand,
    Hugh Dickins, Maya Gokhale, Pavel Emelyanov, Johannes Weiner,
    Martin Cracauer, Shaohua Li, Marty McFadden, Andrea Arcangeli,
    Mike Kravetz, Denis Plotnikov, Mike Rapoport, Mel Gorman,
    "Kirill A. Shutemov", "Dr. David Alan Gilbert"
Subject: Re: [PATCH v2 18/26] khugepaged: skip collapse if uffd-wp detected
Message-ID: <20190221181734.GR2813@redhat.com>
References: <20190212025632.28946-1-peterx@redhat.com> <20190212025632.28946-19-peterx@redhat.com>
In-Reply-To: <20190212025632.28946-19-peterx@redhat.com>

On Tue, Feb 12, 2019 at 10:56:24AM +0800, Peter Xu wrote:
> Don't collapse the huge PMD if there is any userfault write protected
> small PTEs. The problem is that the write protection is in small page
> granularity and there's no way to keep all these write protection
> information if the small pages are going to be merged into a huge PMD.
> 
> The same thing needs to be considered for swap entries and migration
> entries. So do the check as well disregarding khugepaged_max_ptes_swap.
> 
> Signed-off-by: Peter Xu

Reviewed-by: Jérôme Glisse

> ---
>  include/trace/events/huge_memory.h |  1 +
>  mm/khugepaged.c                    | 23 +++++++++++++++++++++++
>  2 files changed, 24 insertions(+)
> 
> diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h
> index dd4db334bd63..2d7bad9cb976 100644
> --- a/include/trace/events/huge_memory.h
> +++ b/include/trace/events/huge_memory.h
> @@ -13,6 +13,7 @@
>  	EM( SCAN_PMD_NULL,		"pmd_null")			\
>  	EM( SCAN_EXCEED_NONE_PTE,	"exceed_none_pte")		\
>  	EM( SCAN_PTE_NON_PRESENT,	"pte_non_present")		\
> +	EM( SCAN_PTE_UFFD_WP,		"pte_uffd_wp")			\
>  	EM( SCAN_PAGE_RO,		"no_writable_page")		\
>  	EM( SCAN_LACK_REFERENCED_PAGE,	"lack_referenced_page")		\
>  	EM( SCAN_PAGE_NULL,		"page_null")			\
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 4f017339ddb2..396c7e4da83e 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -29,6 +29,7 @@ enum scan_result {
>  	SCAN_PMD_NULL,
>  	SCAN_EXCEED_NONE_PTE,
>  	SCAN_PTE_NON_PRESENT,
> +	SCAN_PTE_UFFD_WP,
>  	SCAN_PAGE_RO,
>  	SCAN_LACK_REFERENCED_PAGE,
>  	SCAN_PAGE_NULL,
> @@ -1123,6 +1124,15 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
>  		pte_t pteval = *_pte;
>  		if (is_swap_pte(pteval)) {
>  			if (++unmapped <= khugepaged_max_ptes_swap) {
> +				/*
> +				 * Always be strict with uffd-wp
> +				 * enabled swap entries.  Please see
> +				 * comment below for pte_uffd_wp().
> +				 */
> +				if (pte_swp_uffd_wp(pteval)) {
> +					result = SCAN_PTE_UFFD_WP;
> +					goto out_unmap;
> +				}
>  				continue;
>  			} else {
>  				result = SCAN_EXCEED_SWAP_PTE;
> @@ -1142,6 +1152,19 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
>  			result = SCAN_PTE_NON_PRESENT;
>  			goto out_unmap;
>  		}
> +		if (pte_uffd_wp(pteval)) {
> +			/*
> +			 * Don't collapse the page if any of the small
> +			 * PTEs are armed with uffd write protection.
> +			 * Here we can also mark the new huge pmd as
> +			 * write protected if any of the small ones is
> +			 * marked but that could bring uknown
> +			 * userfault messages that falls outside of
> +			 * the registered range.  So, just be simple.
> +			 */
> +			result = SCAN_PTE_UFFD_WP;
> +			goto out_unmap;
> +		}
>  		if (pte_write(pteval))
>  			writable = true;
> 
> -- 
> 2.17.1
> 
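
For anyone following along, here is a minimal userspace sketch (my own, not
part of this patch) of how a single small PTE inside a PMD-sized range ends
up uffd-wp armed. This is exactly the per-4K state that a collapsed huge PMD
could not preserve, hence the SCAN_PTE_UFFD_WP bail-out above. It assumes the
uffd-wp uapi (UFFDIO_REGISTER_MODE_WP, UFFDIO_WRITEPROTECT) introduced
earlier in this series; error handling and the 2MB-alignment/MADV_HUGEPAGE
details are omitted for brevity:

/*
 * Sketch only: populate a PMD-sized anonymous region, then uffd-wp a
 * single 4K page inside it.
 */
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/userfaultfd.h>

int main(void)
{
	size_t len = 2UL << 20;		/* one PMD-sized region */
	char *area = mmap(NULL, len, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	memset(area, 1, len);		/* fault in all the small PTEs */

	int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
	struct uffdio_api api = { .api = UFFD_API };
	ioctl(uffd, UFFDIO_API, &api);

	struct uffdio_register reg = {
		.range = { .start = (unsigned long)area, .len = len },
		.mode  = UFFDIO_REGISTER_MODE_WP,
	};
	ioctl(uffd, UFFDIO_REGISTER, &reg);

	/* Write-protect only the first 4K page of the 2M range. */
	struct uffdio_writeprotect wp = {
		.range = { .start = (unsigned long)area, .len = 4096 },
		.mode  = UFFDIO_WRITEPROTECT_MODE_WP,
	};
	ioctl(uffd, UFFDIO_WRITEPROTECT, &wp);

	/*
	 * From this point on that one PTE carries the uffd-wp bit, so
	 * khugepaged_scan_pmd() reports SCAN_PTE_UFFD_WP instead of
	 * merging the small PTEs into a huge PMD.
	 */
	pause();
	return 0;
}

With just that one page write-protected, the scan hits pte_uffd_wp() (or
pte_swp_uffd_wp() if the page has been swapped out) and the "pte_uffd_wp"
reason shows up in the huge_memory tracepoint instead of a collapse.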