From mboxrd@z Thu Jan  1 00:00:00 1970
Subject: Re: [PATCH 1/4] mm: introduce debug_pagealloc_map_pages() helper
To: Mike Rapoport
Cc: Andrew Morton, Albert Ou, Andy Lutomirski, Benjamin Herrenschmidt,
 Borislav Petkov, Catalin Marinas, Christian Borntraeger,
 Christoph Lameter, "David S.
Miller" , Dave Hansen , David Rientjes , "Edgecombe, Rick P" , "H. Peter Anvin" , Heiko Carstens , Ingo Molnar , Joonsoo Kim , "Kirill A. Shutemov" , Len Brown , Michael Ellerman , Mike Rapoport , Palmer Dabbelt , Paul Mackerras , Paul Walmsley , Pavel Machek , Pekka Enberg , Peter Zijlstra , "Rafael J. Wysocki" , Thomas Gleixner , Vasily Gorbik , Will Deacon , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-pm@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org, x86@kernel.org References: <20201025101555.3057-1-rppt@kernel.org> <20201025101555.3057-2-rppt@kernel.org> <8720c067-7dc5-2b02-918b-e54dd642bfd6@redhat.com> <20201026115443.GF1154158@kernel.org> From: David Hildenbrand Organization: Red Hat GmbH Message-ID: <67e342cd-5ac4-4ba5-77f7-946c9415534e@redhat.com> Date: Mon, 26 Oct 2020 12:55:37 +0100 User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.3.1 MIME-Version: 1.0 In-Reply-To: <20201026115443.GF1154158@kernel.org> Content-Type: text/plain; charset=utf-8 Content-Language: en-US Content-Transfer-Encoding: 7bit X-Scanned-By: MIMEDefang 2.79 on 10.5.11.13 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On 26.10.20 12:54, Mike Rapoport wrote: > On Mon, Oct 26, 2020 at 12:05:13PM +0100, David Hildenbrand wrote: >> On 25.10.20 11:15, Mike Rapoport wrote: >>> From: Mike Rapoport >>> >>> When CONFIG_DEBUG_PAGEALLOC is enabled, it unmaps pages from the >>> kernel direct mapping after free_pages(). The pages than need to be >>> mapped back before they could be used. Theese mapping operations use >>> __kernel_map_pages() guarded with with debug_pagealloc_enabled(). 
>>>
>>> The only place that calls __kernel_map_pages() without checking
>>> whether DEBUG_PAGEALLOC is enabled is the hibernation code that
>>> presumes availability of this function when ARCH_HAS_SET_DIRECT_MAP
>>> is set. Still, on arm64, __kernel_map_pages() will bail out when
>>> DEBUG_PAGEALLOC is not enabled but set_direct_map_invalid_noflush()
>>> may render some pages not present in the direct map and hibernation
>>> code won't be able to save such pages.
>>>
>>> To make page allocation debugging and hibernation interaction more
>>> robust, the dependency on DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP
>>> has to be made more explicit.
>>>
>>> Start with combining the guard condition and the call to
>>> __kernel_map_pages() into a single debug_pagealloc_map_pages()
>>> function to emphasize that __kernel_map_pages() should not be called
>>> without DEBUG_PAGEALLOC and use this new function to map/unmap pages
>>> when page allocation debug is enabled.
>>>
>>> As the only remaining user of kernel_map_pages() is the hibernation
>>> code, mode that function into kernel/power/snapshot.c closer to a
>>> caller.
>>
>> s/mode/move/
>>
>>>
>>> Signed-off-by: Mike Rapoport
>>> ---
>>>  include/linux/mm.h      | 16 +++++++---------
>>>  kernel/power/snapshot.c | 11 +++++++++++
>>>  mm/memory_hotplug.c     |  3 +--
>>>  mm/page_alloc.c         |  6 ++----
>>>  mm/slab.c               |  8 +++-----
>>>  5 files changed, 24 insertions(+), 20 deletions(-)
>>>
>>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>>> index ef360fe70aaf..14e397f3752c 100644
>>> --- a/include/linux/mm.h
>>> +++ b/include/linux/mm.h
>>> @@ -2927,21 +2927,19 @@ static inline bool debug_pagealloc_enabled_static(void)
>>>  #if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_ARCH_HAS_SET_DIRECT_MAP)
>>>  extern void __kernel_map_pages(struct page *page, int numpages, int enable);
>>>
>>> -/*
>>> - * When called in DEBUG_PAGEALLOC context, the call should most likely be
>>> - * guarded by debug_pagealloc_enabled() or debug_pagealloc_enabled_static()
>>> - */
>>> -static inline void
>>> -kernel_map_pages(struct page *page, int numpages, int enable)
>>> +static inline void debug_pagealloc_map_pages(struct page *page,
>>> +					     int numpages, int enable)
>>>  {
>>> -	__kernel_map_pages(page, numpages, enable);
>>> +	if (debug_pagealloc_enabled_static())
>>> +		__kernel_map_pages(page, numpages, enable);
>>>  }
>>> +
>>>  #ifdef CONFIG_HIBERNATION
>>>  extern bool kernel_page_present(struct page *page);
>>>  #endif /* CONFIG_HIBERNATION */
>>>  #else /* CONFIG_DEBUG_PAGEALLOC || CONFIG_ARCH_HAS_SET_DIRECT_MAP */
>>> -static inline void
>>> -kernel_map_pages(struct page *page, int numpages, int enable) {}
>>> +static inline void debug_pagealloc_map_pages(struct page *page,
>>> +					     int numpages, int enable) {}
>>>  #ifdef CONFIG_HIBERNATION
>>>  static inline bool kernel_page_present(struct page *page) { return true; }
>>>  #endif /* CONFIG_HIBERNATION */
>>> diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
>>> index 46b1804c1ddf..fa499466f645 100644
>>> --- a/kernel/power/snapshot.c
>>> +++ b/kernel/power/snapshot.c
>>> @@ -76,6 +76,17 @@ static inline void hibernate_restore_protect_page(void *page_address) {}
>>>  static inline void hibernate_restore_unprotect_page(void *page_address) {}
>>>  #endif /* CONFIG_STRICT_KERNEL_RWX && CONFIG_ARCH_HAS_SET_MEMORY */
>>>
>>> +#if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_ARCH_HAS_SET_DIRECT_MAP)
>>> +static inline void
>>> +kernel_map_pages(struct page *page, int numpages, int enable)
>>> +{
>>> +	__kernel_map_pages(page, numpages, enable);
>>> +}
>>> +#else
>>> +static inline void
>>> +kernel_map_pages(struct page *page, int numpages, int enable) {}
>>> +#endif
>>> +
>>
>> That change should go into a separate patch.
>
> Hmm, I believe you refer to moving kernel_map_pages() to snapshot.c,
> right?

Sorry, yes!

-- 
Thanks,

David / dhildenb