From: Mike Rapoport <rppt@kernel.org>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Albert Ou <aou@eecs.berkeley.edu>,
	Andy Lutomirski <luto@kernel.org>,
	Benjamin Herrenschmidt <benh@kernel.crashing.org>,
	Borislav Petkov <bp@alien8.de>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Christoph Lameter <cl@linux.com>,
	"David S. Miller" <davem@davemloft.net>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	David Hildenbrand <david@redhat.com>,
	David Rientjes <rientjes@google.com>,
	"Edgecombe, Rick P" <rick.p.edgecombe@intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	Ingo Molnar <mingo@redhat.com>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	"Kirill A. Shutemov" <kirill@shutemov.name>,
	Len Brown <len.brown@intel.com>,
	Michael Ellerman <mpe@ellerman.id.au>,
	Mike Rapoport <rppt@kernel.org>,
	Mike Rapoport <rppt@linux.ibm.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	Paul Mackerras <paulus@samba.org>,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Pavel Machek <pavel@ucw.cz>, Pekka Enberg <penberg@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	"Rafael J. Wysocki" <rjw@rjwysocki.net>,
	Thomas Gleixner <tglx@linutronix.de>,
	Vasily Gorbik <gor@linux.ibm.com>, Will Deacon <will@kernel.org>,
	linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-pm@vger.kernel.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	sparclinux@vger.kernel.org, x86@kernel.org
Subject: [PATCH 2/4] PM: hibernate: improve robustness of mapping pages in the direct map
Date: Sun, 25 Oct 2020 12:15:53 +0200	[thread overview]
Message-ID: <20201025101555.3057-3-rppt@kernel.org> (raw)
In-Reply-To: <20201025101555.3057-1-rppt@kernel.org>

From: Mike Rapoport <rppt@linux.ibm.com>

When DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP is enabled, a page may
not be present in the direct map and has to be explicitly mapped before
it can be copied.

On arm64 it is possible that a page is removed from the direct map with
set_direct_map_invalid_noflush(), but __kernel_map_pages() will refuse
to map this page back if DEBUG_PAGEALLOC is disabled.
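
For illustration only, the failing flow before this patch (taken from
the lines removed in safe_copy_page() below) looks roughly like this;
the local kernel_map_pages() wrapper only forwards to
__kernel_map_pages(), so when that call does nothing the page stays
unmapped and do_copy_page() then touches an unmapped address:

	kernel_map_pages(s_page, 1, 1);            /* may be a no-op */
	do_copy_page(dst, page_address(s_page));   /* page may still be unmapped */
	kernel_map_pages(s_page, 1, 0);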

Explicitly use set_direct_map_{default,invalid}_noflush() for the
ARCH_HAS_SET_DIRECT_MAP case and debug_pagealloc_map_pages() for the
DEBUG_PAGEALLOC case. Since the set_direct_map_*_noflush() helpers do
not flush the TLB, hibernate_map_page() flushes the affected range
explicitly.

While at it, rename kernel_map_pages() to hibernate_map_page() and drop
the numpages parameter.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
---
 kernel/power/snapshot.c | 29 +++++++++++++++++++----------
 1 file changed, 19 insertions(+), 10 deletions(-)

diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index fa499466f645..ecb7b32ce77c 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -76,16 +76,25 @@ static inline void hibernate_restore_protect_page(void *page_address) {}
 static inline void hibernate_restore_unprotect_page(void *page_address) {}
 #endif /* CONFIG_STRICT_KERNEL_RWX  && CONFIG_ARCH_HAS_SET_MEMORY */
 
-#if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_ARCH_HAS_SET_DIRECT_MAP)
-static inline void
-kernel_map_pages(struct page *page, int numpages, int enable)
+static inline void hibernate_map_page(struct page *page, int enable)
 {
-	__kernel_map_pages(page, numpages, enable);
+	if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
+		unsigned long addr = (unsigned long)page_address(page);
+		int ret;
+
+		if (enable)
+			ret = set_direct_map_default_noflush(page);
+		else
+			ret = set_direct_map_invalid_noflush(page);
+
+		if (WARN_ON(ret))
+			return;
+
+		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+	} else {
+		debug_pagealloc_map_pages(page, 1, enable);
+	}
 }
-#else
-static inline void
-kernel_map_pages(struct page *page, int numpages, int enable) {}
-#endif
 
 static int swsusp_page_is_free(struct page *);
 static void swsusp_set_page_forbidden(struct page *);
@@ -1366,9 +1375,9 @@ static void safe_copy_page(void *dst, struct page *s_page)
 	if (kernel_page_present(s_page)) {
 		do_copy_page(dst, page_address(s_page));
 	} else {
-		kernel_map_pages(s_page, 1, 1);
+		hibernate_map_page(s_page, 1);
 		do_copy_page(dst, page_address(s_page));
-		kernel_map_pages(s_page, 1, 0);
+		hibernate_map_page(s_page, 0);
 	}
 }
 
-- 
2.28.0


Thread overview:
2020-10-25 10:15 [PATCH 0/4] arch, mm: improve robustness of direct map manipulation Mike Rapoport
2020-10-25 10:15 ` [PATCH 1/4] mm: introduce debug_pagealloc_map_pages() helper Mike Rapoport
2020-10-26 11:05   ` David Hildenbrand
2020-10-26 11:54     ` Mike Rapoport
2020-10-26 11:55       ` David Hildenbrand
2020-10-25 10:15 ` [PATCH 2/4] PM: hibernate: improve robustness of mapping pages in the direct map Mike Rapoport [this message]
2020-10-26  0:38   ` Edgecombe, Rick P
2020-10-26  9:15     ` Mike Rapoport
2020-10-26 18:57       ` Edgecombe, Rick P
2020-10-27  8:49         ` Mike Rapoport
2020-10-27 22:44           ` Edgecombe, Rick P
2020-10-28  9:41             ` Mike Rapoport
2020-10-27  1:10       ` Edgecombe, Rick P
2020-10-28 21:15   ` Edgecombe, Rick P
2020-10-29  7:54     ` Mike Rapoport
2020-10-29 23:19       ` Edgecombe, Rick P
2020-11-01 17:02         ` Mike Rapoport
2020-10-25 10:15 ` [PATCH 3/4] arch, mm: restore dependency of __kernel_map_pages() of DEBUG_PAGEALLOC Mike Rapoport
2020-10-25 10:15 ` [PATCH 4/4] arch, mm: make kernel_page_present() always available Mike Rapoport
2020-10-26  0:54   ` Edgecombe, Rick P
2020-10-26  9:31     ` Mike Rapoport
2020-10-26  1:13 ` [PATCH 0/4] arch, mm: improve robustness of direct map manipulation Edgecombe, Rick P
2020-10-26  9:05   ` Mike Rapoport
2020-10-26 18:05     ` Edgecombe, Rick P
2020-10-27  8:38       ` Mike Rapoport
2020-10-27  8:46         ` David Hildenbrand
2020-10-27  9:47           ` Mike Rapoport
2020-10-27 10:34             ` David Hildenbrand
2020-10-28 11:09           ` Mike Rapoport
2020-10-28 11:17             ` David Hildenbrand
2020-10-28 12:22               ` Mike Rapoport
2020-10-28 18:31             ` Edgecombe, Rick P
2020-10-28 11:20         ` Will Deacon
2020-10-28 11:30           ` Mike Rapoport
2020-10-28 21:03             ` Edgecombe, Rick P
2020-10-29  8:12               ` Mike Rapoport
2020-10-29 23:19                 ` Edgecombe, Rick P
2020-10-29  8:15 ` David Hildenbrand
