* [PATCH v2 0/4] Fix type confusion in page_table_check
@ 2023-05-15 13:09 Ruihan Li
  2023-05-15 13:09 ` [PATCH v2 1/4] usb: usbfs: Enforce page requirements for mmap Ruihan Li
                   ` (3 more replies)
  0 siblings, 4 replies; 16+ messages in thread
From: Ruihan Li @ 2023-05-15 13:09 UTC (permalink / raw)
  To: linux-mm, linux-usb
  Cc: linux-kernel, Pasha Tatashin, David Hildenbrand, Matthew Wilcox,
	Andrew Morton, Christoph Hellwig, Alan Stern, Greg Kroah-Hartman,
	Ruihan Li

Recently, syzbot reported [1] ("kernel BUG in page_table_check_clear").
The root cause is that usbdev_mmap calls remap_pfn_range on kmalloc'ed
memory, which leads to type confusion between struct page and slab in
page_table_check. This series of patches fixes the usb side by avoiding
mapping slab pages into userspace, and fixes the mm side by enforcing
that all user-accessible pages are not slab pages. A more detailed
analysis and some discussion of how to fix the problem can also be found
in [1].

 [1] https://lore.kernel.org/lkml/20230507135844.1231056-1-lrh2000@pku.edu.cn/T/

Changes since v1:
  * Fix inconsistent coding styles. (Alan Stern)
  * Relax !DEVMEM requirements to EXCLUSIVE_SYSTEM_RAM, which is
    equivalent to !DEVMEM || STRICT_DEVMEM. (David Hildenbrand)
  * A few random tweaks in commit messages and code comments, none of
    them major.
Link to v1:
  https://lore.kernel.org/lkml/20230510085527.57953-1-lrh2000@pku.edu.cn/T/

Cc: Matthew Wilcox <willy@infradead.org>
Cc: David Hildenbrand <david@redhat.com>
Cc: Pasha Tatashin <pasha.tatashin@soleen.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Ruihan Li (4):
  usb: usbfs: Enforce page requirements for mmap
  usb: usbfs: Use consistent mmap functions
  mm: page_table_check: Make it dependent on EXCLUSIVE_SYSTEM_RAM
  mm: page_table_check: Ensure user pages are not slab pages

 Documentation/mm/page_table_check.rst | 18 ++++++++++++
 drivers/usb/core/buffer.c             | 41 +++++++++++++++++++++++++++
 drivers/usb/core/devio.c              | 20 +++++++++----
 include/linux/page-flags.h            |  6 ++++
 include/linux/usb/hcd.h               |  5 ++++
 mm/Kconfig.debug                      |  2 +-
 mm/page_table_check.c                 |  6 ++++
 7 files changed, 91 insertions(+), 7 deletions(-)

-- 
2.40.1



* [PATCH v2 1/4] usb: usbfs: Enforce page requirements for mmap
  2023-05-15 13:09 [PATCH v2 0/4] Fix type confusion in page_table_check Ruihan Li
@ 2023-05-15 13:09 ` Ruihan Li
  2023-05-15 14:07   ` Alan Stern
  2023-05-17  6:22   ` Christoph Hellwig
  2023-05-15 13:09 ` [PATCH v2 2/4] usb: usbfs: Use consistent mmap functions Ruihan Li
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 16+ messages in thread
From: Ruihan Li @ 2023-05-15 13:09 UTC (permalink / raw)
  To: linux-mm, linux-usb
  Cc: linux-kernel, Pasha Tatashin, David Hildenbrand, Matthew Wilcox,
	Andrew Morton, Christoph Hellwig, Alan Stern, Greg Kroah-Hartman,
	Ruihan Li, syzbot+fcf1a817ceb50935ce99, stable

The current implementation of usbdev_mmap uses usb_alloc_coherent to
allocate memory pages that will later be mapped into the user space.
Meanwhile, usb_alloc_coherent employs three different methods to
allocate memory, as outlined below:
 * If hcd->localmem_pool is non-null, it uses gen_pool_dma_alloc to
   allocate memory;
 * If DMA is not available, it uses kmalloc to allocate memory;
 * Otherwise, it uses dma_alloc_coherent.

However, it should be noted that gen_pool_dma_alloc does not guarantee
that the resulting memory will be page-aligned. Furthermore, trying to
map slab pages (i.e., memory allocated by kmalloc) into the user space
is not reasonable and can lead to problems, such as a type confusion bug
when PAGE_TABLE_CHECK=y [1].

To address these issues, this patch introduces hcd_buffer_alloc_pages,
which resolves both problems. Specifically, hcd_buffer_alloc_pages uses
gen_pool_dma_alloc_align instead of gen_pool_dma_alloc to ensure that the
memory is page-aligned, and it replaces kmalloc by allocating pages
directly with __get_free_pages.

Reported-by: syzbot+fcf1a817ceb50935ce99@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/lkml/000000000000258e5e05fae79fc1@google.com/ [1]
Fixes: f7d34b445abc ("USB: Add support for usbfs zerocopy.")
Fixes: ff2437befd8f ("usb: host: Fix excessive alignment restriction for local memory allocations")
Cc: stable@vger.kernel.org
Signed-off-by: Ruihan Li <lrh2000@pku.edu.cn>
---
 drivers/usb/core/buffer.c | 41 +++++++++++++++++++++++++++++++++++++++
 drivers/usb/core/devio.c  |  9 +++++----
 include/linux/usb/hcd.h   |  5 +++++
 3 files changed, 51 insertions(+), 4 deletions(-)

diff --git a/drivers/usb/core/buffer.c b/drivers/usb/core/buffer.c
index fbb087b72..268ccbec8 100644
--- a/drivers/usb/core/buffer.c
+++ b/drivers/usb/core/buffer.c
@@ -172,3 +172,44 @@ void hcd_buffer_free(
 	}
 	dma_free_coherent(hcd->self.sysdev, size, addr, dma);
 }
+
+void *hcd_buffer_alloc_pages(struct usb_hcd *hcd,
+		size_t size, gfp_t mem_flags, dma_addr_t *dma)
+{
+	if (size == 0)
+		return NULL;
+
+	if (hcd->localmem_pool)
+		return gen_pool_dma_alloc_align(hcd->localmem_pool,
+				size, dma, PAGE_SIZE);
+
+	/* some USB hosts just use PIO */
+	if (!hcd_uses_dma(hcd)) {
+		*dma = DMA_MAPPING_ERROR;
+		return (void *)__get_free_pages(mem_flags,
+				get_order(size));
+	}
+
+	return dma_alloc_coherent(hcd->self.sysdev,
+			size, dma, mem_flags);
+}
+
+void hcd_buffer_free_pages(struct usb_hcd *hcd,
+		size_t size, void *addr, dma_addr_t dma)
+{
+	if (!addr)
+		return;
+
+	if (hcd->localmem_pool) {
+		gen_pool_free(hcd->localmem_pool,
+				(unsigned long)addr, size);
+		return;
+	}
+
+	if (!hcd_uses_dma(hcd)) {
+		free_pages((unsigned long)addr, get_order(size));
+		return;
+	}
+
+	dma_free_coherent(hcd->self.sysdev, size, addr, dma);
+}
diff --git a/drivers/usb/core/devio.c b/drivers/usb/core/devio.c
index e501a03d6..3936ca7f7 100644
--- a/drivers/usb/core/devio.c
+++ b/drivers/usb/core/devio.c
@@ -186,6 +186,7 @@ static int connected(struct usb_dev_state *ps)
 static void dec_usb_memory_use_count(struct usb_memory *usbm, int *count)
 {
 	struct usb_dev_state *ps = usbm->ps;
+	struct usb_hcd *hcd = bus_to_hcd(ps->dev->bus);
 	unsigned long flags;
 
 	spin_lock_irqsave(&ps->lock, flags);
@@ -194,8 +195,8 @@ static void dec_usb_memory_use_count(struct usb_memory *usbm, int *count)
 		list_del(&usbm->memlist);
 		spin_unlock_irqrestore(&ps->lock, flags);
 
-		usb_free_coherent(ps->dev, usbm->size, usbm->mem,
-				usbm->dma_handle);
+		hcd_buffer_free_pages(hcd, usbm->size,
+				usbm->mem, usbm->dma_handle);
 		usbfs_decrease_memory_usage(
 			usbm->size + sizeof(struct usb_memory));
 		kfree(usbm);
@@ -247,8 +248,8 @@ static int usbdev_mmap(struct file *file, struct vm_area_struct *vma)
 		goto error_decrease_mem;
 	}
 
-	mem = usb_alloc_coherent(ps->dev, size, GFP_USER | __GFP_NOWARN,
-			&dma_handle);
+	mem = hcd_buffer_alloc_pages(hcd,
+			size, GFP_USER | __GFP_NOWARN, &dma_handle);
 	if (!mem) {
 		ret = -ENOMEM;
 		goto error_free_usbm;
diff --git a/include/linux/usb/hcd.h b/include/linux/usb/hcd.h
index 094c77eaf..0c7eff91a 100644
--- a/include/linux/usb/hcd.h
+++ b/include/linux/usb/hcd.h
@@ -501,6 +501,11 @@ void *hcd_buffer_alloc(struct usb_bus *bus, size_t size,
 void hcd_buffer_free(struct usb_bus *bus, size_t size,
 	void *addr, dma_addr_t dma);
 
+void *hcd_buffer_alloc_pages(struct usb_hcd *hcd,
+		size_t size, gfp_t mem_flags, dma_addr_t *dma);
+void hcd_buffer_free_pages(struct usb_hcd *hcd,
+		size_t size, void *addr, dma_addr_t dma);
+
 /* generic bus glue, needed for host controllers that don't use PCI */
 extern irqreturn_t usb_hcd_irq(int irq, void *__hcd);
 
-- 
2.40.1



* [PATCH v2 2/4] usb: usbfs: Use consistent mmap functions
  2023-05-15 13:09 [PATCH v2 0/4] Fix type confusion in page_table_check Ruihan Li
  2023-05-15 13:09 ` [PATCH v2 1/4] usb: usbfs: Enforce page requirements for mmap Ruihan Li
@ 2023-05-15 13:09 ` Ruihan Li
  2023-05-15 16:07   ` David Laight
  2023-05-15 13:09 ` [PATCH v2 3/4] mm: page_table_check: Make it dependent on EXCLUSIVE_SYSTEM_RAM Ruihan Li
  2023-05-15 13:09 ` [PATCH v2 4/4] mm: page_table_check: Ensure user pages are not slab pages Ruihan Li
  3 siblings, 1 reply; 16+ messages in thread
From: Ruihan Li @ 2023-05-15 13:09 UTC (permalink / raw)
  To: linux-mm, linux-usb
  Cc: linux-kernel, Pasha Tatashin, David Hildenbrand, Matthew Wilcox,
	Andrew Morton, Christoph Hellwig, Alan Stern, Greg Kroah-Hartman,
	Ruihan Li, stable

When hcd->localmem_pool is non-null, localmem_pool is used to allocate
DMA memory. In this case, the dma address will be properly returned (in
dma_handle), and dma_mmap_coherent should be used to map this memory
into the user space. However, the current implementation uses
remap_pfn_range, which is meant for mapping normal pages.

Instead of repeating the logic in the memory allocation function, this
patch introduces a more robust solution. Here, the type of allocated
memory is checked by testing whether dma_handle is properly set. If
dma_handle is properly returned, it means some DMA pages are allocated
and dma_mmap_coherent should be used to map them. Otherwise, normal
pages are allocated and remap_pfn_range should be called. This ensures
that the correct mmap functions are used consistently, independently of
the logic details that determine which type of memory gets allocated.

Fixes: a0e710a7def4 ("USB: usbfs: fix mmap dma mismatch")
Cc: stable@vger.kernel.org
Signed-off-by: Ruihan Li <lrh2000@pku.edu.cn>
---
 drivers/usb/core/devio.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/drivers/usb/core/devio.c b/drivers/usb/core/devio.c
index 3936ca7f7..fcf68818e 100644
--- a/drivers/usb/core/devio.c
+++ b/drivers/usb/core/devio.c
@@ -235,7 +235,7 @@ static int usbdev_mmap(struct file *file, struct vm_area_struct *vma)
 	size_t size = vma->vm_end - vma->vm_start;
 	void *mem;
 	unsigned long flags;
-	dma_addr_t dma_handle;
+	dma_addr_t dma_handle = DMA_MAPPING_ERROR;
 	int ret;
 
 	ret = usbfs_increase_memory_usage(size + sizeof(struct usb_memory));
@@ -265,7 +265,14 @@ static int usbdev_mmap(struct file *file, struct vm_area_struct *vma)
 	usbm->vma_use_count = 1;
 	INIT_LIST_HEAD(&usbm->memlist);
 
-	if (hcd->localmem_pool || !hcd_uses_dma(hcd)) {
+	/*
+	 * In DMA-unavailable cases, hcd_buffer_alloc_pages allocates
+	 * normal pages and assigns DMA_MAPPING_ERROR to dma_handle. Check
+	 * whether we are in such cases, and then use remap_pfn_range (or
+	 * dma_mmap_coherent) to map normal (or DMA) pages into the user
+	 * space, respectively.
+	 */
+	if (dma_handle == DMA_MAPPING_ERROR) {
 		if (remap_pfn_range(vma, vma->vm_start,
 				    virt_to_phys(usbm->mem) >> PAGE_SHIFT,
 				    size, vma->vm_page_prot) < 0) {
-- 
2.40.1



* [PATCH v2 3/4] mm: page_table_check: Make it dependent on EXCLUSIVE_SYSTEM_RAM
  2023-05-15 13:09 [PATCH v2 0/4] Fix type confusion in page_table_check Ruihan Li
  2023-05-15 13:09 ` [PATCH v2 1/4] usb: usbfs: Enforce page requirements for mmap Ruihan Li
  2023-05-15 13:09 ` [PATCH v2 2/4] usb: usbfs: Use consistent mmap functions Ruihan Li
@ 2023-05-15 13:09 ` Ruihan Li
  2023-05-15 16:36   ` Pasha Tatashin
  2023-05-16 12:55   ` David Hildenbrand
  2023-05-15 13:09 ` [PATCH v2 4/4] mm: page_table_check: Ensure user pages are not slab pages Ruihan Li
  3 siblings, 2 replies; 16+ messages in thread
From: Ruihan Li @ 2023-05-15 13:09 UTC (permalink / raw)
  To: linux-mm, linux-usb
  Cc: linux-kernel, Pasha Tatashin, David Hildenbrand, Matthew Wilcox,
	Andrew Morton, Christoph Hellwig, Alan Stern, Greg Kroah-Hartman,
	Ruihan Li, stable

Without EXCLUSIVE_SYSTEM_RAM, users are allowed to map arbitrary
physical memory regions into the userspace via /dev/mem. At the same
time, pages may change their properties (e.g., from anonymous pages to
named pages) while they are still being mapped in the userspace, leading
to "corruption" detected by the page table check.

To avoid these false positives, this patch makes PAGE_TABLE_CHECK
depend on EXCLUSIVE_SYSTEM_RAM. This dependency is understandable
because PAGE_TABLE_CHECK is a hardening technique but /dev/mem without
STRICT_DEVMEM (i.e., !EXCLUSIVE_SYSTEM_RAM) is itself a security
problem.

Even with EXCLUSIVE_SYSTEM_RAM, I/O pages may still be allowed to be
mapped via /dev/mem. However, these pages are always considered named
pages, so they won't break the logic used in the page table check.

Cc: <stable@vger.kernel.org> # 5.17
Signed-off-by: Ruihan Li <lrh2000@pku.edu.cn>
---
 Documentation/mm/page_table_check.rst | 19 +++++++++++++++++++
 mm/Kconfig.debug                      |  1 +
 2 files changed, 20 insertions(+)

diff --git a/Documentation/mm/page_table_check.rst b/Documentation/mm/page_table_check.rst
index cfd8f4117..c12838ce6 100644
--- a/Documentation/mm/page_table_check.rst
+++ b/Documentation/mm/page_table_check.rst
@@ -52,3 +52,22 @@ Build kernel with:
 
 Optionally, build kernel with PAGE_TABLE_CHECK_ENFORCED in order to have page
 table support without extra kernel parameter.
+
+Implementation notes
+====================
+
+We specifically decided not to use VMA information in order to avoid relying on
+MM states (except for limited "struct page" info). The page table check is a
+state machine, separate from the Linux MM, that verifies that user-accessible
+pages are not falsely shared.
+
+PAGE_TABLE_CHECK depends on EXCLUSIVE_SYSTEM_RAM. The reason is that without
+EXCLUSIVE_SYSTEM_RAM, users are allowed to map arbitrary physical memory
+regions into the userspace via /dev/mem. At the same time, pages may change
+their properties (e.g., from anonymous pages to named pages) while they are
+still being mapped in the userspace, leading to "corruption" detected by the
+page table check.
+
+Even with EXCLUSIVE_SYSTEM_RAM, I/O pages may still be allowed to be mapped
+via /dev/mem. However, these pages are always considered named pages, so they
+won't break the logic used in the page table check.
diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
index a925415b4..018a5bd2f 100644
--- a/mm/Kconfig.debug
+++ b/mm/Kconfig.debug
@@ -98,6 +98,7 @@ config PAGE_OWNER
 config PAGE_TABLE_CHECK
 	bool "Check for invalid mappings in user page tables"
 	depends on ARCH_SUPPORTS_PAGE_TABLE_CHECK
+	depends on EXCLUSIVE_SYSTEM_RAM
 	select PAGE_EXTENSION
 	help
 	  Check that anonymous page is not being mapped twice with read write
-- 
2.40.1



* [PATCH v2 4/4] mm: page_table_check: Ensure user pages are not slab pages
  2023-05-15 13:09 [PATCH v2 0/4] Fix type confusion in page_table_check Ruihan Li
                   ` (2 preceding siblings ...)
  2023-05-15 13:09 ` [PATCH v2 3/4] mm: page_table_check: Make it dependent on EXCLUSIVE_SYSTEM_RAM Ruihan Li
@ 2023-05-15 13:09 ` Ruihan Li
  2023-05-15 16:28   ` Pasha Tatashin
  3 siblings, 1 reply; 16+ messages in thread
From: Ruihan Li @ 2023-05-15 13:09 UTC (permalink / raw)
  To: linux-mm, linux-usb
  Cc: linux-kernel, Pasha Tatashin, David Hildenbrand, Matthew Wilcox,
	Andrew Morton, Christoph Hellwig, Alan Stern, Greg Kroah-Hartman,
	Ruihan Li, syzbot+fcf1a817ceb50935ce99, stable

The current uses of PageAnon in page table check functions can lead to
type confusion bugs between struct page and slab [1], if slab pages are
accidentally mapped into the user space. This is because slab reuses the
bits in struct page to store its internal states, which renders PageAnon
ineffective on slab pages.

Since slab pages are not expected to be mapped into the user space, this
patch adds BUG_ON(PageSlab(page)) checks to make sure that slab pages
are never inadvertently mapped. If such a mapping is ever detected, it
indicates a bug elsewhere in the kernel.

Reported-by: syzbot+fcf1a817ceb50935ce99@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/lkml/000000000000258e5e05fae79fc1@google.com/ [1]
Fixes: df4e817b7108 ("mm: page table check")
Cc: <stable@vger.kernel.org> # 5.17
Signed-off-by: Ruihan Li <lrh2000@pku.edu.cn>
---
 include/linux/page-flags.h | 6 ++++++
 mm/page_table_check.c      | 6 ++++++
 2 files changed, 12 insertions(+)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 1c68d67b8..92a2063a0 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -617,6 +617,12 @@ PAGEFLAG_FALSE(VmemmapSelfHosted, vmemmap_self_hosted)
  * Please note that, confusingly, "page_mapping" refers to the inode
  * address_space which maps the page from disk; whereas "page_mapped"
  * refers to user virtual address space into which the page is mapped.
+ *
+ * For slab pages, since slab reuses the bits in struct page to store its
+ * internal states, the page->mapping does not exist as such, nor do these
+ * flags below.  So in order to avoid testing non-existent bits, please
+ * make sure that PageSlab(page) actually evaluates to false before calling
+ * the following functions (e.g., PageAnon).  See mm/slab.h.
  */
 #define PAGE_MAPPING_ANON	0x1
 #define PAGE_MAPPING_MOVABLE	0x2
diff --git a/mm/page_table_check.c b/mm/page_table_check.c
index 25d8610c0..f2baf97d5 100644
--- a/mm/page_table_check.c
+++ b/mm/page_table_check.c
@@ -71,6 +71,8 @@ static void page_table_check_clear(struct mm_struct *mm, unsigned long addr,
 
 	page = pfn_to_page(pfn);
 	page_ext = page_ext_get(page);
+
+	BUG_ON(PageSlab(page));
 	anon = PageAnon(page);
 
 	for (i = 0; i < pgcnt; i++) {
@@ -107,6 +109,8 @@ static void page_table_check_set(struct mm_struct *mm, unsigned long addr,
 
 	page = pfn_to_page(pfn);
 	page_ext = page_ext_get(page);
+
+	BUG_ON(PageSlab(page));
 	anon = PageAnon(page);
 
 	for (i = 0; i < pgcnt; i++) {
@@ -133,6 +137,8 @@ void __page_table_check_zero(struct page *page, unsigned int order)
 	struct page_ext *page_ext;
 	unsigned long i;
 
+	BUG_ON(PageSlab(page));
+
 	page_ext = page_ext_get(page);
 	BUG_ON(!page_ext);
 	for (i = 0; i < (1ul << order); i++) {
-- 
2.40.1



* Re: [PATCH v2 1/4] usb: usbfs: Enforce page requirements for mmap
  2023-05-15 13:09 ` [PATCH v2 1/4] usb: usbfs: Enforce page requirements for mmap Ruihan Li
@ 2023-05-15 14:07   ` Alan Stern
  2023-05-17  6:22   ` Christoph Hellwig
  1 sibling, 0 replies; 16+ messages in thread
From: Alan Stern @ 2023-05-15 14:07 UTC (permalink / raw)
  To: Ruihan Li
  Cc: linux-mm, linux-usb, linux-kernel, Pasha Tatashin,
	David Hildenbrand, Matthew Wilcox, Andrew Morton,
	Christoph Hellwig, Greg Kroah-Hartman,
	syzbot+fcf1a817ceb50935ce99, stable

On Mon, May 15, 2023 at 09:09:55PM +0800, Ruihan Li wrote:
> The current implementation of usbdev_mmap uses usb_alloc_coherent to
> allocate memory pages that will later be mapped into the user space.
> Meanwhile, usb_alloc_coherent employs three different methods to
> allocate memory, as outlined below:
>  * If hcd->localmem_pool is non-null, it uses gen_pool_dma_alloc to
>    allocate memory;
>  * If DMA is not available, it uses kmalloc to allocate memory;
>  * Otherwise, it uses dma_alloc_coherent.
> 
> However, it should be noted that gen_pool_dma_alloc does not guarantee
> that the resulting memory will be page-aligned. Furthermore, trying to
> map slab pages (i.e., memory allocated by kmalloc) into the user space
> is not reasonable and can lead to problems, such as a type confusion bug
> when PAGE_TABLE_CHECK=y [1].
> 
> To address these issues, this patch introduces hcd_buffer_alloc_pages,
> which resolves both problems. Specifically, hcd_buffer_alloc_pages uses
> gen_pool_dma_alloc_align instead of gen_pool_dma_alloc to ensure that the
> memory is page-aligned, and it replaces kmalloc by allocating pages
> directly with __get_free_pages.
> 
> Reported-by: syzbot+fcf1a817ceb50935ce99@syzkaller.appspotmail.com
> Closes: https://lore.kernel.org/lkml/000000000000258e5e05fae79fc1@google.com/ [1]
> Fixes: f7d34b445abc ("USB: Add support for usbfs zerocopy.")
> Fixes: ff2437befd8f ("usb: host: Fix excessive alignment restriction for local memory allocations")
> Cc: stable@vger.kernel.org
> Signed-off-by: Ruihan Li <lrh2000@pku.edu.cn>
> ---

For parts 1/4 and 2/4:

Acked-by: Alan Stern <stern@rowland.harvard.edu>

Alan Stern


* RE: [PATCH v2 2/4] usb: usbfs: Use consistent mmap functions
  2023-05-15 13:09 ` [PATCH v2 2/4] usb: usbfs: Use consistent mmap functions Ruihan Li
@ 2023-05-15 16:07   ` David Laight
  2023-05-16 11:42     ` Ruihan Li
  0 siblings, 1 reply; 16+ messages in thread
From: David Laight @ 2023-05-15 16:07 UTC (permalink / raw)
  To: 'Ruihan Li', linux-mm, linux-usb
  Cc: linux-kernel, Pasha Tatashin, David Hildenbrand, Matthew Wilcox,
	Andrew Morton, Christoph Hellwig, Alan Stern, Greg Kroah-Hartman,
	stable

From: Ruihan Li
> Sent: 15 May 2023 14:10
> 
> When hcd->localmem_pool is non-null, localmem_pool is used to allocate
> DMA memory. In this case, the dma address will be properly returned (in
> dma_handle), and dma_mmap_coherent should be used to map this memory
> into the user space. However, the current implementation uses
> remap_pfn_range, which is meant for mapping normal pages.

I've an (out of tree) driver that does the same.
Am I right in thinking that this does still work?

I can't change the driver to use dma_map_coherent() because it
doesn't let me mmap from a page offset within a 16k allocation.

In this case the memory area is an 8MB shared transfer area to an
FPGA PCIe target sparsely filled with 16kB allocation (max 512 allocs).
The discontinuous physical memory blocks appear as logically
contiguous to both the FPGA logic and when mapped to userspace.
(But not to driver code.)

I don't really want to expose the 16k allocation size to userspace.
If we need more than 8MB then the allocation size would need
changing.

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)



* Re: [PATCH v2 4/4] mm: page_table_check: Ensure user pages are not slab pages
  2023-05-15 13:09 ` [PATCH v2 4/4] mm: page_table_check: Ensure user pages are not slab pages Ruihan Li
@ 2023-05-15 16:28   ` Pasha Tatashin
  2023-05-16 11:51     ` Ruihan Li
  0 siblings, 1 reply; 16+ messages in thread
From: Pasha Tatashin @ 2023-05-15 16:28 UTC (permalink / raw)
  To: Ruihan Li
  Cc: linux-mm, linux-usb, linux-kernel, David Hildenbrand,
	Matthew Wilcox, Andrew Morton, Christoph Hellwig, Alan Stern,
	Greg Kroah-Hartman, syzbot+fcf1a817ceb50935ce99, stable

On Mon, May 15, 2023 at 9:10 AM Ruihan Li <lrh2000@pku.edu.cn> wrote:
>
> The current uses of PageAnon in page table check functions can lead to
> type confusion bugs between struct page and slab [1], if slab pages are
> accidentally mapped into the user space. This is because slab reuses the
> bits in struct page to store its internal states, which renders PageAnon
> ineffective on slab pages.
>
> Since slab pages are not expected to be mapped into the user space, this
> patch adds BUG_ON(PageSlab(page)) checks to make sure that slab pages
> are never inadvertently mapped. If such a mapping is ever detected, it
> indicates a bug elsewhere in the kernel.
>
> Reported-by: syzbot+fcf1a817ceb50935ce99@syzkaller.appspotmail.com
> Closes: https://lore.kernel.org/lkml/000000000000258e5e05fae79fc1@google.com/ [1]
> Fixes: df4e817b7108 ("mm: page table check")
> Cc: <stable@vger.kernel.org> # 5.17
> Signed-off-by: Ruihan Li <lrh2000@pku.edu.cn>

Acked-by: Pasha Tatashin <pasha.tatashin@soleen.com>

I would also update the order in mm/memory.c:

static int validate_page_before_insert(struct page *page)
{
	if (PageAnon(page) || PageSlab(page) || page_has_type(page))

It is not strictly a bug there, as it works by accident, but
PageSlab() should go before PageAnon(), because without checking if
this is PageSlab() we should not be testing for PageAnon().

Thank you,
Pasha


* Re: [PATCH v2 3/4] mm: page_table_check: Make it dependent on EXCLUSIVE_SYSTEM_RAM
  2023-05-15 13:09 ` [PATCH v2 3/4] mm: page_table_check: Make it dependent on EXCLUSIVE_SYSTEM_RAM Ruihan Li
@ 2023-05-15 16:36   ` Pasha Tatashin
  2023-05-16 12:55   ` David Hildenbrand
  1 sibling, 0 replies; 16+ messages in thread
From: Pasha Tatashin @ 2023-05-15 16:36 UTC (permalink / raw)
  To: Ruihan Li
  Cc: linux-mm, linux-usb, linux-kernel, David Hildenbrand,
	Matthew Wilcox, Andrew Morton, Christoph Hellwig, Alan Stern,
	Greg Kroah-Hartman, stable

On Mon, May 15, 2023 at 9:10 AM Ruihan Li <lrh2000@pku.edu.cn> wrote:
>
> Without EXCLUSIVE_SYSTEM_RAM, users are allowed to map arbitrary
> physical memory regions into the userspace via /dev/mem. At the same
> time, pages may change their properties (e.g., from anonymous pages to
> named pages) while they are still being mapped in the userspace, leading
> to "corruption" detected by the page table check.
>
> To avoid these false positives, this patch makes PAGE_TABLE_CHECK
> depend on EXCLUSIVE_SYSTEM_RAM. This dependency is understandable
> because PAGE_TABLE_CHECK is a hardening technique but /dev/mem without
> STRICT_DEVMEM (i.e., !EXCLUSIVE_SYSTEM_RAM) is itself a security
> problem.
>
> Even with EXCLUSIVE_SYSTEM_RAM, I/O pages may still be allowed to be
> mapped via /dev/mem. However, these pages are always considered named
> pages, so they won't break the logic used in the page table check.
>
> Cc: <stable@vger.kernel.org> # 5.17
> Signed-off-by: Ruihan Li <lrh2000@pku.edu.cn>

Acked-by: Pasha Tatashin <pasha.tatashin@soleen.com>

Thank you,
Pasha


* Re: [PATCH v2 2/4] usb: usbfs: Use consistent mmap functions
  2023-05-15 16:07   ` David Laight
@ 2023-05-16 11:42     ` Ruihan Li
  0 siblings, 0 replies; 16+ messages in thread
From: Ruihan Li @ 2023-05-16 11:42 UTC (permalink / raw)
  To: David Laight
  Cc: linux-mm, linux-usb, linux-kernel, Pasha Tatashin,
	David Hildenbrand, Matthew Wilcox, Andrew Morton,
	Christoph Hellwig, Alan Stern, Greg Kroah-Hartman, stable,
	Ruihan Li

On Mon, May 15, 2023 at 04:07:01PM +0000, David Laight wrote:
> 
> From: Ruihan Li
> > Sent: 15 May 2023 14:10
> > 
> > When hcd->localmem_pool is non-null, localmem_pool is used to allocate
> > DMA memory. In this case, the dma address will be properly returned (in
> > dma_handle), and dma_mmap_coherent should be used to map this memory
> > into the user space. However, the current implementation uses
> > remap_pfn_range, which is meant for mapping normal pages.
> 
> I've an (out of tree) driver that does the same.
> Am I right in thinking that this does still work?

It still works most of the time, but it can break in certain cases.
I am going to quote commit 2bef9aed6f0e ("usb: usbfs: correct
kernel->user page attribute mismatch"), which introduces
dma_mmap_coherent in usbdev_mmap, and says [1]:

	On some architectures (e.g. arm64) requests for
	IO coherent memory may use non-cachable attributes if
	the relevant device isn't cache coherent. If these
	pages are then remapped into userspace as cacheable,
	they may not be coherent with the non-cacheable mappings.

 [1] https://lore.kernel.org/all/20200504201348.1183246-1-jeremy.linton@arm.com/

I think it means that if your driver deals with devices that aren't
cache-coherent on arm64, using remap_pfn_range directly may cause
problems. Otherwise, you may need to check the arch-specific dma mmap
operation and see if it performs additional things that remap_pfn_range
does not (for the arm example, arm_iommu_mmap_attrs updates the
vm_page_prot field to make the pages non-cacheable if the device is not
cache-coherent [2]).

 [2] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/arm/mm/dma-mapping.c?id=f1fcbaa18b28dec10281551dfe6ed3a3ed80e3d6#n1129

> 
> I can't change the driver to use dma_map_coherent() because it
> doesn't let me mmap from a page offset within a 16k allocation.
> 
> In this case the memory area is an 8MB shared transfer area to an
> FPGA PCIe target sparsely filled with 16kB allocation (max 512 allocs).
> The discontinuous physical memory blocks appear as logically
> contiguous to both the FPGA logic and when mapped to userspace.
> (But not to driver code.)
> 
> I don't really want to expose the 16k allocation size to userspace.
> If we need more than 8MB then the allocation size would need
> changing.
> 
> 	David
> 
> -
> Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
> Registration No: 1397386 (Wales)

Thanks,
Ruihan Li



* Re: [PATCH v2 4/4] mm: page_table_check: Ensure user pages are not slab pages
  2023-05-15 16:28   ` Pasha Tatashin
@ 2023-05-16 11:51     ` Ruihan Li
  2023-05-16 12:54       ` David Hildenbrand
  0 siblings, 1 reply; 16+ messages in thread
From: Ruihan Li @ 2023-05-16 11:51 UTC (permalink / raw)
  To: Pasha Tatashin
  Cc: linux-mm, linux-usb, linux-kernel, David Hildenbrand,
	Matthew Wilcox, Andrew Morton, Christoph Hellwig, Alan Stern,
	Greg Kroah-Hartman, syzbot+fcf1a817ceb50935ce99, stable,
	Ruihan Li

On Mon, May 15, 2023 at 12:28:54PM -0400, Pasha Tatashin wrote:
> 
> On Mon, May 15, 2023 at 9:10 AM Ruihan Li <lrh2000@pku.edu.cn> wrote:
> >
> > The current uses of PageAnon in page table check functions can lead to
> > type confusion bugs between struct page and slab [1], if slab pages are
> > accidentally mapped into the user space. This is because slab reuses the
> > bits in struct page to store its internal states, which renders PageAnon
> > ineffective on slab pages.
> >
> > Since slab pages are not expected to be mapped into the user space, this
> > patch adds BUG_ON(PageSlab(page)) checks to make sure that slab pages
> > are never inadvertently mapped. If such a mapping is ever detected, it
> > indicates a bug elsewhere in the kernel.
> >
> > Reported-by: syzbot+fcf1a817ceb50935ce99@syzkaller.appspotmail.com
> > Closes: https://lore.kernel.org/lkml/000000000000258e5e05fae79fc1@google.com/ [1]
> > Fixes: df4e817b7108 ("mm: page table check")
> > Cc: <stable@vger.kernel.org> # 5.17
> > Signed-off-by: Ruihan Li <lrh2000@pku.edu.cn>
> 
> Acked-by: Pasha Tatashin <pasha.tatashin@soleen.com>
> 
> I would also update order in mm/memory.c
> static int validate_page_before_insert(struct page *page)
> {
> if (PageAnon(page) || PageSlab(page) || page_has_type(page))
> 
> It is not strictly a bug there, as it works by accident, but
> PageSlab() should go before PageAnon(), because without checking if
> this is PageSlab() we should not be testing for PageAnon().

Right. Perhaps it would be better to send another patch for this
separately.

> 
> Thank you,
> Pasha

Thanks,
Ruihan Li



* Re: [PATCH v2 4/4] mm: page_table_check: Ensure user pages are not slab pages
  2023-05-16 11:51     ` Ruihan Li
@ 2023-05-16 12:54       ` David Hildenbrand
  2023-05-16 14:14         ` Pasha Tatashin
  2023-05-16 14:17         ` Ruihan Li
  0 siblings, 2 replies; 16+ messages in thread
From: David Hildenbrand @ 2023-05-16 12:54 UTC (permalink / raw)
  To: Ruihan Li, Pasha Tatashin
  Cc: linux-mm, linux-usb, linux-kernel, Matthew Wilcox, Andrew Morton,
	Christoph Hellwig, Alan Stern, Greg Kroah-Hartman,
	syzbot+fcf1a817ceb50935ce99, stable

On 16.05.23 13:51, Ruihan Li wrote:
> On Mon, May 15, 2023 at 12:28:54PM -0400, Pasha Tatashin wrote:
>>
>> On Mon, May 15, 2023 at 9:10 AM Ruihan Li <lrh2000@pku.edu.cn> wrote:
>>>
>>> The current uses of PageAnon in page table check functions can lead to
>>> type confusion bugs between struct page and slab [1], if slab pages are
>>> accidentally mapped into the user space. This is because slab reuses the
>>> bits in struct page to store its internal states, which renders PageAnon
>>> ineffective on slab pages.
>>>
>>> Since slab pages are not expected to be mapped into the user space, this
>>> patch adds BUG_ON(PageSlab(page)) checks to make sure that slab pages
>>> are not inadvertently mapped. Otherwise, there must be some bugs in the
>>> kernel.
>>>
>>> Reported-by: syzbot+fcf1a817ceb50935ce99@syzkaller.appspotmail.com
>>> Closes: https://lore.kernel.org/lkml/000000000000258e5e05fae79fc1@google.com/ [1]
>>> Fixes: df4e817b7108 ("mm: page table check")
>>> Cc: <stable@vger.kernel.org> # 5.17
>>> Signed-off-by: Ruihan Li <lrh2000@pku.edu.cn>
>>
>> Acked-by: Pasha Tatashin <pasha.tatashin@soleen.com>
>>
>> I would also update order in mm/memory.c
>> static int validate_page_before_insert(struct page *page)
>> {
>> if (PageAnon(page) || PageSlab(page) || page_has_type(page))
>>
>> It is not strictly a bug there, as it works by accident, but
>> PageSlab() should go before PageAnon(), because without checking if
>> this is PageSlab() we should not be testing for PageAnon().
> 
> Right. Perhaps it would be better to send another patch for this
> separately.

Probably not really worth it IMHO. With PageSlab() we might have 
PageAnon() false-positives. Either will take the same path here ...

On a related note, stable_page_flags() checks PageKsm()/PageAnon() 
without caring about PageSlab().

At least it's just a debugging interface and will indicate KPF_SLAB in 
any case as well ...

-- 
Thanks,

David / dhildenb



* Re: [PATCH v2 3/4] mm: page_table_check: Make it dependent on EXCLUSIVE_SYSTEM_RAM
  2023-05-15 13:09 ` [PATCH v2 3/4] mm: page_table_check: Make it dependent on EXCLUSIVE_SYSTEM_RAM Ruihan Li
  2023-05-15 16:36   ` Pasha Tatashin
@ 2023-05-16 12:55   ` David Hildenbrand
  1 sibling, 0 replies; 16+ messages in thread
From: David Hildenbrand @ 2023-05-16 12:55 UTC (permalink / raw)
  To: Ruihan Li, linux-mm, linux-usb
  Cc: linux-kernel, Pasha Tatashin, Matthew Wilcox, Andrew Morton,
	Christoph Hellwig, Alan Stern, Greg Kroah-Hartman, stable

On 15.05.23 15:09, Ruihan Li wrote:
> Without EXCLUSIVE_SYSTEM_RAM, users are allowed to map arbitrary
> physical memory regions into the userspace via /dev/mem. At the same
> time, pages may change their properties (e.g., from anonymous pages to
> named pages) while they are still being mapped in the userspace, leading
> to "corruption" detected by the page table check.
> 
> To avoid these false positives, this patch makes PAGE_TABLE_CHECK
> depend on EXCLUSIVE_SYSTEM_RAM. This dependency is understandable
> because PAGE_TABLE_CHECK is a hardening technique but /dev/mem without
> STRICT_DEVMEM (i.e., !EXCLUSIVE_SYSTEM_RAM) is itself a security
> problem.
> 
> Even with EXCLUSIVE_SYSTEM_RAM, I/O pages may still be allowed to be
> mapped via /dev/mem. However, these pages are always considered named
> pages, so they won't break the logic used in the page table check.
> 
> Cc: <stable@vger.kernel.org> # 5.17
> Signed-off-by: Ruihan Li <lrh2000@pku.edu.cn>
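
The dependency described above would amount to a one-line Kconfig
change along these lines (a sketch; the surrounding entry is
reproduced from memory and the exact location in mm/Kconfig.debug may
differ):

```kconfig
config PAGE_TABLE_CHECK
	bool "Check for invalid mappings in user page tables"
	depends on ARCH_SUPPORTS_PAGE_TABLE_CHECK
	depends on EXCLUSIVE_SYSTEM_RAM
	select PAGE_EXTENSION
```

Since EXCLUSIVE_SYSTEM_RAM is equivalent to !DEVMEM || STRICT_DEVMEM,
this rules out exactly the /dev/mem configurations that can produce the
false positives.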

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Thanks,

David / dhildenb



* Re: [PATCH v2 4/4] mm: page_table_check: Ensure user pages are not slab pages
  2023-05-16 12:54       ` David Hildenbrand
@ 2023-05-16 14:14         ` Pasha Tatashin
  2023-05-16 14:17         ` Ruihan Li
  1 sibling, 0 replies; 16+ messages in thread
From: Pasha Tatashin @ 2023-05-16 14:14 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Ruihan Li, linux-mm, linux-usb, linux-kernel, Matthew Wilcox,
	Andrew Morton, Christoph Hellwig, Alan Stern, Greg Kroah-Hartman,
	syzbot+fcf1a817ceb50935ce99, stable

> >> Acked-by: Pasha Tatashin <pasha.tatashin@soleen.com>
> >>
> >> I would also update order in mm/memory.c
> >> static int validate_page_before_insert(struct page *page)
> >> {
> >> if (PageAnon(page) || PageSlab(page) || page_has_type(page))
> >>
> >> It is not strictly a bug there, as it works by accident, but
> >> PageSlab() should go before PageAnon(), because without checking if
> >> this is PageSlab() we should not be testing for PageAnon().
> >
> > Right. Perhaps it would be better to send another patch for this
> > separately.

Yes, a separate patch from this series would work.

>
> Probably not really worth it IMHO. With PageSlab() we might have
> PageAnon() false-positives. Either will take the same path here ...

That is correct, it works by accident, but it is not a good idea to
keep broken logic, if only because it may be copied into other
places.


* Re: [PATCH v2 4/4] mm: page_table_check: Ensure user pages are not slab pages
  2023-05-16 12:54       ` David Hildenbrand
  2023-05-16 14:14         ` Pasha Tatashin
@ 2023-05-16 14:17         ` Ruihan Li
  1 sibling, 0 replies; 16+ messages in thread
From: Ruihan Li @ 2023-05-16 14:17 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Pasha Tatashin, linux-mm, linux-usb, linux-kernel,
	Matthew Wilcox, Andrew Morton, Christoph Hellwig, Alan Stern,
	Greg Kroah-Hartman, syzbot+fcf1a817ceb50935ce99, stable,
	Ruihan Li

On Tue, May 16, 2023 at 02:54:04PM +0200, David Hildenbrand wrote:
> 
> On 16.05.23 13:51, Ruihan Li wrote:
> > On Mon, May 15, 2023 at 12:28:54PM -0400, Pasha Tatashin wrote:
> > > 
> > > On Mon, May 15, 2023 at 9:10 AM Ruihan Li <lrh2000@pku.edu.cn> wrote:
> > > > 
> > > > The current uses of PageAnon in page table check functions can lead to
> > > > type confusion bugs between struct page and slab [1], if slab pages are
> > > > accidentally mapped into the user space. This is because slab reuses the
> > > > bits in struct page to store its internal states, which renders PageAnon
> > > > ineffective on slab pages.
> > > > 
> > > > Since slab pages are not expected to be mapped into the user space, this
> > > > patch adds BUG_ON(PageSlab(page)) checks to make sure that slab pages
> > > > are not inadvertently mapped. Otherwise, there must be some bugs in the
> > > > kernel.
> > > > 
> > > > Reported-by: syzbot+fcf1a817ceb50935ce99@syzkaller.appspotmail.com
> > > > Closes: https://lore.kernel.org/lkml/000000000000258e5e05fae79fc1@google.com/ [1]
> > > > Fixes: df4e817b7108 ("mm: page table check")
> > > > Cc: <stable@vger.kernel.org> # 5.17
> > > > Signed-off-by: Ruihan Li <lrh2000@pku.edu.cn>
> > > 
> > > Acked-by: Pasha Tatashin <pasha.tatashin@soleen.com>
> > > 
> > > I would also update order in mm/memory.c
> > > static int validate_page_before_insert(struct page *page)
> > > {
> > > if (PageAnon(page) || PageSlab(page) || page_has_type(page))
> > > 
> > > It is not strictly a bug there, as it works by accident, but
> > > PageSlab() should go before PageAnon(), because without checking if
> > > this is PageSlab() we should not be testing for PageAnon().
> > 
> > Right. Perhaps it would be better to send another patch for this
> > separately.
> 
> Probably not really worth it IMHO. With PageSlab() we might have PageAnon()
> false-positives. Either will take the same path here ...

Well, I'm not against that. If just fixing this one doesn't look
worthwhile, I'm not sure if anyone wishes to find and clean up all these
"misuses" altogether, though that's certainly a low-priority task if
nothing is actually broken.

> 
> On a related note, stable_page_flags() checks PageKsm()/PageAnon() without
> caring about PageSlab().
> 
> At least it's just a debugging interface and will indicate KPF_SLAB in any
> case as well ...

I just went through that function quickly, and found that PageHuge also
seems to be accessing non-existent fields (folio->_folio_dtor) on slab
pages. Again, nothing is really broken.

> 
> -- 
> Thanks,
> 
> David / dhildenb

Thanks,
Ruihan Li



* Re: [PATCH v2 1/4] usb: usbfs: Enforce page requirements for mmap
  2023-05-15 13:09 ` [PATCH v2 1/4] usb: usbfs: Enforce page requirements for mmap Ruihan Li
  2023-05-15 14:07   ` Alan Stern
@ 2023-05-17  6:22   ` Christoph Hellwig
  1 sibling, 0 replies; 16+ messages in thread
From: Christoph Hellwig @ 2023-05-17  6:22 UTC (permalink / raw)
  To: Ruihan Li
  Cc: linux-mm, linux-usb, linux-kernel, Pasha Tatashin,
	David Hildenbrand, Matthew Wilcox, Andrew Morton,
	Christoph Hellwig, Alan Stern, Greg Kroah-Hartman,
	syzbot+fcf1a817ceb50935ce99, stable

On Mon, May 15, 2023 at 09:09:55PM +0800, Ruihan Li wrote:
> To address these issues, this patch introduces hcd_alloc_coherent_pages,
> which addresses the above two problems. Specifically,
> hcd_alloc_coherent_pages uses gen_pool_dma_alloc_align instead of
> gen_pool_dma_alloc to ensure that the memory is page-aligned. To replace
> kmalloc, hcd_alloc_coherent_pages directly allocates pages by calling
> __get_free_pages.
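
The page-alignment contract the quoted paragraph describes can be
illustrated in userspace (a sketch only; hcd_alloc_coherent_pages
itself is kernel code, so posix_memalign() merely stands in for the
role gen_pool_dma_alloc_align() / __get_free_pages() play there):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>

/* Memory destined for mmap/remap_pfn_range must start on a page
 * boundary and cover whole pages; plain malloc() guarantees neither.
 * Round the request up to a page multiple and allocate page-aligned. */
static void *alloc_mappable(size_t size, size_t *out_size)
{
	size_t page = (size_t)sysconf(_SC_PAGESIZE);
	size_t rounded = (size + page - 1) & ~(page - 1);
	void *buf = NULL;

	if (posix_memalign(&buf, page, rounded) != 0)
		return NULL;
	*out_size = rounded;
	return buf;
}
```

A kmalloc-style allocator fails both requirements at once: the buffer
may share its page with unrelated (slab) data, which is precisely what
let slab pages reach the page table check in the syzbot report.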

This looks reasonable in that it fixes the bug.  But I really don't
like how it makes the mess of USB allocation APIs even messier :P

Not really your fault, but someone really needs to look into the USB
memory allocators and DMA mapping, which is tied to that and just as
bad.


end of thread, other threads:[~2023-05-17  6:23 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-05-15 13:09 [PATCH v2 0/4] Fix type confusion in page_table_check Ruihan Li
2023-05-15 13:09 ` [PATCH v2 1/4] usb: usbfs: Enforce page requirements for mmap Ruihan Li
2023-05-15 14:07   ` Alan Stern
2023-05-17  6:22   ` Christoph Hellwig
2023-05-15 13:09 ` [PATCH v2 2/4] usb: usbfs: Use consistent mmap functions Ruihan Li
2023-05-15 16:07   ` David Laight
2023-05-16 11:42     ` Ruihan Li
2023-05-15 13:09 ` [PATCH v2 3/4] mm: page_table_check: Make it dependent on EXCLUSIVE_SYSTEM_RAM Ruihan Li
2023-05-15 16:36   ` Pasha Tatashin
2023-05-16 12:55   ` David Hildenbrand
2023-05-15 13:09 ` [PATCH v2 4/4] mm: page_table_check: Ensure user pages are not slab pages Ruihan Li
2023-05-15 16:28   ` Pasha Tatashin
2023-05-16 11:51     ` Ruihan Li
2023-05-16 12:54       ` David Hildenbrand
2023-05-16 14:14         ` Pasha Tatashin
2023-05-16 14:17         ` Ruihan Li
