* [PATCH 0/3] mm/smaps: Fixes and optimizations on shmem swap handling
@ 2021-09-17 16:47 Peter Xu
  2021-09-17 16:47 ` [PATCH 1/3] mm/smaps: Fix shmem pte hole swap calculation Peter Xu
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Peter Xu @ 2021-09-17 16:47 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: Vlastimil Babka, Andrew Morton, Hugh Dickins, Andrea Arcangeli,
	peterx, Matthew Wilcox

This series grows from the patch previously posted here:

  [PATCH] mm/smaps: Use vma->vm_pgoff directly when counting partial swap
  https://lore.kernel.org/lkml/20210916215839.95177-1-peterx@redhat.com/

Vlastimil reported a bug that is even more important to fix than the cleanup,
so I put it as patch 1 here.  There's a test program we can use to verify the
bug before/after the patch.  I used the same program to test patches 2 and 3
as well, because it covers walking shmem swap both in the page cache and in
the page tables.

Patch 2 is the original patch, though with a tiny touchup as Vlastimil
suggested.

Patch 3 is a further cleanup of the shmem swap logic, hopefully making it even
cleaner.

Please review, thanks.

Peter Xu (3):
  mm/smaps: Fix shmem pte hole swap calculation
  mm/smaps: Use vma->vm_pgoff directly when counting partial swap
  mm/smaps: Simplify shmem handling of pte holes

 fs/proc/task_mmu.c | 28 ++++++++++++++++------------
 mm/shmem.c         |  5 ++---
 2 files changed, 18 insertions(+), 15 deletions(-)

-- 
2.31.1



* [PATCH 1/3] mm/smaps: Fix shmem pte hole swap calculation
  2021-09-17 16:47 [PATCH 0/3] mm/smaps: Fixes and optimizations on shmem swap handling Peter Xu
@ 2021-09-17 16:47 ` Peter Xu
  2021-09-22 10:40   ` Vlastimil Babka
  2021-09-17 16:47 ` [PATCH 2/3] mm/smaps: Use vma->vm_pgoff directly when counting partial swap Peter Xu
  2021-09-17 16:47 ` [PATCH 3/3] mm/smaps: Simplify shmem handling of pte holes Peter Xu
  2 siblings, 1 reply; 8+ messages in thread
From: Peter Xu @ 2021-09-17 16:47 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: Vlastimil Babka, Andrew Morton, Hugh Dickins, Andrea Arcangeli,
	peterx, Matthew Wilcox

The shmem swap calculation on privately writable mappings is using the wrong
parameters, as spotted by Vlastimil.  Fix them.  The bug was introduced in
commit 48131e03ca4e, when shmem_swap_usage() was reworked into
shmem_partial_swap_usage().
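
For context: shmem_partial_swap_usage() takes page cache indices (pgoff_t),
while smaps_pte_hole() is handed virtual addresses, so the addresses need to
be converted first.  Roughly (a sketch of the non-hugetlb case of
linear_page_index(), not the literal source):

	/* virtual address -> page cache index of the backing file */
	pgoff = ((address - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;

Passing the raw addr/end values instead points the lookup at an index range
far beyond the file's pages, so the pmd-sized holes end up contributing
(nearly) nothing to the reported swap.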

Test program:

==================

#define _GNU_SOURCE         /* memfd_create() and MADV_PAGEOUT need a recent glibc/kernel */
#include <stdio.h>
#include <unistd.h>
#include <assert.h>
#include <sys/mman.h>

#define SIZE_2M  (2UL << 20)

int main(void)
{
    char *buffer, *p;
    int i, fd;

    fd = memfd_create("test", 0);
    assert(fd > 0);

    /* isize==2M*3, fill in pages, swap them out */
    ftruncate(fd, SIZE_2M * 3);
    buffer = mmap(NULL, SIZE_2M * 3, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    assert(buffer != MAP_FAILED);
    for (i = 0, p = buffer; i < SIZE_2M * 3 / 4096; i++) {
        *p = 1;
        p += 4096;
    }
    madvise(buffer, SIZE_2M * 3, MADV_PAGEOUT);
    munmap(buffer, SIZE_2M * 3);

    /*
     * Remap with a private+writable mapping over part of the inode (<= 2M*3),
     * while the size must also be >= 2M*2 to make sure there's a none pmd so
     * smaps_pte_hole will be triggered.
     */
    buffer = mmap(NULL, SIZE_2M * 2, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
    assert(buffer != MAP_FAILED);
    printf("pid=%d, buffer=%p\n", getpid(), buffer);

    /* Check /proc/$PID/smaps_rollup, should see 4MB of swap */
    sleep(1000000);
    return 0;
}
==================

Before the patch, smaps_rollup shows <4MB of swap, and the exact number is
random, depending on the alignment of the buffer returned by mmap().  After
this patch, it shows 4MB.

Fixes: 48131e03ca4e ("mm, proc: reduce cost of /proc/pid/smaps for unpopulated shmem mappings")
Reported-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 fs/proc/task_mmu.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index cf25be3e0321..2197f669e17b 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -478,9 +478,11 @@ static int smaps_pte_hole(unsigned long addr, unsigned long end,
 			  __always_unused int depth, struct mm_walk *walk)
 {
 	struct mem_size_stats *mss = walk->private;
+	struct vm_area_struct *vma = walk->vma;
 
-	mss->swap += shmem_partial_swap_usage(
-			walk->vma->vm_file->f_mapping, addr, end);
+	mss->swap += shmem_partial_swap_usage(walk->vma->vm_file->f_mapping,
+					      linear_page_index(vma, addr),
+					      linear_page_index(vma, end));
 
 	return 0;
 }
-- 
2.31.1



* [PATCH 2/3] mm/smaps: Use vma->vm_pgoff directly when counting partial swap
  2021-09-17 16:47 [PATCH 0/3] mm/smaps: Fixes and optimizations on shmem swap handling Peter Xu
  2021-09-17 16:47 ` [PATCH 1/3] mm/smaps: Fix shmem pte hole swap calculation Peter Xu
@ 2021-09-17 16:47 ` Peter Xu
  2021-09-22 10:41   ` Vlastimil Babka
  2021-09-17 16:47 ` [PATCH 3/3] mm/smaps: Simplify shmem handling of pte holes Peter Xu
  2 siblings, 1 reply; 8+ messages in thread
From: Peter Xu @ 2021-09-17 16:47 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: Vlastimil Babka, Andrew Morton, Hugh Dickins, Andrea Arcangeli,
	peterx, Matthew Wilcox

As it's covering the whole vma anyway, use the vm_pgoff value directly
together with vma_pages(), rather than converting vm_start/vm_end through
linear_page_index().
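
For the non-hugetlb case (which shmem always is) the two forms should be
equivalent; a quick sketch assuming the usual definitions of
linear_page_index() and vma_pages():

	/*
	 * linear_page_index(vma, vma->vm_start)
	 *   == ((vma->vm_start - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff
	 *   == vma->vm_pgoff
	 *
	 * linear_page_index(vma, vma->vm_end)
	 *   == ((vma->vm_end - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff
	 *   == vma->vm_pgoff + vma_pages(vma)
	 */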

Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 mm/shmem.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 96ccf6e941aa..d2620db8c938 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -856,9 +856,8 @@ unsigned long shmem_swap_usage(struct vm_area_struct *vma)
 		return swapped << PAGE_SHIFT;
 
 	/* Here comes the more involved part */
-	return shmem_partial_swap_usage(mapping,
-			linear_page_index(vma, vma->vm_start),
-			linear_page_index(vma, vma->vm_end));
+	return shmem_partial_swap_usage(mapping, vma->vm_pgoff,
+					vma->vm_pgoff + vma_pages(vma));
 }
 
 /*
-- 
2.31.1



* [PATCH 3/3] mm/smaps: Simplify shmem handling of pte holes
  2021-09-17 16:47 [PATCH 0/3] mm/smaps: Fixes and optimizations on shmem swap handling Peter Xu
  2021-09-17 16:47 ` [PATCH 1/3] mm/smaps: Fix shmem pte hole swap calculation Peter Xu
  2021-09-17 16:47 ` [PATCH 2/3] mm/smaps: Use vma->vm_pgoff directly when counting partial swap Peter Xu
@ 2021-09-17 16:47 ` Peter Xu
  2021-10-05 11:15   ` Vlastimil Babka
  2 siblings, 1 reply; 8+ messages in thread
From: Peter Xu @ 2021-09-17 16:47 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: Vlastimil Babka, Andrew Morton, Hugh Dickins, Andrea Arcangeli,
	peterx, Matthew Wilcox

Firstly, the check_shmem_swap variable is not actually necessary, because it
is always set together with the pte_hole hook; checking for the hook instead
works just as well.

Meanwhile, the check within smaps_pte_entry() is not easy to follow.  E.g.,
the pte_none() check is not needed, since "!pte_present && !is_swap_pte"
already implies it.  While at it, use the pte_hole() helper rather than
duplicating the page cache lookup.
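
A quick sketch of why the pte_none() check is redundant, assuming the usual
definition of is_swap_pte():

	/*
	 * is_swap_pte(pte) := !pte_none(pte) && !pte_present(pte)
	 *
	 * so the final "else" branch is reached when
	 *
	 *   !pte_present(*pte) && !is_swap_pte(*pte)
	 *   == !pte_present && (pte_none || pte_present)
	 *   == pte_none
	 *
	 * i.e. the pte can only be none there.
	 */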

Still keep the CONFIG_SHMEM part so the code can be optimized to a nop for
!SHMEM.

There will be a very slight functional change in smaps_pte_entry(): for !SHMEM
we'll now return early on a none pte (before checking page==NULL), but that's
even nicer.

Cc: Hugh Dickins <hughd@google.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 fs/proc/task_mmu.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 2197f669e17b..ad667dbc96f5 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -397,7 +397,6 @@ struct mem_size_stats {
 	u64 pss_shmem;
 	u64 pss_locked;
 	u64 swap_pss;
-	bool check_shmem_swap;
 };
 
 static void smaps_page_accumulate(struct mem_size_stats *mss,
@@ -490,6 +489,16 @@ static int smaps_pte_hole(unsigned long addr, unsigned long end,
 #define smaps_pte_hole		NULL
 #endif /* CONFIG_SHMEM */
 
+static void smaps_pte_hole_lookup(unsigned long addr, struct mm_walk *walk)
+{
+#ifdef CONFIG_SHMEM
+	if (walk->ops->pte_hole) {
+		/* depth is not used */
+		smaps_pte_hole(addr, addr + PAGE_SIZE, 0, walk);
+	}
+#endif
+}
+
 static void smaps_pte_entry(pte_t *pte, unsigned long addr,
 		struct mm_walk *walk)
 {
@@ -518,12 +527,8 @@ static void smaps_pte_entry(pte_t *pte, unsigned long addr,
 			}
 		} else if (is_pfn_swap_entry(swpent))
 			page = pfn_swap_entry_to_page(swpent);
-	} else if (unlikely(IS_ENABLED(CONFIG_SHMEM) && mss->check_shmem_swap
-							&& pte_none(*pte))) {
-		page = xa_load(&vma->vm_file->f_mapping->i_pages,
-						linear_page_index(vma, addr));
-		if (xa_is_value(page))
-			mss->swap += PAGE_SIZE;
+	} else {
+		smaps_pte_hole_lookup(addr, walk);
 		return;
 	}
 
@@ -737,8 +742,6 @@ static void smap_gather_stats(struct vm_area_struct *vma,
 		return;
 
 #ifdef CONFIG_SHMEM
-	/* In case of smaps_rollup, reset the value from previous vma */
-	mss->check_shmem_swap = false;
 	if (vma->vm_file && shmem_mapping(vma->vm_file->f_mapping)) {
 		/*
 		 * For shared or readonly shmem mappings we know that all
@@ -756,7 +759,6 @@ static void smap_gather_stats(struct vm_area_struct *vma,
 					!(vma->vm_flags & VM_WRITE))) {
 			mss->swap += shmem_swapped;
 		} else {
-			mss->check_shmem_swap = true;
 			ops = &smaps_shmem_walk_ops;
 		}
 	}
-- 
2.31.1



* Re: [PATCH 1/3] mm/smaps: Fix shmem pte hole swap calculation
  2021-09-17 16:47 ` [PATCH 1/3] mm/smaps: Fix shmem pte hole swap calculation Peter Xu
@ 2021-09-22 10:40   ` Vlastimil Babka
  0 siblings, 0 replies; 8+ messages in thread
From: Vlastimil Babka @ 2021-09-22 10:40 UTC (permalink / raw)
  To: Peter Xu, linux-mm, linux-kernel
  Cc: Andrew Morton, Hugh Dickins, Andrea Arcangeli, Matthew Wilcox

On 9/17/21 18:47, Peter Xu wrote:
> The shmem swap calculation on the privately writable mappings are using wrong
> parameters as spotted by Vlastimil.  Fix them.  That's introduced in commit
> 48131e03ca4e, when rework shmem_swap_usage to shmem_partial_swap_usage.
> 
> Test program:
> 
> ==================
> 
> void main(void)
> {
>     char *buffer, *p;
>     int i, fd;
> 
>     fd = memfd_create("test", 0);
>     assert(fd > 0);
> 
>     /* isize==2M*3, fill in pages, swap them out */
>     ftruncate(fd, SIZE_2M * 3);
>     buffer = mmap(NULL, SIZE_2M * 3, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
>     assert(buffer);
>     for (i = 0, p = buffer; i < SIZE_2M * 3 / 4096; i++) {
>         *p = 1;
>         p += 4096;
>     }
>     madvise(buffer, SIZE_2M * 3, MADV_PAGEOUT);
>     munmap(buffer, SIZE_2M * 3);
> 
>     /*
>      * Remap with private+writtable mappings on partial of the inode (<= 2M*3),
>      * while the size must also be >= 2M*2 to make sure there's a none pmd so
>      * smaps_pte_hole will be triggered.
>      */
>     buffer = mmap(NULL, SIZE_2M * 2, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
>     printf("pid=%d, buffer=%p\n", getpid(), buffer);
> 
>     /* Check /proc/$PID/smap_rollup, should see 4MB swap */
>     sleep(1000000);
> }
> ==================
> 
> Before the patch, smaps_rollup shows <4MB swap and the number will be random
> depending on the alignment of the buffer of mmap() allocated.  After this
> patch, it'll show 4MB.
> 
> Fixes: 48131e03ca4e ("mm, proc: reduce cost of /proc/pid/smaps for unpopulated shmem mappings")

Thanks, too bad I didn't spot it when sending that patch :)

> Reported-by: Vlastimil Babka <vbabka@suse.cz>
> Signed-off-by: Peter Xu <peterx@redhat.com>

Reviewed-by: Vlastimil Babka <vbabka@suse.cz>

> ---
>  fs/proc/task_mmu.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index cf25be3e0321..2197f669e17b 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -478,9 +478,11 @@ static int smaps_pte_hole(unsigned long addr, unsigned long end,
>  			  __always_unused int depth, struct mm_walk *walk)
>  {
>  	struct mem_size_stats *mss = walk->private;
> +	struct vm_area_struct *vma = walk->vma;
>  
> -	mss->swap += shmem_partial_swap_usage(
> -			walk->vma->vm_file->f_mapping, addr, end);
> +	mss->swap += shmem_partial_swap_usage(walk->vma->vm_file->f_mapping,
> +					      linear_page_index(vma, addr),
> +					      linear_page_index(vma, end));
>  
>  	return 0;
>  }
> 



* Re: [PATCH 2/3] mm/smaps: Use vma->vm_pgoff directly when counting partial swap
  2021-09-17 16:47 ` [PATCH 2/3] mm/smaps: Use vma->vm_pgoff directly when counting partial swap Peter Xu
@ 2021-09-22 10:41   ` Vlastimil Babka
  0 siblings, 0 replies; 8+ messages in thread
From: Vlastimil Babka @ 2021-09-22 10:41 UTC (permalink / raw)
  To: Peter Xu, linux-mm, linux-kernel
  Cc: Andrew Morton, Hugh Dickins, Andrea Arcangeli, Matthew Wilcox

On 9/17/21 18:47, Peter Xu wrote:
> As it's trying to cover the whole vma anyways, use direct vm_pgoff value and
> vma_pages() rather than linear_page_index.
> 
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Hugh Dickins <hughd@google.com>
> Signed-off-by: Peter Xu <peterx@redhat.com>

Reviewed-by: Vlastimil Babka <vbabka@suse.cz>

> ---
>  mm/shmem.c | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 96ccf6e941aa..d2620db8c938 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -856,9 +856,8 @@ unsigned long shmem_swap_usage(struct vm_area_struct *vma)
>  		return swapped << PAGE_SHIFT;
>  
>  	/* Here comes the more involved part */
> -	return shmem_partial_swap_usage(mapping,
> -			linear_page_index(vma, vma->vm_start),
> -			linear_page_index(vma, vma->vm_end));
> +	return shmem_partial_swap_usage(mapping, vma->vm_pgoff,
> +					vma->vm_pgoff + vma_pages(vma));
>  }
>  
>  /*
> 



* Re: [PATCH 3/3] mm/smaps: Simplify shmem handling of pte holes
  2021-09-17 16:47 ` [PATCH 3/3] mm/smaps: Simplify shmem handling of pte holes Peter Xu
@ 2021-10-05 11:15   ` Vlastimil Babka
  2021-10-05 14:40     ` Peter Xu
  0 siblings, 1 reply; 8+ messages in thread
From: Vlastimil Babka @ 2021-10-05 11:15 UTC (permalink / raw)
  To: Peter Xu, linux-mm, linux-kernel
  Cc: Andrew Morton, Hugh Dickins, Andrea Arcangeli, Matthew Wilcox

On 9/17/21 18:47, Peter Xu wrote:
> Firstly, check_shmem_swap variable is actually not necessary, because it's
> always set with pte_hole hook; checking each would work.

Right...

> Meanwhile, the check within smaps_pte_entry is not easy to follow.  E.g.,
> pte_none() check is not needed as "!pte_present && !is_swap_pte" is the same.

Seems to be true, indeed.

> Since at it, use the pte_hole() helper rather than dup the page cache lookup.

pte_hole() is for checking a range and we are calling it for a single page;
isn't that causing larger overhead in the end? There's an xarray involved, so
maybe Matthew will know best.

> Still keep the CONFIG_SHMEM part so the code can be optimized to nop for !SHMEM.
> 
> There will be a very slight functional change in smaps_pte_entry(), that for
> !SHMEM we'll return early for pte_none (before checking page==NULL), but that's
> even nicer.

I don't think this is true: 'unlikely(IS_ENABLED(CONFIG_SHMEM))' will be a
compile-time constant false and short-circuit the rest of the 'if' evaluation,
thus there will be no page check? Or did I misunderstand?

> Cc: Hugh Dickins <hughd@google.com>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  fs/proc/task_mmu.c | 22 ++++++++++++----------
>  1 file changed, 12 insertions(+), 10 deletions(-)
> 
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index 2197f669e17b..ad667dbc96f5 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -397,7 +397,6 @@ struct mem_size_stats {
>  	u64 pss_shmem;
>  	u64 pss_locked;
>  	u64 swap_pss;
> -	bool check_shmem_swap;
>  };
>  
>  static void smaps_page_accumulate(struct mem_size_stats *mss,
> @@ -490,6 +489,16 @@ static int smaps_pte_hole(unsigned long addr, unsigned long end,
>  #define smaps_pte_hole		NULL
>  #endif /* CONFIG_SHMEM */
>  
> +static void smaps_pte_hole_lookup(unsigned long addr, struct mm_walk *walk)
> +{
> +#ifdef CONFIG_SHMEM
> +	if (walk->ops->pte_hole) {
> +		/* depth is not used */
> +		smaps_pte_hole(addr, addr + PAGE_SIZE, 0, walk);
> +	}
> +#endif
> +}
> +
>  static void smaps_pte_entry(pte_t *pte, unsigned long addr,
>  		struct mm_walk *walk)
>  {
> @@ -518,12 +527,8 @@ static void smaps_pte_entry(pte_t *pte, unsigned long addr,
>  			}
>  		} else if (is_pfn_swap_entry(swpent))
>  			page = pfn_swap_entry_to_page(swpent);
> -	} else if (unlikely(IS_ENABLED(CONFIG_SHMEM) && mss->check_shmem_swap
> -							&& pte_none(*pte))) {
> -		page = xa_load(&vma->vm_file->f_mapping->i_pages,
> -						linear_page_index(vma, addr));
> -		if (xa_is_value(page))
> -			mss->swap += PAGE_SIZE;
> +	} else {
> +		smaps_pte_hole_lookup(addr, walk);
>  		return;
>  	}
>  
> @@ -737,8 +742,6 @@ static void smap_gather_stats(struct vm_area_struct *vma,
>  		return;
>  
>  #ifdef CONFIG_SHMEM
> -	/* In case of smaps_rollup, reset the value from previous vma */
> -	mss->check_shmem_swap = false;
>  	if (vma->vm_file && shmem_mapping(vma->vm_file->f_mapping)) {
>  		/*
>  		 * For shared or readonly shmem mappings we know that all
> @@ -756,7 +759,6 @@ static void smap_gather_stats(struct vm_area_struct *vma,
>  					!(vma->vm_flags & VM_WRITE))) {
>  			mss->swap += shmem_swapped;
>  		} else {
> -			mss->check_shmem_swap = true;
>  			ops = &smaps_shmem_walk_ops;
>  		}
>  	}
> 



* Re: [PATCH 3/3] mm/smaps: Simplify shmem handling of pte holes
  2021-10-05 11:15   ` Vlastimil Babka
@ 2021-10-05 14:40     ` Peter Xu
  0 siblings, 0 replies; 8+ messages in thread
From: Peter Xu @ 2021-10-05 14:40 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: linux-mm, linux-kernel, Andrew Morton, Hugh Dickins,
	Andrea Arcangeli, Matthew Wilcox

Hi, Vlastimil,

On Tue, Oct 05, 2021 at 01:15:05PM +0200, Vlastimil Babka wrote:
> > Since at it, use the pte_hole() helper rather than dup the page cache lookup.
> 
> pte_hole() is for checking a range and we are calling it for single page,
> isnt't that causing larger overhead in the end? There's xarray involved, so
> maybe Matthew will know best.

Per my understanding, pte_hole() ends up calling xas_load() too, just like the
old code; it's just that the xas_for_each() loop in shmem_partial_swap_usage()
will only run for a single iteration, iiuc.
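
Roughly, the single-page path with this patch applied should boil down to (a
sketch based on the hunks in the patch, not the exact kernel source):

	/* for a none pte at addr in a shmem-backed, private+writable vma: */
	smaps_pte_hole_lookup(addr, walk);
	/*
	 * -> smaps_pte_hole(addr, addr + PAGE_SIZE, 0, walk)
	 * -> shmem_partial_swap_usage(mapping,
	 *                             linear_page_index(vma, addr),
	 *                             linear_page_index(vma, addr) + 1)
	 *
	 * i.e. a one-entry index range, so the xarray walk visits at most one
	 * slot -- comparable to the old single xa_load().
	 */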

> 
> > Still keep the CONFIG_SHMEM part so the code can be optimized to nop for !SHMEM.
> > 
> > There will be a very slight functional change in smaps_pte_entry(), that for
> > !SHMEM we'll return early for pte_none (before checking page==NULL), but that's
> > even nicer.
> 
> I don't think this is true, 'unlikely(IS_ENABLED(CONFIG_SHMEM))' will be a
> compile-time constant false and shortcut the rest of the 'if' evaluation
> thus there will be no page check? Or I misunderstood.

The page check I was referring to is this one in smaps_pte_entry():

	if (!page)
		return;

After the change, with !SHMEM the "else" block is still there (unlike in the
old code where, as you mentioned, it is optimized away), but
smaps_pte_hole_lookup() becomes a nop, so that "else" is just a direct
"return".  We therefore return a bit earlier, without checking "!page"
(because for a none pte the page must be NULL anyway).
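
In other words, a sketch of the !SHMEM control flow after the patch (derived
from the diff, not the literal compiled code):

	/*
	 *   pte_present -> handled as before
	 *   is_swap_pte -> handled as before
	 *   else (pte_none):
	 *       smaps_pte_hole_lookup() is a nop, so we return right away and
	 *       never reach the later "if (!page) return;", which the old
	 *       code still hit with page == NULL.
	 */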

Thanks,

-- 
Peter Xu



