linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/5] Clean up and fixes for swap
@ 2022-12-08 18:02 Kairui Song
  2022-12-08 18:02 ` [PATCH 1/5] swapfile: get rid of volatile and avoid redundant read Kairui Song
                   ` (4 more replies)
  0 siblings, 5 replies; 17+ messages in thread
From: Kairui Song @ 2022-12-08 18:02 UTC (permalink / raw)
  To: linux-mm
  Cc: linux-kernel, Andrew Morton, Miaohe Lin, David Hildenbrand,
	Huang, Ying, Hugh Dickins, Kairui Song

From: Kairui Song <kasong@tencent.com>

This series cleans up some code paths, saves a few cycles, and reduces
the object size a bit. It also fixes some rare race issues with
statistics.

Kairui Song (5):
  swapfile: get rid of volatile and avoid redundant read
  swap: avoid a redundant pte map if ra window is 1
  swap: fold swap_ra_clamp_pfn into swap_ra_info
  swap: remove the swap lock in swap_cache_get_folio
  swap: avoid ra statistic lost when swapin races

 mm/shmem.c      |  8 +++++-
 mm/swap_state.c | 66 +++++++++++++++++++------------------------------
 mm/swapfile.c   |  7 +++---
 3 files changed, 36 insertions(+), 45 deletions(-)

-- 
2.35.2


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH 1/5] swapfile: get rid of volatile and avoid redundant read
  2022-12-08 18:02 [PATCH 0/5] Clean up and fixes for swap Kairui Song
@ 2022-12-08 18:02 ` Kairui Song
  2022-12-09  2:48   ` Huang, Ying
  2022-12-08 18:02 ` [PATCH 2/5] swap: avoid a redundant pte map if ra window is 1 Kairui Song
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 17+ messages in thread
From: Kairui Song @ 2022-12-08 18:02 UTC (permalink / raw)
  To: linux-mm
  Cc: linux-kernel, Andrew Morton, Miaohe Lin, David Hildenbrand,
	Huang, Ying, Hugh Dickins, Kairui Song

From: Kairui Song <kasong@tencent.com>

Convert a volatile variable to the more readable READ_ONCE. This also
avoids redundantly reading the variable twice when it races.

Signed-off-by: Kairui Song <kasong@tencent.com>
---
 mm/swapfile.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 72e481aacd5d..ff4f3cb85232 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1836,13 +1836,13 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 	pte_t *pte;
 	struct swap_info_struct *si;
 	int ret = 0;
-	volatile unsigned char *swap_map;
 
 	si = swap_info[type];
 	pte = pte_offset_map(pmd, addr);
 	do {
 		struct folio *folio;
 		unsigned long offset;
+		unsigned char swp_count;
 
 		if (!is_swap_pte(*pte))
 			continue;
@@ -1853,7 +1853,6 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 
 		offset = swp_offset(entry);
 		pte_unmap(pte);
-		swap_map = &si->swap_map[offset];
 		folio = swap_cache_get_folio(entry, vma, addr);
 		if (!folio) {
 			struct page *page;
@@ -1870,8 +1869,10 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 				folio = page_folio(page);
 		}
 		if (!folio) {
-			if (*swap_map == 0 || *swap_map == SWAP_MAP_BAD)
+			swp_count = READ_ONCE(si->swap_map[offset]);
+			if (swp_count == 0 || swp_count == SWAP_MAP_BAD)
 				goto try_next;
+
 			return -ENOMEM;
 		}
 
-- 
2.35.2


* [PATCH 2/5] swap: avoid a redundant pte map if ra window is 1
  2022-12-08 18:02 [PATCH 0/5] Clean up and fixes for swap Kairui Song
  2022-12-08 18:02 ` [PATCH 1/5] swapfile: get rid of volatile and avoid redundant read Kairui Song
@ 2022-12-08 18:02 ` Kairui Song
  2022-12-09  3:15   ` Huang, Ying
  2022-12-08 18:02 ` [PATCH 3/5] swap: fold swap_ra_clamp_pfn into swap_ra_info Kairui Song
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 17+ messages in thread
From: Kairui Song @ 2022-12-08 18:02 UTC (permalink / raw)
  To: linux-mm
  Cc: linux-kernel, Andrew Morton, Miaohe Lin, David Hildenbrand,
	Huang, Ying, Hugh Dickins, Kairui Song

From: Kairui Song <kasong@tencent.com>

Avoid a redundant pte map/unmap when swap readahead window is 1.

Signed-off-by: Kairui Song <kasong@tencent.com>
---
 mm/swap_state.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index 438d0676c5be..60136bda78e3 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -730,8 +730,6 @@ static void swap_ra_info(struct vm_fault *vmf,
 	}
 
 	faddr = vmf->address;
-	orig_pte = pte = pte_offset_map(vmf->pmd, faddr);
-
 	fpfn = PFN_DOWN(faddr);
 	ra_val = GET_SWAP_RA_VAL(vma);
 	pfn = PFN_DOWN(SWAP_RA_ADDR(ra_val));
@@ -742,12 +740,11 @@ static void swap_ra_info(struct vm_fault *vmf,
 	atomic_long_set(&vma->swap_readahead_info,
 			SWAP_RA_VAL(faddr, win, 0));
 
-	if (win == 1) {
-		pte_unmap(orig_pte);
+	if (win == 1)
 		return;
-	}
 
 	/* Copy the PTEs because the page table may be unmapped */
+	orig_pte = pte = pte_offset_map(vmf->pmd, faddr);
 	if (fpfn == pfn + 1)
 		swap_ra_clamp_pfn(vma, faddr, fpfn, fpfn + win, &start, &end);
 	else if (pfn == fpfn + 1)
-- 
2.35.2


* [PATCH 3/5] swap: fold swap_ra_clamp_pfn into swap_ra_info
  2022-12-08 18:02 [PATCH 0/5] Clean up and fixes for swap Kairui Song
  2022-12-08 18:02 ` [PATCH 1/5] swapfile: get rid of volatile and avoid redundant read Kairui Song
  2022-12-08 18:02 ` [PATCH 2/5] swap: avoid a redundant pte map if ra window is 1 Kairui Song
@ 2022-12-08 18:02 ` Kairui Song
  2022-12-08 19:08   ` Matthew Wilcox
  2022-12-09  3:23   ` Huang, Ying
  2022-12-08 18:02 ` [PATCH 4/5] swap: remove the swap lock in swap_cache_get_folio Kairui Song
  2022-12-08 18:02 ` [PATCH 5/5] swap: avoid ra statistic lost when swapin races Kairui Song
  4 siblings, 2 replies; 17+ messages in thread
From: Kairui Song @ 2022-12-08 18:02 UTC (permalink / raw)
  To: linux-mm
  Cc: linux-kernel, Andrew Morton, Miaohe Lin, David Hildenbrand,
	Huang, Ying, Hugh Dickins, Kairui Song

From: Kairui Song <kasong@tencent.com>

This makes the code cleaner. The helper consists of only two lines of
self-explanatory code and is not reused anywhere else.

And this actually makes the compiled object a bit smaller:

          text    data     bss     dec     hex filename
Before:   9502     976      12   10490    28fa mm/swap_state.o
After:    9470     976      12   10458    28da mm/swap_state.o

Signed-off-by: Kairui Song <kasong@tencent.com>
---
 mm/swap_state.c | 44 +++++++++++++++++++-------------------------
 1 file changed, 19 insertions(+), 25 deletions(-)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index 60136bda78e3..19089417abd1 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -696,28 +696,15 @@ void exit_swap_address_space(unsigned int type)
 	swapper_spaces[type] = NULL;
 }
 
-static inline void swap_ra_clamp_pfn(struct vm_area_struct *vma,
-				     unsigned long faddr,
-				     unsigned long lpfn,
-				     unsigned long rpfn,
-				     unsigned long *start,
-				     unsigned long *end)
-{
-	*start = max3(lpfn, PFN_DOWN(vma->vm_start),
-		      PFN_DOWN(faddr & PMD_MASK));
-	*end = min3(rpfn, PFN_DOWN(vma->vm_end),
-		    PFN_DOWN((faddr & PMD_MASK) + PMD_SIZE));
-}
-
 static void swap_ra_info(struct vm_fault *vmf,
-			struct vma_swap_readahead *ra_info)
+			 struct vma_swap_readahead *ra_info)
 {
 	struct vm_area_struct *vma = vmf->vma;
 	unsigned long ra_val;
-	unsigned long faddr, pfn, fpfn;
+	unsigned long faddr, pfn, fpfn, lpfn, rpfn;
 	unsigned long start, end;
 	pte_t *pte, *orig_pte;
-	unsigned int max_win, hits, prev_win, win, left;
+	unsigned int max_win, hits, prev_win, win;
 #ifndef CONFIG_64BIT
 	pte_t *tpte;
 #endif
@@ -745,16 +732,23 @@ static void swap_ra_info(struct vm_fault *vmf,
 
 	/* Copy the PTEs because the page table may be unmapped */
 	orig_pte = pte = pte_offset_map(vmf->pmd, faddr);
-	if (fpfn == pfn + 1)
-		swap_ra_clamp_pfn(vma, faddr, fpfn, fpfn + win, &start, &end);
-	else if (pfn == fpfn + 1)
-		swap_ra_clamp_pfn(vma, faddr, fpfn - win + 1, fpfn + 1,
-				  &start, &end);
-	else {
-		left = (win - 1) / 2;
-		swap_ra_clamp_pfn(vma, faddr, fpfn - left, fpfn + win - left,
-				  &start, &end);
+	if (fpfn == pfn + 1) {
+		lpfn = fpfn;
+		rpfn = fpfn + win;
+	} else if (pfn == fpfn + 1) {
+		lpfn = fpfn - win + 1;
+		rpfn = fpfn + 1;
+	} else {
+		unsigned int left = (win - 1) / 2;
+
+		lpfn = fpfn - left;
+		rpfn = fpfn + win - left;
 	}
+	start = max3(lpfn, PFN_DOWN(vma->vm_start),
+		     PFN_DOWN(faddr & PMD_MASK));
+	end = min3(rpfn, PFN_DOWN(vma->vm_end),
+		   PFN_DOWN((faddr & PMD_MASK) + PMD_SIZE));
+
 	ra_info->nr_pte = end - start;
 	ra_info->offset = fpfn - start;
 	pte -= ra_info->offset;
-- 
2.35.2


* [PATCH 4/5] swap: remove the swap lock in swap_cache_get_folio
  2022-12-08 18:02 [PATCH 0/5] Clean up and fixes for swap Kairui Song
                   ` (2 preceding siblings ...)
  2022-12-08 18:02 ` [PATCH 3/5] swap: fold swap_ra_clamp_pfn into swap_ra_info Kairui Song
@ 2022-12-08 18:02 ` Kairui Song
  2022-12-11 11:39   ` Huang, Ying
  2022-12-08 18:02 ` [PATCH 5/5] swap: avoid ra statistic lost when swapin races Kairui Song
  4 siblings, 1 reply; 17+ messages in thread
From: Kairui Song @ 2022-12-08 18:02 UTC (permalink / raw)
  To: linux-mm
  Cc: linux-kernel, Andrew Morton, Miaohe Lin, David Hildenbrand,
	Huang, Ying, Hugh Dickins, Kairui Song

From: Kairui Song <kasong@tencent.com>

There is only one caller that does not keep holding a reference or a
lock on the swap device while calling this function. Just move the
locking out of this function; it is only used to prevent swapoff, and
this helper function is very short, so there is no performance
regression. This helps save a few cycles.

Signed-off-by: Kairui Song <kasong@tencent.com>
---
 mm/shmem.c      | 8 +++++++-
 mm/swap_state.c | 8 ++------
 2 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index c1d8b8a1aa3b..0183b6678270 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1725,6 +1725,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	struct address_space *mapping = inode->i_mapping;
 	struct shmem_inode_info *info = SHMEM_I(inode);
 	struct mm_struct *charge_mm = vma ? vma->vm_mm : NULL;
+	struct swap_info_struct *si;
 	struct folio *folio = NULL;
 	swp_entry_t swap;
 	int error;
@@ -1737,7 +1738,12 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 		return -EIO;
 
 	/* Look it up and read it in.. */
-	folio = swap_cache_get_folio(swap, NULL, 0);
+	si = get_swap_device(swap);
+	if (si) {
+		folio = swap_cache_get_folio(swap, NULL, 0);
+		put_swap_device(si);
+	}
+
 	if (!folio) {
 		/* Or update major stats only when swapin succeeds?? */
 		if (fault_type) {
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 19089417abd1..eba388f67741 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -324,19 +324,15 @@ static inline bool swap_use_vma_readahead(void)
  * unlocked and with its refcount incremented - we rely on the kernel
  * lock getting page table operations atomic even if we drop the folio
  * lock before returning.
+ *
+ * Caller must lock the swap device or hold a reference to keep it valid.
  */
 struct folio *swap_cache_get_folio(swp_entry_t entry,
 		struct vm_area_struct *vma, unsigned long addr)
 {
 	struct folio *folio;
-	struct swap_info_struct *si;
 
-	si = get_swap_device(entry);
-	if (!si)
-		return NULL;
 	folio = filemap_get_folio(swap_address_space(entry), swp_offset(entry));
-	put_swap_device(si);
-
 	if (folio) {
 		bool vma_ra = swap_use_vma_readahead();
 		bool readahead;
-- 
2.35.2


* [PATCH 5/5] swap: avoid ra statistic lost when swapin races
  2022-12-08 18:02 [PATCH 0/5] Clean up and fixes for swap Kairui Song
                   ` (3 preceding siblings ...)
  2022-12-08 18:02 ` [PATCH 4/5] swap: remove the swap lock in swap_cache_get_folio Kairui Song
@ 2022-12-08 18:02 ` Kairui Song
  2022-12-08 19:14   ` Matthew Wilcox
  4 siblings, 1 reply; 17+ messages in thread
From: Kairui Song @ 2022-12-08 18:02 UTC (permalink / raw)
  To: linux-mm
  Cc: linux-kernel, Andrew Morton, Miaohe Lin, David Hildenbrand,
	Huang, Ying, Hugh Dickins, Kairui Song

From: Kairui Song <kasong@tencent.com>

__read_swap_cache_async should just call swap_cache_get_folio when
trying to look up the swap cache, because swap_cache_get_folio handles
the readahead statistics and clears the RA flag; looking up the cache
directly skips these parts.

And the comment no longer applies after commit 442701e7058b
("mm/swap: remove swap_cache_info statistics"), so just remove it.

Fixes: 442701e7058b ("mm/swap: remove swap_cache_info statistics")
Signed-off-by: Kairui Song <kasong@tencent.com>
---
 mm/swap_state.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index eba388f67741..f39cfb62551d 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -418,15 +418,12 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 	for (;;) {
 		int err;
 		/*
-		 * First check the swap cache.  Since this is normally
-		 * called after swap_cache_get_folio() failed, re-calling
-		 * that would confuse statistics.
+		 * First check the swap cache in case of race.
 		 */
 		si = get_swap_device(entry);
 		if (!si)
 			return NULL;
-		folio = filemap_get_folio(swap_address_space(entry),
-						swp_offset(entry));
+		folio = swap_cache_get_folio(entry, vma, addr);
 		put_swap_device(si);
 		if (folio)
 			return folio_file_page(folio, swp_offset(entry));
-- 
2.35.2


* Re: [PATCH 3/5] swap: fold swap_ra_clamp_pfn into swap_ra_info
  2022-12-08 18:02 ` [PATCH 3/5] swap: fold swap_ra_clamp_pfn into swap_ra_info Kairui Song
@ 2022-12-08 19:08   ` Matthew Wilcox
  2022-12-09  2:00     ` Kairui Song
  2022-12-09  3:23   ` Huang, Ying
  1 sibling, 1 reply; 17+ messages in thread
From: Matthew Wilcox @ 2022-12-08 19:08 UTC (permalink / raw)
  To: Kairui Song
  Cc: linux-mm, linux-kernel, Andrew Morton, Miaohe Lin,
	David Hildenbrand, Huang, Ying, Hugh Dickins

On Fri, Dec 09, 2022 at 02:02:07AM +0800, Kairui Song wrote:
> From: Kairui Song <kasong@tencent.com>
> 
> This makes the code cleaner. The helper consists of only two lines of
> self-explanatory code and is not reused anywhere else.
> 
> And this actually makes the compiled object a bit smaller:
> 
>           text    data     bss     dec     hex filename
> Before:   9502     976      12   10490    28fa mm/swap_state.o
> After:    9470     976      12   10458    28da mm/swap_state.o

FYI, you can use scripts/bloat-o-meter to get a slightly more
useful analysis of object code changes.

* Re: [PATCH 5/5] swap: avoid ra statistic lost when swapin races
  2022-12-08 18:02 ` [PATCH 5/5] swap: avoid ra statistic lost when swapin races Kairui Song
@ 2022-12-08 19:14   ` Matthew Wilcox
  2022-12-09  1:54     ` Kairui Song
  0 siblings, 1 reply; 17+ messages in thread
From: Matthew Wilcox @ 2022-12-08 19:14 UTC (permalink / raw)
  To: Kairui Song
  Cc: linux-mm, linux-kernel, Andrew Morton, Miaohe Lin,
	David Hildenbrand, Huang, Ying, Hugh Dickins

On Fri, Dec 09, 2022 at 02:02:09AM +0800, Kairui Song wrote:
> From: Kairui Song <kasong@tencent.com>
> 
> __read_swap_cache_async should just call swap_cache_get_folio when
> trying to look up the swap cache, because swap_cache_get_folio handles
> the readahead statistics and clears the RA flag; looking up the cache
> directly skips these parts.
> 
> And the comment no longer applies after commit 442701e7058b
> ("mm/swap: remove swap_cache_info statistics"), so just remove it.

But what about the readahead stats?

> Fixes: 442701e7058b ("mm/swap: remove swap_cache_info statistics")
> Signed-off-by: Kairui Song <kasong@tencent.com>
> ---
>  mm/swap_state.c | 7 ++-----
>  1 file changed, 2 insertions(+), 5 deletions(-)
> 
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index eba388f67741..f39cfb62551d 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -418,15 +418,12 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
>  	for (;;) {
>  		int err;
>  		/*
> -		 * First check the swap cache.  Since this is normally
> -		 * called after swap_cache_get_folio() failed, re-calling
> -		 * that would confuse statistics.
> +		 * First check the swap cache in case of race.
>  		 */
>  		si = get_swap_device(entry);
>  		if (!si)
>  			return NULL;
> -		folio = filemap_get_folio(swap_address_space(entry),
> -						swp_offset(entry));
> +		folio = swap_cache_get_folio(entry, vma, addr);
>  		put_swap_device(si);
>  		if (folio)
>  			return folio_file_page(folio, swp_offset(entry));
> -- 
> 2.35.2
> 
> 

* Re: [PATCH 5/5] swap: avoid ra statistic lost when swapin races
  2022-12-08 19:14   ` Matthew Wilcox
@ 2022-12-09  1:54     ` Kairui Song
  2022-12-11 12:02       ` Huang, Ying
  0 siblings, 1 reply; 17+ messages in thread
From: Kairui Song @ 2022-12-09  1:54 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: linux-mm, linux-kernel, Andrew Morton, Miaohe Lin,
	David Hildenbrand, Huang, Ying, Hugh Dickins

Matthew Wilcox <willy@infradead.org> wrote on Fri, Dec 9, 2022 at 03:14:
>

Hi, thanks for the review.

> On Fri, Dec 09, 2022 at 02:02:09AM +0800, Kairui Song wrote:
> > From: Kairui Song <kasong@tencent.com>
> >
> > __read_swap_cache_async should just call swap_cache_get_folio when
> > trying to look up the swap cache, because swap_cache_get_folio handles
> > the readahead statistics and clears the RA flag; looking up the cache
> > directly skips these parts.
> >
> > And the comment no longer applies after commit 442701e7058b
> > ("mm/swap: remove swap_cache_info statistics"), so just remove it.
>
> But what about the readahead stats?
>

Shouldn't the readahead stats be accounted here? __read_swap_cache_async
is called by the swapin path; if it hits the swap cache, and the page
has the readahead flag set, then accounting for that readahead should be
just the right thing to do. And the readahead flag is checked with
folio_test_clear_readahead, so there should be no issue with repeated
accounting.

Only the address info of swap_readahead_info could be updated multiple
times by racing readers, but that seems fine: since we don't know which
swap read comes later in case of a race, just letting the last reader
that hits the swap cache update the readahead address info makes sense
to me.

Or do you mean I should update the comment about the readahead stats
instead of just dropping it?

* Re: [PATCH 3/5] swap: fold swap_ra_clamp_pfn into swap_ra_info
  2022-12-08 19:08   ` Matthew Wilcox
@ 2022-12-09  2:00     ` Kairui Song
  0 siblings, 0 replies; 17+ messages in thread
From: Kairui Song @ 2022-12-09  2:00 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: linux-mm, linux-kernel, Andrew Morton, Miaohe Lin,
	David Hildenbrand, Huang, Ying, Hugh Dickins

Matthew Wilcox <willy@infradead.org> wrote on Fri, Dec 9, 2022 at 03:09:
>
> On Fri, Dec 09, 2022 at 02:02:07AM +0800, Kairui Song wrote:
> > From: Kairui Song <kasong@tencent.com>
> >
> > This makes the code cleaner. The helper consists of only two lines of
> > self-explanatory code and is not reused anywhere else.
> >
> > And this actually makes the compiled object a bit smaller:
> >
> >           text    data     bss     dec     hex filename
> > Before:   9502     976      12   10490    28fa mm/swap_state.o
> > After:    9470     976      12   10458    28da mm/swap_state.o
>
> FYI, you can use scripts/bloat-o-meter to get a slightly more
> useful analysis of object code changes.
>

Thanks! That's very helpful info. I got the following output from bloat-o-meter:

./scripts/bloat-o-meter mm/swap_state.o.old mm/swap_state.o
add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-35 (-35)
Function                                     old     new   delta
swap_ra_info.constprop                       512     477     -35
Total: Before=8388, After=8353, chg -0.42%

I'll include this info in the commit message from now on.

* Re: [PATCH 1/5] swapfile: get rid of volatile and avoid redundant read
  2022-12-08 18:02 ` [PATCH 1/5] swapfile: get rid of volatile and avoid redundant read Kairui Song
@ 2022-12-09  2:48   ` Huang, Ying
  0 siblings, 0 replies; 17+ messages in thread
From: Huang, Ying @ 2022-12-09  2:48 UTC (permalink / raw)
  To: Kairui Song
  Cc: linux-mm, Kairui Song, linux-kernel, Andrew Morton, Miaohe Lin,
	David Hildenbrand, Hugh Dickins

Kairui Song <ryncsn@gmail.com> writes:

> From: Kairui Song <kasong@tencent.com>
>
> Convert a volatile variable to the more readable READ_ONCE. This also
> avoids redundantly reading the variable twice when it races.
>
> Signed-off-by: Kairui Song <kasong@tencent.com>

LGTM, Thanks!

Reviewed-by: "Huang, Ying" <ying.huang@intel.com>

> ---
>  mm/swapfile.c | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 72e481aacd5d..ff4f3cb85232 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1836,13 +1836,13 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
>  	pte_t *pte;
>  	struct swap_info_struct *si;
>  	int ret = 0;
> -	volatile unsigned char *swap_map;
>  
>  	si = swap_info[type];
>  	pte = pte_offset_map(pmd, addr);
>  	do {
>  		struct folio *folio;
>  		unsigned long offset;
> +		unsigned char swp_count;
>  
>  		if (!is_swap_pte(*pte))
>  			continue;
> @@ -1853,7 +1853,6 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
>  
>  		offset = swp_offset(entry);
>  		pte_unmap(pte);
> -		swap_map = &si->swap_map[offset];
>  		folio = swap_cache_get_folio(entry, vma, addr);
>  		if (!folio) {
>  			struct page *page;
> @@ -1870,8 +1869,10 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
>  				folio = page_folio(page);
>  		}
>  		if (!folio) {
> -			if (*swap_map == 0 || *swap_map == SWAP_MAP_BAD)
> +			swp_count = READ_ONCE(si->swap_map[offset]);
> +			if (swp_count == 0 || swp_count == SWAP_MAP_BAD)
>  				goto try_next;
> +
>  			return -ENOMEM;
>  		}

* Re: [PATCH 2/5] swap: avoid a redundant pte map if ra window is 1
  2022-12-08 18:02 ` [PATCH 2/5] swap: avoid a redundant pte map if ra window is 1 Kairui Song
@ 2022-12-09  3:15   ` Huang, Ying
  0 siblings, 0 replies; 17+ messages in thread
From: Huang, Ying @ 2022-12-09  3:15 UTC (permalink / raw)
  To: Kairui Song
  Cc: linux-mm, Kairui Song, linux-kernel, Andrew Morton, Miaohe Lin,
	David Hildenbrand, Hugh Dickins

Kairui Song <ryncsn@gmail.com> writes:

> From: Kairui Song <kasong@tencent.com>
>
> Avoid a redundant pte map/unmap when swap readahead window is 1.
>
> Signed-off-by: Kairui Song <kasong@tencent.com>
> ---
>  mm/swap_state.c | 7 ++-----
>  1 file changed, 2 insertions(+), 5 deletions(-)

Good to reduce the line of code.  Thanks!

Reviewed-by: "Huang, Ying" <ying.huang@intel.com>

> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index 438d0676c5be..60136bda78e3 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -730,8 +730,6 @@ static void swap_ra_info(struct vm_fault *vmf,
>  	}
>  
>  	faddr = vmf->address;
> -	orig_pte = pte = pte_offset_map(vmf->pmd, faddr);
> -
>  	fpfn = PFN_DOWN(faddr);
>  	ra_val = GET_SWAP_RA_VAL(vma);
>  	pfn = PFN_DOWN(SWAP_RA_ADDR(ra_val));
> @@ -742,12 +740,11 @@ static void swap_ra_info(struct vm_fault *vmf,
>  	atomic_long_set(&vma->swap_readahead_info,
>  			SWAP_RA_VAL(faddr, win, 0));
>  
> -	if (win == 1) {
> -		pte_unmap(orig_pte);
> +	if (win == 1)
>  		return;
> -	}
>  
>  	/* Copy the PTEs because the page table may be unmapped */
> +	orig_pte = pte = pte_offset_map(vmf->pmd, faddr);
>  	if (fpfn == pfn + 1)
>  		swap_ra_clamp_pfn(vma, faddr, fpfn, fpfn + win, &start, &end);
>  	else if (pfn == fpfn + 1)

* Re: [PATCH 3/5] swap: fold swap_ra_clamp_pfn into swap_ra_info
  2022-12-08 18:02 ` [PATCH 3/5] swap: fold swap_ra_clamp_pfn into swap_ra_info Kairui Song
  2022-12-08 19:08   ` Matthew Wilcox
@ 2022-12-09  3:23   ` Huang, Ying
  1 sibling, 0 replies; 17+ messages in thread
From: Huang, Ying @ 2022-12-09  3:23 UTC (permalink / raw)
  To: Kairui Song
  Cc: linux-mm, Kairui Song, linux-kernel, Andrew Morton, Miaohe Lin,
	David Hildenbrand, Hugh Dickins

Kairui Song <ryncsn@gmail.com> writes:

> From: Kairui Song <kasong@tencent.com>
>
> This makes the code cleaner. The helper consists of only two lines of
> self-explanatory code and is not reused anywhere else.
>
> And this actually makes the compiled object a bit smaller:
>
>           text    data     bss     dec     hex filename
> Before:   9502     976      12   10490    28fa mm/swap_state.o
> After:    9470     976      12   10458    28da mm/swap_state.o
>
> Signed-off-by: Kairui Song <kasong@tencent.com>
> ---
>  mm/swap_state.c | 44 +++++++++++++++++++-------------------------
>  1 file changed, 19 insertions(+), 25 deletions(-)

LGTM, Thanks!

Reviewed-by: "Huang, Ying" <ying.huang@intel.com>

> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index 60136bda78e3..19089417abd1 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -696,28 +696,15 @@ void exit_swap_address_space(unsigned int type)
>  	swapper_spaces[type] = NULL;
>  }
>  
> -static inline void swap_ra_clamp_pfn(struct vm_area_struct *vma,
> -				     unsigned long faddr,
> -				     unsigned long lpfn,
> -				     unsigned long rpfn,
> -				     unsigned long *start,
> -				     unsigned long *end)
> -{
> -	*start = max3(lpfn, PFN_DOWN(vma->vm_start),
> -		      PFN_DOWN(faddr & PMD_MASK));
> -	*end = min3(rpfn, PFN_DOWN(vma->vm_end),
> -		    PFN_DOWN((faddr & PMD_MASK) + PMD_SIZE));
> -}
> -
>  static void swap_ra_info(struct vm_fault *vmf,
> -			struct vma_swap_readahead *ra_info)
> +			 struct vma_swap_readahead *ra_info)
>  {
>  	struct vm_area_struct *vma = vmf->vma;
>  	unsigned long ra_val;
> -	unsigned long faddr, pfn, fpfn;
> +	unsigned long faddr, pfn, fpfn, lpfn, rpfn;
>  	unsigned long start, end;
>  	pte_t *pte, *orig_pte;
> -	unsigned int max_win, hits, prev_win, win, left;
> +	unsigned int max_win, hits, prev_win, win;
>  #ifndef CONFIG_64BIT
>  	pte_t *tpte;
>  #endif
> @@ -745,16 +732,23 @@ static void swap_ra_info(struct vm_fault *vmf,
>  
>  	/* Copy the PTEs because the page table may be unmapped */
>  	orig_pte = pte = pte_offset_map(vmf->pmd, faddr);
> -	if (fpfn == pfn + 1)
> -		swap_ra_clamp_pfn(vma, faddr, fpfn, fpfn + win, &start, &end);
> -	else if (pfn == fpfn + 1)
> -		swap_ra_clamp_pfn(vma, faddr, fpfn - win + 1, fpfn + 1,
> -				  &start, &end);
> -	else {
> -		left = (win - 1) / 2;
> -		swap_ra_clamp_pfn(vma, faddr, fpfn - left, fpfn + win - left,
> -				  &start, &end);
> +	if (fpfn == pfn + 1) {
> +		lpfn = fpfn;
> +		rpfn = fpfn + win;
> +	} else if (pfn == fpfn + 1) {
> +		lpfn = fpfn - win + 1;
> +		rpfn = fpfn + 1;
> +	} else {
> +		unsigned int left = (win - 1) / 2;
> +
> +		lpfn = fpfn - left;
> +		rpfn = fpfn + win - left;
>  	}
> +	start = max3(lpfn, PFN_DOWN(vma->vm_start),
> +		     PFN_DOWN(faddr & PMD_MASK));
> +	end = min3(rpfn, PFN_DOWN(vma->vm_end),
> +		   PFN_DOWN((faddr & PMD_MASK) + PMD_SIZE));
> +
>  	ra_info->nr_pte = end - start;
>  	ra_info->offset = fpfn - start;
>  	pte -= ra_info->offset;

* Re: [PATCH 4/5] swap: remove the swap lock in swap_cache_get_folio
  2022-12-08 18:02 ` [PATCH 4/5] swap: remove the swap lock in swap_cache_get_folio Kairui Song
@ 2022-12-11 11:39   ` Huang, Ying
  2022-12-11 11:47     ` Kairui Song
  0 siblings, 1 reply; 17+ messages in thread
From: Huang, Ying @ 2022-12-11 11:39 UTC (permalink / raw)
  To: Kairui Song
  Cc: linux-mm, Kairui Song, linux-kernel, Andrew Morton, Miaohe Lin,
	David Hildenbrand, Hugh Dickins

Kairui Song <ryncsn@gmail.com> writes:

> From: Kairui Song <kasong@tencent.com>
>
> There is only one caller that does not keep holding a reference or a
> lock on the swap device while calling this function. Just move the
> locking out of this function; it is only used to prevent swapoff, and
> this helper function is very short, so there is no performance
> regression. This helps save a few cycles.

> Subject: Re: [PATCH 4/5] swap: remove the swap lock in swap_cache_get_folio

I don't think you remove the `swap lock` in swap_cache_get_folio().
You just avoid incrementing/decrementing the reference count.

And I think it's better to add '()' after swap_cache_get_folio to make
it clear that it's a function.

> Signed-off-by: Kairui Song <kasong@tencent.com>
> ---
>  mm/shmem.c      | 8 +++++++-
>  mm/swap_state.c | 8 ++------
>  2 files changed, 9 insertions(+), 7 deletions(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index c1d8b8a1aa3b..0183b6678270 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1725,6 +1725,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
>  	struct address_space *mapping = inode->i_mapping;
>  	struct shmem_inode_info *info = SHMEM_I(inode);
>  	struct mm_struct *charge_mm = vma ? vma->vm_mm : NULL;
> +	struct swap_info_struct *si;
>  	struct folio *folio = NULL;
>  	swp_entry_t swap;
>  	int error;
> @@ -1737,7 +1738,12 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
>  		return -EIO;
>  
>  	/* Look it up and read it in.. */
> -	folio = swap_cache_get_folio(swap, NULL, 0);
> +	si = get_swap_device(swap);
> +	if (si) {
> +		folio = swap_cache_get_folio(swap, NULL, 0);
> +		put_swap_device(si);

I'd rather call put_swap_device() at the end of the function.  That is,
whenever we get a swap entry without a proper lock/reference to prevent
swapoff, we should call get_swap_device() to check its validity and
prevent the swap device from being swapped off.

Best Regards,
Huang, Ying

> +	}
> +
>  	if (!folio) {
>  		/* Or update major stats only when swapin succeeds?? */
>  		if (fault_type) {
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index 19089417abd1..eba388f67741 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -324,19 +324,15 @@ static inline bool swap_use_vma_readahead(void)
>   * unlocked and with its refcount incremented - we rely on the kernel
>   * lock getting page table operations atomic even if we drop the folio
>   * lock before returning.
> + *
> + * Caller must lock the swap device or hold a reference to keep it valid.
>   */
>  struct folio *swap_cache_get_folio(swp_entry_t entry,
>  		struct vm_area_struct *vma, unsigned long addr)
>  {
>  	struct folio *folio;
> -	struct swap_info_struct *si;
>  
> -	si = get_swap_device(entry);
> -	if (!si)
> -		return NULL;
>  	folio = filemap_get_folio(swap_address_space(entry), swp_offset(entry));
> -	put_swap_device(si);
> -
>  	if (folio) {
>  		bool vma_ra = swap_use_vma_readahead();
>  		bool readahead;

* Re: [PATCH 4/5] swap: remove the swap lock in swap_cache_get_folio
  2022-12-11 11:39   ` Huang, Ying
@ 2022-12-11 11:47     ` Kairui Song
  0 siblings, 0 replies; 17+ messages in thread
From: Kairui Song @ 2022-12-11 11:47 UTC (permalink / raw)
  To: Huang, Ying
  Cc: linux-mm, linux-kernel, Andrew Morton, Miaohe Lin,
	David Hildenbrand, Hugh Dickins

Huang, Ying <ying.huang@intel.com> wrote on Sun, Dec 11, 2022 at 19:40:
>
> Kairui Song <ryncsn@gmail.com> writes:
>
> > From: Kairui Song <kasong@tencent.com>
> >
> > There is only one caller that does not keep holding a reference or a
> > lock on the swap device while calling this function. Just move the
> > locking out of this function; it is only used to prevent swapoff, and
> > this helper function is very short, so there is no performance
> > regression. This helps save a few cycles.
>
> > Subject: Re: [PATCH 4/5] swap: remove the swap lock in swap_cache_get_folio
>
> I don't think you remove the `swap lock` in swap_cache_get_folio(); you
> just avoid incrementing/decrementing the reference count.

Yes, that's more accurate. It's kind of like locking the swap device
against being swapped off, so I used an inaccurate word. I'll correct
this in V2.

>
> And I think it's better to add '()' after swap_cache_get_folio to make
> it clear it's a function.

Good suggestion.

>
> > Signed-off-by: Kairui Song <kasong@tencent.com>
> > ---
> >  mm/shmem.c      | 8 +++++++-
> >  mm/swap_state.c | 8 ++------
> >  2 files changed, 9 insertions(+), 7 deletions(-)
> >
> > diff --git a/mm/shmem.c b/mm/shmem.c
> > index c1d8b8a1aa3b..0183b6678270 100644
> > --- a/mm/shmem.c
> > +++ b/mm/shmem.c
> > @@ -1725,6 +1725,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
> >       struct address_space *mapping = inode->i_mapping;
> >       struct shmem_inode_info *info = SHMEM_I(inode);
> >       struct mm_struct *charge_mm = vma ? vma->vm_mm : NULL;
> > +     struct swap_info_struct *si;
> >       struct folio *folio = NULL;
> >       swp_entry_t swap;
> >       int error;
> > @@ -1737,7 +1738,12 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
> >               return -EIO;
> >
> >       /* Look it up and read it in.. */
> > -     folio = swap_cache_get_folio(swap, NULL, 0);
> > +     si = get_swap_device(swap);
> > +     if (si) {
> > +             folio = swap_cache_get_folio(swap, NULL, 0);
> > +             put_swap_device(si);
>
> I'd rather call put_swap_device() at the end of the function.  That is,
> whenever we get a swap entry without a proper lock/reference to prevent
> swapoff, we should call get_swap_device() to check its validity and
> prevent the swap device from being swapped off.

Yes, that's the right way to do it; my code is buggy here. Sorry for
being so careless, I'll fix it.

>
> Best Regards,
> Huang, Ying
>
> > +     }
> > +
> >       if (!folio) {
> >               /* Or update major stats only when swapin succeeds?? */
> >               if (fault_type) {
> > diff --git a/mm/swap_state.c b/mm/swap_state.c
> > index 19089417abd1..eba388f67741 100644
> > --- a/mm/swap_state.c
> > +++ b/mm/swap_state.c
> > @@ -324,19 +324,15 @@ static inline bool swap_use_vma_readahead(void)
> >   * unlocked and with its refcount incremented - we rely on the kernel
> >   * lock getting page table operations atomic even if we drop the folio
> >   * lock before returning.
> > + *
> > + * Caller must lock the swap device or hold a reference to keep it valid.
> >   */
> >  struct folio *swap_cache_get_folio(swp_entry_t entry,
> >               struct vm_area_struct *vma, unsigned long addr)
> >  {
> >       struct folio *folio;
> > -     struct swap_info_struct *si;
> >
> > -     si = get_swap_device(entry);
> > -     if (!si)
> > -             return NULL;
> >       folio = filemap_get_folio(swap_address_space(entry), swp_offset(entry));
> > -     put_swap_device(si);
> > -
> >       if (folio) {
> >               bool vma_ra = swap_use_vma_readahead();
> >               bool readahead;


* Re: [PATCH 5/5] swap: avoid ra statistic lost when swapin races
  2022-12-09  1:54     ` Kairui Song
@ 2022-12-11 12:02       ` Huang, Ying
  2022-12-11 12:15         ` Kairui Song
  0 siblings, 1 reply; 17+ messages in thread
From: Huang, Ying @ 2022-12-11 12:02 UTC (permalink / raw)
  To: Kairui Song
  Cc: Matthew Wilcox, linux-mm, linux-kernel, Andrew Morton,
	Miaohe Lin, David Hildenbrand, Hugh Dickins

Kairui Song <ryncsn@gmail.com> writes:

> Matthew Wilcox <willy@infradead.org> wrote on Friday, December 9, 2022 at 03:14:
>>
>
> Hi, thanks for the review.
>
>> On Fri, Dec 09, 2022 at 02:02:09AM +0800, Kairui Song wrote:
>> > From: Kairui Song <kasong@tencent.com>
>> >
>> > __read_swap_cache_async should just call swap_cache_get_folio when trying
>> > to look up the swap cache, because swap_cache_get_folio handles the
>> > readahead statistics and clears the RA flag; looking up the cache
>> > directly skips these parts.
>> >
>> > And the comment no longer applies after commit 442701e7058b
>> > ("mm/swap: remove swap_cache_info statistics"), just remove them.
>>
>> But what about the readahead stats?
>>
>
> Shouldn't readahead stats be accounted here? __read_swap_cache_async
> is called in the swap read-in path; if it hits the swap cache, and the
> page has the readahead flag set, then accounting that readahead
> is just the right thing to do. And the readahead flag is checked
> with folio_test_clear_readahead, so there should be no issue of
> repeated accounting.
>
> Only the addr info of the swap_readahead_info could be updated
> multiple times by racing readers, but I think that is fine: since
> we don't know which swap read comes later in case of a race, just
> letting the last reader that hits the swap cache update the address
> info of the readahead makes sense to me.
>
> Or do you mean I should update the comment about the readahead stat
> instead of just dropping the comment?

__read_swap_cache_async() is called by readahead too
(swap_vma_readahead/__read_swap_cache_async).  I don't think it's a
good idea to do swap readahead operations in this function.

Best Regards,
Huang, Ying


* Re: [PATCH 5/5] swap: avoid ra statistic lost when swapin races
  2022-12-11 12:02       ` Huang, Ying
@ 2022-12-11 12:15         ` Kairui Song
  0 siblings, 0 replies; 17+ messages in thread
From: Kairui Song @ 2022-12-11 12:15 UTC (permalink / raw)
  To: Huang, Ying
  Cc: Matthew Wilcox, linux-mm, linux-kernel, Andrew Morton,
	Miaohe Lin, David Hildenbrand, Hugh Dickins

Huang, Ying <ying.huang@intel.com> wrote on Sunday, December 11, 2022 at 20:03:
>
> Kairui Song <ryncsn@gmail.com> writes:
>
> > Matthew Wilcox <willy@infradead.org> wrote on Friday, December 9, 2022 at 03:14:
> >>
> >
> > Hi, thanks for the review.
> >
> >> On Fri, Dec 09, 2022 at 02:02:09AM +0800, Kairui Song wrote:
> >> > From: Kairui Song <kasong@tencent.com>
> >> >
> >> > __read_swap_cache_async should just call swap_cache_get_folio when trying
> >> > to look up the swap cache, because swap_cache_get_folio handles the
> >> > readahead statistics and clears the RA flag; looking up the cache
> >> > directly skips these parts.
> >> >
> >> > And the comment no longer applies after commit 442701e7058b
> >> > ("mm/swap: remove swap_cache_info statistics"), just remove them.
> >>
> >> But what about the readahead stats?
> >>
> >
> > Shouldn't readahead stats be accounted here? __read_swap_cache_async
> > is called in the swap read-in path; if it hits the swap cache, and the
> > page has the readahead flag set, then accounting that readahead
> > is just the right thing to do. And the readahead flag is checked
> > with folio_test_clear_readahead, so there should be no issue of
> > repeated accounting.
> >
> > Only the addr info of the swap_readahead_info could be updated
> > multiple times by racing readers, but I think that is fine: since
> > we don't know which swap read comes later in case of a race, just
> > letting the last reader that hits the swap cache update the address
> > info of the readahead makes sense to me.
> >
> > Or do you mean I should update the comment about the readahead stat
> > instead of just dropping the comment?
>
> __read_swap_cache_async() is called by readahead too
> (swap_vma_readahead/__read_swap_cache_async).  I don't think it's a
> good idea to do swap readahead operations in this function.

Ah, I got it.
Thanks for pointing out the issue, I'll drop this patch.

>
> Best Regards,
> Huang, Ying


end of thread, other threads:[~2022-12-11 12:17 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-12-08 18:02 [PATCH 0/5] Clean up and fixes for swap Kairui Song
2022-12-08 18:02 ` [PATCH 1/5] swapfile: get rid of volatile and avoid redundant read Kairui Song
2022-12-09  2:48   ` Huang, Ying
2022-12-08 18:02 ` [PATCH 2/5] swap: avoid a redundant pte map if ra window is 1 Kairui Song
2022-12-09  3:15   ` Huang, Ying
2022-12-08 18:02 ` [PATCH 3/5] swap: fold swap_ra_clamp_pfn into swap_ra_info Kairui Song
2022-12-08 19:08   ` Matthew Wilcox
2022-12-09  2:00     ` Kairui Song
2022-12-09  3:23   ` Huang, Ying
2022-12-08 18:02 ` [PATCH 4/5] swap: remove the swap lock in swap_cache_get_folio Kairui Song
2022-12-11 11:39   ` Huang, Ying
2022-12-11 11:47     ` Kairui Song
2022-12-08 18:02 ` [PATCH 5/5] swap: avoid ra statistic lost when swapin races Kairui Song
2022-12-08 19:14   ` Matthew Wilcox
2022-12-09  1:54     ` Kairui Song
2022-12-11 12:02       ` Huang, Ying
2022-12-11 12:15         ` Kairui Song
