linux-mm.kvack.org archive mirror
* [PATCH V2 1/1] mm: improve the performance during fork
@ 2021-03-29 12:36 qianjun.kernel
  2021-03-31  5:44 ` Andrew Morton
  0 siblings, 1 reply; 5+ messages in thread
From: qianjun.kernel @ 2021-03-29 12:36 UTC (permalink / raw)
  To: akpm, ast, daniel, kafai, songliubraving, yhs, andriin,
	john.fastabend, kpsingh, linux-mm
  Cc: linux-kernel, netdev, bpf, jun qian

From: jun qian <qianjun.kernel@gmail.com>

In our project, many business delays come from fork(), so we
started looking into why fork is time-consuming. Using ftrace
with the function_graph tracer, I found that vm_normal_page()
is called tens of thousands of times during a fork, while each
invocation takes only a few nanoseconds. vm_normal_page() is
not an inline function, so I think making it inline may reduce
the call overhead.

I did the following experiment:

Use the bpftrace tool to trace the fork time:

bpftrace -e 'kprobe:_do_fork /comm=="redis-server"/ {@st=nsecs;}
kretprobe:_do_fork /comm=="redis-server"/
{printf("the fork time is %d us\n", (nsecs-@st)/1000)}'
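
On v5.10 and later kernels, _do_fork was renamed to kernel_clone,
so the equivalent trace (an untested adaptation) would be:

bpftrace -e 'kprobe:kernel_clone /comm=="redis-server"/ {@st=nsecs;}
kretprobe:kernel_clone /comm=="redis-server"/
{printf("the fork time is %d us\n", (nsecs-@st)/1000)}'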

no inline vm_normal_page:
result:
the fork time is 40743 us
the fork time is 41746 us
the fork time is 41336 us
the fork time is 42417 us
the fork time is 40612 us
the fork time is 40930 us
the fork time is 41910 us

inline vm_normal_page:
result:
the fork time is 39276 us
the fork time is 38974 us
the fork time is 39436 us
the fork time is 38815 us
the fork time is 39878 us
the fork time is 39176 us

In the same test environment, we get a 3% to 4% performance
improvement.

Note: the test data is from kernel 4.18.0-193.6.3.el8_2.v1.1.x86_64,
because our product uses this kernel version to run the redis
server. For test data against the latest kernel, refer to the
version 1 patch.

Comparing the change in vmlinux size:
                  inline           non-inline       diff
vmlinux size      9709248 bytes    9709824 bytes    -576 bytes

Signed-off-by: jun qian <qianjun.kernel@gmail.com>
---
 mm/memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memory.c b/mm/memory.c
index eeae590e526a..6ade9748d425 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -592,7 +592,7 @@ static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr,
  * PFNMAP mappings in order to support COWable mappings.
  *
  */
-struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
+inline struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
 			    pte_t pte)
 {
 	unsigned long pfn = pte_pfn(pte);
-- 
2.18.2




* Re: [PATCH V2 1/1] mm: improve the performance during fork
  2021-03-29 12:36 [PATCH V2 1/1] mm: improve the performance during fork qianjun.kernel
@ 2021-03-31  5:44 ` Andrew Morton
  2021-03-31 12:11   ` Vlastimil Babka
  2021-04-06  2:14   ` jun qian
  0 siblings, 2 replies; 5+ messages in thread
From: Andrew Morton @ 2021-03-31  5:44 UTC (permalink / raw)
  To: qianjun.kernel
  Cc: ast, daniel, kafai, songliubraving, yhs, andriin, john.fastabend,
	kpsingh, linux-mm, linux-kernel, netdev, bpf

On Mon, 29 Mar 2021 20:36:35 +0800 qianjun.kernel@gmail.com wrote:

> From: jun qian <qianjun.kernel@gmail.com>
> 
> In our project, many business delays come from fork(), so we
> started looking into why fork is time-consuming. Using ftrace
> with the function_graph tracer, I found that vm_normal_page()
> is called tens of thousands of times during a fork, while each
> invocation takes only a few nanoseconds. vm_normal_page() is
> not an inline function, so I think making it inline may reduce
> the call overhead.
> 
> I did the following experiment:
> 
> Use the bpftrace tool to trace the fork time:
> 
> bpftrace -e 'kprobe:_do_fork /comm=="redis-server"/ {@st=nsecs;}
> kretprobe:_do_fork /comm=="redis-server"/
> {printf("the fork time is %d us\n", (nsecs-@st)/1000)}'
> 
> no inline vm_normal_page:
> result:
> the fork time is 40743 us
> the fork time is 41746 us
> the fork time is 41336 us
> the fork time is 42417 us
> the fork time is 40612 us
> the fork time is 40930 us
> the fork time is 41910 us
> 
> inline vm_normal_page:
> result:
> the fork time is 39276 us
> the fork time is 38974 us
> the fork time is 39436 us
> the fork time is 38815 us
> the fork time is 39878 us
> the fork time is 39176 us
> 
> In the same test environment, we get a 3% to 4% performance
> improvement.
> 
> Note: the test data is from kernel 4.18.0-193.6.3.el8_2.v1.1.x86_64,
> because our product uses this kernel version to run the redis
> server. For test data against the latest kernel, refer to the
> version 1 patch.
> 
> Comparing the change in vmlinux size:
>                   inline           non-inline       diff
> vmlinux size      9709248 bytes    9709824 bytes    -576 bytes
> 

I get very different results with gcc-7.2.0:

q:/usr/src/25> size mm/memory.o
   text    data     bss     dec     hex filename
  74898    3375      64   78337   13201 mm/memory.o-before
  75119    3363      64   78546   132d2 mm/memory.o-after

That's a somewhat significant increase in code size, and larger code
size worsens the cache footprint.

Not that this is necessarily a bad thing for a function which is
called many times in tight succession, as vm_normal_page() is.

> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -592,7 +592,7 @@ static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr,
>   * PFNMAP mappings in order to support COWable mappings.
>   *
>   */
> -struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
> +inline struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
>  			    pte_t pte)
>  {
>  	unsigned long pfn = pte_pfn(pte);

I'm a bit surprised this made any difference - rumour has it that
modern gcc just ignores `inline' and makes up its own mind.  Which is
why we added __always_inline.
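
For reference, the distinction (a minimal sketch; the helper names
here are made up for illustration): in kernel headers __always_inline
expands to inline __attribute__((__always_inline__)), which forces
inlining, while plain `inline' remains advisory:

/* Hint only: gcc may still keep this out of line. */
static inline int add_hint(int x)
{
	return x + 1;
}

/* Forced: inlined at every call site regardless of gcc's heuristics. */
static __always_inline int add_forced(int x)
{
	return x + 1;
}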




* Re: [PATCH V2 1/1] mm: improve the performance during fork
  2021-03-31  5:44 ` Andrew Morton
@ 2021-03-31 12:11   ` Vlastimil Babka
  2021-03-31 14:42     ` Vlastimil Babka
  2021-04-06  2:14   ` jun qian
  1 sibling, 1 reply; 5+ messages in thread
From: Vlastimil Babka @ 2021-03-31 12:11 UTC (permalink / raw)
  To: Andrew Morton, qianjun.kernel
  Cc: ast, daniel, kafai, songliubraving, yhs, andriin, john.fastabend,
	kpsingh, linux-mm, linux-kernel, netdev, bpf

On 3/31/21 7:44 AM, Andrew Morton wrote:
> On Mon, 29 Mar 2021 20:36:35 +0800 qianjun.kernel@gmail.com wrote:
> 
>> From: jun qian <qianjun.kernel@gmail.com>
>> 
>> In our project, many business delays come from fork(), so we
>> started looking into why fork is time-consuming. Using ftrace
>> with the function_graph tracer, I found that vm_normal_page()
>> is called tens of thousands of times during a fork, while each
>> invocation takes only a few nanoseconds. vm_normal_page() is
>> not an inline function, so I think making it inline may reduce
>> the call overhead.
>> 
>> I did the following experiment:
>> 
>> Use the bpftrace tool to trace the fork time:
>> 
>> bpftrace -e 'kprobe:_do_fork /comm=="redis-server"/ {@st=nsecs;}
>> kretprobe:_do_fork /comm=="redis-server"/
>> {printf("the fork time is %d us\n", (nsecs-@st)/1000)}'
>> 
>> no inline vm_normal_page:
>> result:
>> the fork time is 40743 us
>> the fork time is 41746 us
>> the fork time is 41336 us
>> the fork time is 42417 us
>> the fork time is 40612 us
>> the fork time is 40930 us
>> the fork time is 41910 us
>> 
>> inline vm_normal_page:
>> result:
>> the fork time is 39276 us
>> the fork time is 38974 us
>> the fork time is 39436 us
>> the fork time is 38815 us
>> the fork time is 39878 us
>> the fork time is 39176 us
>> 
>> In the same test environment, we get a 3% to 4% performance
>> improvement.
>> 
>> Note: the test data is from kernel 4.18.0-193.6.3.el8_2.v1.1.x86_64,
>> because our product uses this kernel version to run the redis
>> server. For test data against the latest kernel, refer to the
>> version 1 patch.
>> 
>> Comparing the change in vmlinux size:
>>                   inline           non-inline       diff
>> vmlinux size      9709248 bytes    9709824 bytes    -576 bytes
>> 
> 
> I get very different results with gcc-7.2.0:
> 
> q:/usr/src/25> size mm/memory.o
>    text    data     bss     dec     hex filename
>   74898    3375      64   78337   13201 mm/memory.o-before
>   75119    3363      64   78546   132d2 mm/memory.o-after

I got this:

./scripts/bloat-o-meter memory.o.before mm/memory.o
add/remove: 0/0 grow/shrink: 1/3 up/down: 285/-86 (199)
Function                                     old     new   delta
copy_pte_range                              2095    2380    +285
vm_normal_page                               168     163      -5
do_anonymous_page                           1039    1003     -36
do_swap_page                                1835    1790     -45
Total: Before=42411, After=42610, chg +0.47%


> That's a somewhat significant increase in code size, and larger code
> size worsens the cache footprint.
> 
> Not that this is necessarily a bad thing for a function which is
> called many times in tight succession, as vm_normal_page() is.

Hm, but the inline only affects callers within mm/memory.c, unless the
kernel is built with link time optimization (LTO), which AFAIK is not
standard yet.
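
To illustrate (a hypothetical two-file sketch, assuming the kernel's
gnu89/gnu_inline semantics, where a plain `inline' definition still
emits the out-of-line symbol):

/* foo.h */
int add1(int x);

/* foo.c: same-file callers may get add1() inlined */
inline int add1(int x) { return x + 1; }
int same_tu(int x) { return add1(x); }

/* bar.c: a separate translation unit only sees the declaration in
 * foo.h, so without LTO this is always an out-of-line call */
#include "foo.h"
int other_tu(int x) { return add1(x); }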

>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -592,7 +592,7 @@ static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr,
>>   * PFNMAP mappings in order to support COWable mappings.
>>   *
>>   */
>> -struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
>> +inline struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
>>  			    pte_t pte)
>>  {
>>  	unsigned long pfn = pte_pfn(pte);
> 
> I'm a bit surprised this made any difference - rumour has it that
> modern gcc just ignores `inline' and makes up its own mind.  Which is
> why we added __always_inline.

AFAIK it doesn't completely ignore it, just takes it as a hint in addition to
its own heuristics. So adding the keyword might flip the decision to inline in
some cases, but is not guaranteed to.



* Re: [PATCH V2 1/1] mm: improve the performance during fork
  2021-03-31 12:11   ` Vlastimil Babka
@ 2021-03-31 14:42     ` Vlastimil Babka
  0 siblings, 0 replies; 5+ messages in thread
From: Vlastimil Babka @ 2021-03-31 14:42 UTC (permalink / raw)
  To: Andrew Morton, qianjun.kernel
  Cc: ast, daniel, kafai, songliubraving, yhs, andriin, john.fastabend,
	kpsingh, linux-mm, linux-kernel, netdev, bpf

On 3/31/21 2:11 PM, Vlastimil Babka wrote:
> On 3/31/21 7:44 AM, Andrew Morton wrote:
>> On Mon, 29 Mar 2021 20:36:35 +0800 qianjun.kernel@gmail.com wrote:
>> 
>>> From: jun qian <qianjun.kernel@gmail.com>
>>> 
>>> In our project, many business delays come from fork(), so we
>>> started looking into why fork is time-consuming. Using ftrace
>>> with the function_graph tracer, I found that vm_normal_page()
>>> is called tens of thousands of times during a fork, while each
>>> invocation takes only a few nanoseconds. vm_normal_page() is
>>> not an inline function, so I think making it inline may reduce
>>> the call overhead.
>>> 
>>> I did the following experiment:
>>> 
>>> Use the bpftrace tool to trace the fork time:
>>> 
>>> bpftrace -e 'kprobe:_do_fork /comm=="redis-server"/ {@st=nsecs;}
>>> kretprobe:_do_fork /comm=="redis-server"/
>>> {printf("the fork time is %d us\n", (nsecs-@st)/1000)}'
>>> 
>>> no inline vm_normal_page:
>>> result:
>>> the fork time is 40743 us
>>> the fork time is 41746 us
>>> the fork time is 41336 us
>>> the fork time is 42417 us
>>> the fork time is 40612 us
>>> the fork time is 40930 us
>>> the fork time is 41910 us
>>> 
>>> inline vm_normal_page:
>>> result:
>>> the fork time is 39276 us
>>> the fork time is 38974 us
>>> the fork time is 39436 us
>>> the fork time is 38815 us
>>> the fork time is 39878 us
>>> the fork time is 39176 us
>>> 
>>> In the same test environment, we get a 3% to 4% performance
>>> improvement.
>>> 
>>> Note: the test data is from kernel 4.18.0-193.6.3.el8_2.v1.1.x86_64,
>>> because our product uses this kernel version to run the redis
>>> server. For test data against the latest kernel, refer to the
>>> version 1 patch.
>>> 
>>> Comparing the change in vmlinux size:
>>>                   inline           non-inline       diff
>>> vmlinux size      9709248 bytes    9709824 bytes    -576 bytes
>>> 
>> 
>> I get very different results with gcc-7.2.0:
>> 
>> q:/usr/src/25> size mm/memory.o
>>    text    data     bss     dec     hex filename
>>   74898    3375      64   78337   13201 mm/memory.o-before
>>   75119    3363      64   78546   132d2 mm/memory.o-after
> 
> I got this:
> 
> ./scripts/bloat-o-meter memory.o.before mm/memory.o
> add/remove: 0/0 grow/shrink: 1/3 up/down: 285/-86 (199)
> Function                                     old     new   delta
> copy_pte_range                              2095    2380    +285
> vm_normal_page                               168     163      -5
> do_anonymous_page                           1039    1003     -36
> do_swap_page                                1835    1790     -45
> Total: Before=42411, After=42610, chg +0.47%
> 
> 
>> That's a somewhat significant increase in code size, and larger code
>> size worsens the cache footprint.
>> 
>> Not that this is necessarily a bad thing for a function which is
>> called many times in tight succession, as vm_normal_page() is.
> 
> Hm, but the inline only affects callers within mm/memory.c, unless the
> kernel is built with link time optimization (LTO), which AFAIK is not
> standard yet.

So I tried inlining the fast path of vm_normal_page() for every caller,
see below. It makes a difference only on architectures with
CONFIG_ARCH_HAS_PTE_SPECIAL, where the fast path doesn't even need to
look at the vma flags. Of course inlining has size costs, but there
might be performance benefits, so you might want to measure whether
it's worth it; if so, I can make this a formal patch.

It might be even better if we gave up the highest_memmap_pfn check or
made it CONFIG_DEBUG_VM only.
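
A sketch of that CONFIG_DEBUG_VM-only variant (illustrative only; the
patch below keeps the check unconditional):

static inline struct page *vm_normal_page(struct vm_area_struct *vma,
					  unsigned long addr, pte_t pte)
{
	if (IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL) &&
	    likely(!pte_special(pte))) {
		unsigned long pfn = pte_pfn(pte);

		/* Sanity check; compiled out on non-debug builds. */
		if (!IS_ENABLED(CONFIG_DEBUG_VM) ||
		    likely(pfn <= highest_memmap_pfn))
			return pfn_to_page(pfn);
	}

	return __vm_normal_page(vma, addr, pte);
}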

./scripts/bloat-o-meter vmlinux.before vmlinux.after
add/remove: 1/2 grow/shrink: 27/3 up/down: 2796/-479 (2317)
Function                                     old     new   delta
collapse_pte_mapped_thp                     1141    1364    +223
khugepaged_scan_pmd                         1532    1738    +206
__vm_normal_page                               -     168    +168
pagemap_pmd_range                           1679    1835    +156
__collapse_huge_page_isolate                1485    1628    +143
follow_page_pte                             1454    1596    +142
queue_pages_pte_range                        774     906    +132
clear_refs_pte_range                         944    1061    +117
do_numa_page                                 643     758    +115
__munlock_pagevec_fill                       438     551    +113
do_wp_page                                   567     676    +109
copy_pte_range                              1953    2055    +102
madvise_free_pte_range                      2053    2154    +101
gather_pte_stats                             693     793    +100
zap_pte_range                               1914    2000     +86
mc_handle_present_pte                        144     230     +86
smaps_hugetlb_range                          322     407     +85
clear_soft_dirty                             273     357     +84
get_gate_page                                668     751     +83
madvise_cold_or_pageout_pte_range           2885    2966     +81
get_mctgt_type                               576     657     +81
change_pte_range                            1300    1376     +76
smaps_pte_entry.isra                         455     513     +58
wp_page_copy                                1647    1696     +49
pud_offset.isra                              126     167     +41
do_swap_page                                1754    1790     +36
free_pud_range                              1050    1065     +15
arch_local_irq_enable                        136     144      +8
__pmd_alloc                                  670     668      -2
__handle_mm_fault                           1754    1738     -16
mem_cgroup_move_charge_pte_range            1661    1535    -126
mc_handle_swap_pte.constprop                 167       -    -167
vm_normal_page                               168       -    -168
Total: Before=29768466, After=29770783, chg +0.01%

----8<----
From f70bda6fbb7f17e13f3fa88fac203b7f426d0752 Mon Sep 17 00:00:00 2001
From: Vlastimil Babka <vbabka@suse.cz>
Date: Wed, 31 Mar 2021 15:59:15 +0200
Subject: [PATCH] inline vm_normal_page()

---
 include/linux/mm.h | 18 +++++++++++++++++-
 mm/internal.h      |  2 --
 mm/memory.c        |  2 +-
 3 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 3e4dc6678eb2..1df6ce4ab668 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1711,8 +1711,24 @@ struct zap_details {
 	pgoff_t last_index;			/* Highest page->index to unmap */
 };
 
-struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
+struct page *__vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
 			     pte_t pte);
+extern unsigned long highest_memmap_pfn;
+static inline struct page *vm_normal_page(struct vm_area_struct *vma,
+					  unsigned long addr, pte_t pte)
+{
+	unsigned long pfn;
+
+	if (IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL)
+	    && likely(!pte_special(pte))) {
+		pfn = pte_pfn(pte);
+		if (likely(pfn <= highest_memmap_pfn))
+			return pfn_to_page(pfn);
+	}
+
+	return __vm_normal_page(vma, addr, pte);
+}
+
 struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
 				pmd_t pmd);
 
diff --git a/mm/internal.h b/mm/internal.h
index 547a8d7f0cbb..cca1cbc3f6fa 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -117,8 +117,6 @@ static inline bool is_page_poisoned(struct page *page)
 	return false;
 }
 
-extern unsigned long highest_memmap_pfn;
-
 /*
  * Maximum number of reclaim retries without progress before the OOM
  * killer is consider the only way forward.
diff --git a/mm/memory.c b/mm/memory.c
index 5c3b29d3af66..d801914cfce4 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -603,7 +603,7 @@ static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr,
  * PFNMAP mappings in order to support COWable mappings.
  *
  */
-struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
+struct page *__vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
 			    pte_t pte)
 {
 	unsigned long pfn = pte_pfn(pte);
-- 
2.30.2


* Re: [PATCH V2 1/1] mm: improve the performance during fork
  2021-03-31  5:44 ` Andrew Morton
  2021-03-31 12:11   ` Vlastimil Babka
@ 2021-04-06  2:14   ` jun qian
  1 sibling, 0 replies; 5+ messages in thread
From: jun qian @ 2021-04-06  2:14 UTC (permalink / raw)
  To: Andrew Morton
  Cc: ast, daniel, kafai, songliubraving, yhs, andriin, john.fastabend,
	kpsingh, Linux-MM, linux-kernel, netdev, bpf

Andrew Morton <akpm@linux-foundation.org> 于2021年3月31日周三 下午1:44写道:
>
> On Mon, 29 Mar 2021 20:36:35 +0800 qianjun.kernel@gmail.com wrote:
>
> > From: jun qian <qianjun.kernel@gmail.com>
> >
> > In our project, many business delays come from fork(), so we
> > started looking into why fork is time-consuming. Using ftrace
> > with the function_graph tracer, I found that vm_normal_page()
> > is called tens of thousands of times during a fork, while each
> > invocation takes only a few nanoseconds. vm_normal_page() is
> > not an inline function, so I think making it inline may reduce
> > the call overhead.
> >
> > I did the following experiment:
> >
> > Use the bpftrace tool to trace the fork time:
> >
> > bpftrace -e 'kprobe:_do_fork /comm=="redis-server"/ {@st=nsecs;}
> > kretprobe:_do_fork /comm=="redis-server"/
> > {printf("the fork time is %d us\n", (nsecs-@st)/1000)}'
> >
> > no inline vm_normal_page:
> > result:
> > the fork time is 40743 us
> > the fork time is 41746 us
> > the fork time is 41336 us
> > the fork time is 42417 us
> > the fork time is 40612 us
> > the fork time is 40930 us
> > the fork time is 41910 us
> >
> > inline vm_normal_page:
> > result:
> > the fork time is 39276 us
> > the fork time is 38974 us
> > the fork time is 39436 us
> > the fork time is 38815 us
> > the fork time is 39878 us
> > the fork time is 39176 us
> >
> > In the same test environment, we get a 3% to 4% performance
> > improvement.
> >
> > Note: the test data is from kernel 4.18.0-193.6.3.el8_2.v1.1.x86_64,
> > because our product uses this kernel version to run the redis
> > server. For test data against the latest kernel, refer to the
> > version 1 patch.
> >
> > Comparing the change in vmlinux size:
> >                   inline           non-inline       diff
> > vmlinux size      9709248 bytes    9709824 bytes    -576 bytes
> >
>
> I get very different results with gcc-7.2.0:
>
> q:/usr/src/25> size mm/memory.o
>    text    data     bss     dec     hex filename
>   74898    3375      64   78337   13201 mm/memory.o-before
>   75119    3363      64   78546   132d2 mm/memory.o-after
>
> That's a somewhat significant increase in code size, and larger code
> size worsens the cache footprint.
>
> Not that this is necessarily a bad thing for a function which is
> called many times in tight succession, as vm_normal_page() is.
>
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -592,7 +592,7 @@ static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr,
> >   * PFNMAP mappings in order to support COWable mappings.
> >   *
> >   */
> > -struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
> > +inline struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
> >                           pte_t pte)
> >  {
> >       unsigned long pfn = pte_pfn(pte);
>
> I'm a bit surprised this made any difference - rumour has it that
> modern gcc just ignores `inline' and makes up its own mind.  Which is
> why we added __always_inline.
>
The kernel code version: kernel-4.18.0-193.6.3.el8_2
gcc version: 8.4.1 20200928 (Red Hat 8.4.1-1) (GCC)

I ran the test again and got the results below; later I will test
on the latest kernel with the new gcc.

757368576  vmlinux   inline
757381440  vmlinux   no inline



Thread overview: 5 messages
2021-03-29 12:36 [PATCH V2 1/1] mm: improve the performance during fork qianjun.kernel
2021-03-31  5:44 ` Andrew Morton
2021-03-31 12:11   ` Vlastimil Babka
2021-03-31 14:42     ` Vlastimil Babka
2021-04-06  2:14   ` jun qian
