* [PATCH v3] mm/vmscan: add a fatal signals check in drop_slab_node
@ 2020-09-15 11:40 zangchunxin
From: zangchunxin @ 2020-09-15 11:40 UTC
  To: akpm; +Cc: linux-mm, linux-kernel, Chunxin Zang, Muchun Song

From: Chunxin Zang <zangchunxin@bytedance.com>

On our servers there are about 10k memcgs on a single machine, and they
use memory very frequently. We have observed that drop_caches can take
a considerable amount of time, and that there is no way to stop it.

There are two reasons:
1. Somebody is constantly generating more objects to reclaim while
   drop_caches runs, so 'freed' always ends up bigger than 10 and the
   loop never terminates (see the loop sketch below).
2. The process never gets a chance to handle signals.
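
For reference, the loop being patched looks roughly like this (the
surrounding context is reconstructed from the hunk below, so treat it
as a sketch); the 'freed > 10' exit condition is what keeps reason 1
spinning forever:

  void drop_slab_node(int nid)
  {
  	unsigned long freed;

  	do {
  		struct mem_cgroup *memcg = NULL;

  		freed = 0;
  		memcg = mem_cgroup_iter(NULL, NULL, NULL);
  		do {
  			/* walk every shrinker of every memcg on this node */
  			freed += shrink_slab(GFP_KERNEL, nid, memcg, 0);
  		} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);
  	} while (freed > 10);	/* stays true while others keep allocating */
  }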

We can get the following info through 'ps':

  root:~# ps -aux | grep drop
  root  357956 ... R    Aug25 21119854:55 echo 3 > /proc/sys/vm/drop_caches
  root 1771385 ... R    Aug16 21146421:17 echo 3 > /proc/sys/vm/drop_caches
  root 1986319 ... R    18:56 117:27 echo 3 > /proc/sys/vm/drop_caches
  root 2002148 ... R    Aug24 5720:39 echo 3 > /proc/sys/vm/drop_caches
  root 2564666 ... R    18:59 113:58 echo 3 > /proc/sys/vm/drop_caches
  root 2639347 ... R    Sep03 2383:39 echo 3 > /proc/sys/vm/drop_caches
  root 3904747 ... R    03:35 993:31 echo 3 > /proc/sys/vm/drop_caches
  root 4016780 ... R    Aug21 7882:18 echo 3 > /proc/sys/vm/drop_caches

Using bpftrace to trace the 'freed' value in drop_slab_node:

  root:~# bpftrace -e 'kprobe:drop_slab_node+70 {@ret=hist(reg("bp")); }'
  Attaching 1 probe...
  ^B^C

  @ret:
  [64, 128)        1 |                                                    |
  [128, 256)      28 |                                                    |
  [256, 512)     107 |@                                                   |
  [512, 1K)      298 |@@@                                                 |
  [1K, 2K)       613 |@@@@@@@                                             |
  [2K, 4K)      4435 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
  [4K, 8K)       442 |@@@@@                                               |
  [8K, 16K)      299 |@@@                                                 |
  [16K, 32K)     100 |@                                                   |
  [32K, 64K)     139 |@                                                   |
  [64K, 128K)     56 |                                                    |
  [128K, 256K)    26 |                                                    |
  [256K, 512K)     2 |                                                    |

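The '+70' offset and the bp register are build-specific, so the
one-liner above only works on this particular kernel. A less
build-dependent sketch (assuming shrink_slab() is not inlined and is
therefore visible to kprobes) is to histogram how much each
shrink_slab() call frees:

  root:~# bpftrace -e 'kretprobe:shrink_slab { @freed = hist(retval); }'
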
We need a way to stop the process.
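
With such a check in place the writer can be interrupted from
userspace, for example (using the first pid from the ps output above):

  root:~# kill -INT 357956

Since signal_pending() reacts to any pending signal, a plain ^C in the
shell that ran the echo works as well.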

Signed-off-by: Chunxin Zang <zangchunxin@bytedance.com>
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---

	Changelog in v3:
	1) Updated the description of the patch.
	   (v2 was titled: mm/vmscan: fix infinite loop in drop_slab_node)

	Changelog in v2:
	1) Bail out of the loop by checking for a fatal signal.

 mm/vmscan.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index b6d84326bdf2..6b2b5d420510 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -704,6 +704,9 @@ void drop_slab_node(int nid)
 	do {
 		struct mem_cgroup *memcg = NULL;
 
+		if (signal_pending(current))
+			return;
+
 		freed = 0;
 		memcg = mem_cgroup_iter(NULL, NULL, NULL);
 		do {
-- 
2.11.0




* Re: [PATCH v3] mm/vmscan: add a fatal signals check in drop_slab_node
From: Michal Hocko @ 2020-09-15 12:13 UTC
  To: zangchunxin; +Cc: akpm, linux-mm, linux-kernel, Muchun Song

On Tue 15-09-20 19:40:01, zangchunxin@bytedance.com wrote:
> [...]
> 
> Using bpftrace to trace the 'freed' value in drop_slab_node:
> 
>   root:~# bpftrace -e 'kprobe:drop_slab_node+70 {@ret=hist(reg("bp")); }'
>   Attaching 1 probe...
>   ^B^C
> 
>   @ret:
>   [64, 128)        1 |                                                    |
>   [128, 256)      28 |                                                    |
>   [256, 512)     107 |@                                                   |
>   [512, 1K)      298 |@@@                                                 |
>   [1K, 2K)       613 |@@@@@@@                                             |
>   [2K, 4K)      4435 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
>   [4K, 8K)       442 |@@@@@                                               |
>   [8K, 16K)      299 |@@@                                                 |
>   [16K, 32K)     100 |@                                                   |
>   [32K, 64K)     139 |@                                                   |
>   [64K, 128K)     56 |                                                    |
>   [128K, 256K)    26 |                                                    |
>   [256K, 512K)     2 |                                                    |

I am not sure this is very helpful for this patch but whatever.

> We need a way to stop the process.

I would use the following instead
"
Add a bail out on the fatal signals in the main loop so that the
operation can be terminated by userspace.
"

> 
> Signed-off-by: Chunxin Zang <zangchunxin@bytedance.com>
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>

Acked-by: Michal Hocko <mhocko@suse.com>

> [...]

-- 
Michal Hocko
SUSE Labs



* Re: [External] Re: [PATCH v3] mm/vmscan: add a fatal signals check in drop_slab_node
From: Chunxin Zang @ 2020-09-15 13:04 UTC
  To: Michal Hocko
  Cc: Andrew Morton, Linux Memory Management List, LKML, Muchun Song

On Tue, Sep 15, 2020 at 8:13 PM Michal Hocko <mhocko@suse.com> wrote:
>
> On Tue 15-09-20 19:40:01, zangchunxin@bytedance.com wrote:
> > Using bpftrace to trace the 'freed' value in drop_slab_node:
> > [...]
>
> I am not sure this is very helpful for this patch but whatever.
>

Yes, it looks a bit messy, I will delete that.

> > We need a way to stop the process.
>
> I would use the following instead
> "
> Add a bail out on the fatal signals in the main loop so that the
> operation can be terminated by userspace.
> "
>

Thanks, will do that :)

> [...]

Best wishes
Chunxin


