* [RFC][PATCH] cgroup_threadgroup_rwsem - affects scalability and OOM
@ 2016-08-09 4:19 ` Balbir Singh
0 siblings, 0 replies; 19+ messages in thread
From: Balbir Singh @ 2016-08-09 4:19 UTC (permalink / raw)
To: cgroups; +Cc: Oleg Nesterov, Andrew Morton, Tejun Heo, linux-mm
cgroup_threadgroup_rwsem is acquired in read mode during process exit and fork.
It is also grabbed in write mode during __cgroup_procs_write.
I've recently run into a scenario with lots of memory pressure and OOM,
and I am beginning to see traces like the following:
systemd
__switch_to+0x1f8/0x350
__schedule+0x30c/0x990
schedule+0x48/0xc0
percpu_down_write+0x114/0x170
__cgroup_procs_write.isra.12+0xb8/0x3c0
cgroup_file_write+0x74/0x1a0
kernfs_fop_write+0x188/0x200
__vfs_write+0x6c/0xe0
vfs_write+0xc0/0x230
SyS_write+0x6c/0x110
system_call+0x38/0xb4
This thread is waiting for the reader of cgroup_threadgroup_rwsem to exit its
critical section. The reader itself is under memory pressure and has gone into
reclaim during fork. At times the reader ends up waiting on oom_lock as well.
__switch_to+0x1f8/0x350
__schedule+0x30c/0x990
schedule+0x48/0xc0
jbd2_log_wait_commit+0xd4/0x180
ext4_evict_inode+0x88/0x5c0
evict+0xf8/0x2a0
dispose_list+0x50/0x80
prune_icache_sb+0x6c/0x90
super_cache_scan+0x190/0x210
shrink_slab.part.15+0x22c/0x4c0
shrink_zone+0x288/0x3c0
do_try_to_free_pages+0x1dc/0x590
try_to_free_pages+0xdc/0x260
__alloc_pages_nodemask+0x72c/0xc90
alloc_pages_current+0xb4/0x1a0
page_table_alloc+0xc0/0x170
__pte_alloc+0x58/0x1f0
copy_page_range+0x4ec/0x950
copy_process.isra.5+0x15a0/0x1870
_do_fork+0xa8/0x4b0
ppc_clone+0x8/0xc
In the meanwhile, all processes that are exiting or forking are blocked.
Samples of stuck tasks:
__switch_to+0x1f8/0x350
__schedule+0x30c/0x990
schedule+0x48/0xc0
rwsem_down_read_failed+0x124/0x1b0
percpu_down_read+0xe0/0xf0
exit_signals+0x40/0x1b0
do_exit+0xcc/0xc30
do_group_exit+0x64/0x100
get_signal+0x55c/0x7b0
do_signal+0x54/0x2b0
do_notify_resume+0xbc/0xd0
ret_from_except_lite+0x64/0x68
Call Trace:
__switch_to+0x1f8/0x350
__schedule+0x30c/0x990
schedule+0x48/0xc0
rwsem_down_read_failed+0x124/0x1b0
percpu_down_read+0xe0/0xf0
exit_signals+0x40/0x1b0
do_exit+0xcc/0xc30
do_group_exit+0x64/0x100
get_signal+0x55c/0x7b0
do_signal+0x54/0x2b0
do_notify_resume+0xbc/0xd0
ret_from_except_lite+0x64/0x68
Call Trace:
__switch_to+0x1f8/0x350
__schedule+0x30c/0x990
schedule+0x48/0xc0
rwsem_down_read_failed+0x124/0x1b0
percpu_down_read+0xe0/0xf0
exit_signals+0x40/0x1b0
do_exit+0xcc/0xc30
do_group_exit+0x64/0x100
get_signal+0x55c/0x7b0
do_signal+0x54/0x2b0
do_notify_resume+0xbc/0xd0
ret_from_except_lite+0x64/0x68
Call Trace:
__switch_to+0x1f8/0x350
__schedule+0x30c/0x990
schedule+0x48/0xc0
rwsem_down_read_failed+0x124/0x1b0
percpu_down_read+0xe0/0xf0
exit_signals+0x40/0x1b0
do_exit+0xcc/0xc30
do_group_exit+0x64/0x100
get_signal+0x55c/0x7b0
do_signal+0x54/0x2b0
do_notify_resume+0xbc/0xd0
ret_from_except_lite+0x64/0x68
Call Trace:
handle_mm_fault+0xde4/0x1980 (unreliable)
__switch_to+0x1f8/0x350
__schedule+0x30c/0x990
schedule+0x48/0xc0
rwsem_down_read_failed+0x124/0x1b0
percpu_down_read+0xe0/0xf0
copy_process.isra.5+0x4bc/0x1870
_do_fork+0xa8/0x4b0
ppc_clone+0x8/0xc
This almost stalls the system. This patch moves threadgroup_change_begin()
from before cgroup_fork() to just before cgroup_can_fork(). Ideally we shouldn't
have to worry about threadgroup changes until the task is actually added to
the threadgroup. This avoids calling into reclaim with cgroup_threadgroup_rwsem
held.
There are other theoretical issues with this semaphore.
systemd can do:
1. cgroup_mutex (cgroup_kn_lock_live)
2. cgroup_threadgroup_rwsem (W) (__cgroup_procs_write)
and other threads can go
1. cgroup_threadgroup_rwsem (R) (copy_process)
2. mem_cgroup_iter (as a part of reclaim) (cgroup_mutex -- rcu lock or cgroup_mutex)
However, I've not examined them in too much detail or looked at lockdep
wait chains for those paths.
I am sure there is a good reason for placing cgroup_threadgroup_rwsem
where it is today, and I might be missing something. I am also surprised
no one else has run into this so far.
Comments?
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Tejun Heo <tj@kernel.org>
Signed-off-by: Balbir Singh <bsingharora@gmail.com>
---
kernel/fork.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/fork.c b/kernel/fork.c
index 5c2c355..0474fa8 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1406,7 +1406,6 @@ static struct task_struct *copy_process(unsigned long clone_flags,
p->real_start_time = ktime_get_boot_ns();
p->io_context = NULL;
p->audit_context = NULL;
- threadgroup_change_begin(current);
cgroup_fork(p);
#ifdef CONFIG_NUMA
p->mempolicy = mpol_dup(p->mempolicy);
@@ -1558,6 +1557,7 @@ static struct task_struct *copy_process(unsigned long clone_flags,
INIT_LIST_HEAD(&p->thread_group);
p->task_works = NULL;
+ threadgroup_change_begin(current);
/*
* Ensure that the cgroup subsystem policies allow the new process to be
* forked. It should be noted the the new process's css_set can be changed
@@ -1658,6 +1658,7 @@ static struct task_struct *copy_process(unsigned long clone_flags,
bad_fork_cancel_cgroup:
cgroup_cancel_fork(p);
bad_fork_free_pid:
+ threadgroup_change_end(current);
if (pid != &init_struct_pid)
free_pid(pid);
bad_fork_cleanup_thread:
@@ -1690,7 +1691,6 @@ bad_fork_cleanup_policy:
mpol_put(p->mempolicy);
bad_fork_cleanup_threadgroup_lock:
#endif
- threadgroup_change_end(current);
delayacct_tsk_free(p);
bad_fork_cleanup_count:
atomic_dec(&p->cred->user->processes);
--
2.5.5
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>
* Re: [RFC][PATCH] cgroup_threadgroup_rwsem - affects scalability and OOM
@ 2016-08-09 6:29 ` Tejun Heo
From: Tejun Heo @ 2016-08-09 6:29 UTC (permalink / raw)
To: Balbir Singh; +Cc: cgroups, Oleg Nesterov, Andrew Morton, linux-mm
Hello, Balbir.
On Tue, Aug 09, 2016 at 02:19:01PM +1000, Balbir Singh wrote:
>
> cgroup_threadgroup_rwsem is acquired in read mode during process exit and fork.
> It is also grabbed in write mode during __cgroups_proc_write
>
> I've recently run into a scenario with lots of memory pressure and OOM
> and I am beginning to see
>
> systemd
>
> __switch_to+0x1f8/0x350
> __schedule+0x30c/0x990
> schedule+0x48/0xc0
> percpu_down_write+0x114/0x170
> __cgroup_procs_write.isra.12+0xb8/0x3c0
> cgroup_file_write+0x74/0x1a0
> kernfs_fop_write+0x188/0x200
> __vfs_write+0x6c/0xe0
> vfs_write+0xc0/0x230
> SyS_write+0x6c/0x110
> system_call+0x38/0xb4
>
> This thread is waiting on the reader of cgroup_threadgroup_rwsem to exit.
> The reader itself is under memory pressure and has gone into reclaim after
> fork. There are times the reader also ends up waiting on oom_lock as well.
>
...
> copy_page_range+0x4ec/0x950
> copy_process.isra.5+0x15a0/0x1870
> _do_fork+0xa8/0x4b0
> ppc_clone+0x8/0xc
Yeah, we definitely don't wanna be holding the rwsem during the actual
fork.
...
> There are other theoretical issues with this semaphore
>
> systemd can do
>
> 1. cgroup_mutex (cgroup_kn_lock_live)
> 2. cgroup_threadgroup_rwsem (W) (__cgroup_procs_write)
>
> and other threads can go
>
> 1. cgroup_threadgroup_rwsem (R) (copy_process)
> 2. mem_cgroup_iter (as a part of reclaim) (cgroup_mutex -- rcu lock or cgroup_mutex)
Hmm? Where does mem_cgroup_iter grab cgroup_mutex? cgroup_mutex nests
outside cgroup_threadgroup_rwsem or most other mutexes for that matter
and isn't exposed from cgroup core.
> However, I've not examined them in too much detail or looked at lockdep
> wait chains for those paths.
>
> I am sure there is a good reason for placing cgroup_threadgroup_rwsem
> where it is today and I might be missing something. I am also surprised
I could be missing something too but the positioning is largely
historic.
> no-one else has run into it so far.
Maybe it only matters this much on a system which is already heavily
thrashing, but yeah, we definitely want to tighten down the reader
sections so that they don't get in the way of making forward progress.
> Comments?
The change looks good to me on the first glance but I'll think more
about it tomorrow.
Thanks!
--
tejun
* Re: [RFC][PATCH] cgroup_threadgroup_rwsem - affects scalability and OOM
2016-08-09 6:29 ` Tejun Heo
@ 2016-08-09 7:02 ` Balbir Singh
2016-08-09 14:26 ` Tejun Heo
From: Balbir Singh @ 2016-08-09 7:02 UTC (permalink / raw)
To: Tejun Heo; +Cc: cgroups, Oleg Nesterov, Andrew Morton, linux-mm
On 09/08/16 16:29, Tejun Heo wrote:
> Hello, Balbir.
>
> On Tue, Aug 09, 2016 at 02:19:01PM +1000, Balbir Singh wrote:
>>
>> cgroup_threadgroup_rwsem is acquired in read mode during process exit and fork.
>> It is also grabbed in write mode during __cgroups_proc_write
>>
>> I've recently run into a scenario with lots of memory pressure and OOM
>> and I am beginning to see
>>
>> systemd
>>
>> __switch_to+0x1f8/0x350
>> __schedule+0x30c/0x990
>> schedule+0x48/0xc0
>> percpu_down_write+0x114/0x170
>> __cgroup_procs_write.isra.12+0xb8/0x3c0
>> cgroup_file_write+0x74/0x1a0
>> kernfs_fop_write+0x188/0x200
>> __vfs_write+0x6c/0xe0
>> vfs_write+0xc0/0x230
>> SyS_write+0x6c/0x110
>> system_call+0x38/0xb4
>>
>> This thread is waiting on the reader of cgroup_threadgroup_rwsem to exit.
>> The reader itself is under memory pressure and has gone into reclaim after
>> fork. There are times the reader also ends up waiting on oom_lock as well.
>>
> ...
>> copy_page_range+0x4ec/0x950
>> copy_process.isra.5+0x15a0/0x1870
>> _do_fork+0xa8/0x4b0
>> ppc_clone+0x8/0xc
>
> Yeah, we definitely don't wanna be holding the rwsem during the actual
> fork.
>
> ...
>> There are other theoretical issues with this semaphore
>>
>> systemd can do
>>
>> 1. cgroup_mutex (cgroup_kn_lock_live)
>> 2. cgroup_threadgroup_rwsem (W) (__cgroup_procs_write)
>>
>> and other threads can go
>>
>> 1. cgroup_threadgroup_rwsem (R) (copy_process)
>> 2. mem_cgroup_iter (as a part of reclaim) (cgroup_mutex -- rcu lock or cgroup_mutex)
>
> Hmm? Where does mem_cgroup_iter grab cgroup_mutex? cgroup_mutex nests
> outside cgroup_threadgroup_rwsem or most other mutexes for that matter
> and isn't exposed from cgroup core.
>
I based my theory on the code:
mem_cgroup_iter -> css_next_descendant_pre, which asserts
cgroup_assert_mutex_or_rcu_locked().
Although, you are right, we hold the RCU lock while calling the css_* routines.
>> However, I've not examined them in too much detail or looked at lockdep
>> wait chains for those paths.
>>
>> I am sure there is a good reason for placing cgroup_threadgroup_rwsem
>> where it is today and I might be missing something. I am also surprised
>
> I could be missing something too but the positioning is largely
> historic.
>
>> no-one else has run into it so far.
>
> Maybe it might matter that much on a system which is already heavily
> thrasing, but yeah, we definitely want to tighten down the reader
> sections so that it doesn't get in the way of making forward progress.
>
It seems to cause my system to thrash quite badly.
>> Comments?
>
> The change looks good to me on the first glance but I'll think more
> about it tomorrow.
>
> Thanks!
>
Thanks for the review.
Balbir Singh.
* Re: [RFC][PATCH] cgroup_threadgroup_rwsem - affects scalability and OOM
@ 2016-08-09 9:00 ` Zefan Li
From: Zefan Li @ 2016-08-09 9:00 UTC (permalink / raw)
To: Balbir Singh, cgroups; +Cc: Oleg Nesterov, Andrew Morton, Tejun Heo, linux-mm
> This almost stalls the system, this patch moves the threadgroup_change_begin
> from before cgroup_fork() to just before cgroup_canfork(). Ideally we shouldn't
> have to worry about threadgroup changes till the task is actually added to
> the threadgroup. This avoids having to call reclaim with cgroup_threadgroup_rwsem
> held.
>
> There are other theoretical issues with this semaphore
>
> systemd can do
>
> 1. cgroup_mutex (cgroup_kn_lock_live)
> 2. cgroup_threadgroup_rwsem (W) (__cgroup_procs_write)
>
> and other threads can go
>
> 1. cgroup_threadgroup_rwsem (R) (copy_process)
> 2. mem_cgroup_iter (as a part of reclaim) (cgroup_mutex -- rcu lock or cgroup_mutex)
>
> However, I've not examined them in too much detail or looked at lockdep
> wait chains for those paths.
>
> I am sure there is a good reason for placing cgroup_threadgroup_rwsem
> where it is today and I might be missing something. I am also surprised
> no-one else has run into it so far.
>
> Comments?
>
We used to use cgroup_threadgroup_rwsem for synchronization between threads
in the same threadgroup, but now it has evolved to ensure atomic operations
across multiple processes.
For example, I'm trying to fix a race. See https://lkml.org/lkml/2016/8/8/900
And the fix kind of relies on the fact that cgroup_post_fork() is placed
inside the read section of cgroup_threadgroup_rwsem, so that cpuset_fork()
won't race with cgroup migration.
> Cc: Oleg Nesterov <oleg@redhat.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Tejun Heo <tj@kernel.org>
>
> Signed-off-by: Balbir Singh <bsingharora@gmail.com>
> ---
> kernel/fork.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/fork.c b/kernel/fork.c
> index 5c2c355..0474fa8 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -1406,7 +1406,6 @@ static struct task_struct *copy_process(unsigned long clone_flags,
> p->real_start_time = ktime_get_boot_ns();
> p->io_context = NULL;
> p->audit_context = NULL;
> - threadgroup_change_begin(current);
> cgroup_fork(p);
> #ifdef CONFIG_NUMA
> p->mempolicy = mpol_dup(p->mempolicy);
> @@ -1558,6 +1557,7 @@ static struct task_struct *copy_process(unsigned long clone_flags,
> INIT_LIST_HEAD(&p->thread_group);
> p->task_works = NULL;
>
> + threadgroup_change_begin(current);
> /*
> * Ensure that the cgroup subsystem policies allow the new process to be
> * forked. It should be noted the the new process's css_set can be changed
> @@ -1658,6 +1658,7 @@ static struct task_struct *copy_process(unsigned long clone_flags,
> bad_fork_cancel_cgroup:
> cgroup_cancel_fork(p);
> bad_fork_free_pid:
> + threadgroup_change_end(current);
> if (pid != &init_struct_pid)
> free_pid(pid);
> bad_fork_cleanup_thread:
> @@ -1690,7 +1691,6 @@ bad_fork_cleanup_policy:
> mpol_put(p->mempolicy);
> bad_fork_cleanup_threadgroup_lock:
> #endif
> - threadgroup_change_end(current);
> delayacct_tsk_free(p);
> bad_fork_cleanup_count:
> atomic_dec(&p->cred->user->processes);
>
* Re: [RFC][PATCH] cgroup_threadgroup_rwsem - affects scalability and OOM
2016-08-09 9:00 ` Zefan Li
@ 2016-08-09 13:57 ` Balbir Singh
2016-08-10 1:31 ` Zefan Li
From: Balbir Singh @ 2016-08-09 13:57 UTC (permalink / raw)
To: Zefan Li
Cc: Balbir Singh, cgroups, Oleg Nesterov, Andrew Morton, Tejun Heo, linux-mm
On Tue, Aug 09, 2016 at 05:00:59PM +0800, Zefan Li wrote:
> > This almost stalls the system, this patch moves the threadgroup_change_begin
> > from before cgroup_fork() to just before cgroup_canfork(). Ideally we shouldn't
> > have to worry about threadgroup changes till the task is actually added to
> > the threadgroup. This avoids having to call reclaim with cgroup_threadgroup_rwsem
> > held.
> >
> > There are other theoretical issues with this semaphore
> >
> > systemd can do
> >
> > 1. cgroup_mutex (cgroup_kn_lock_live)
> > 2. cgroup_threadgroup_rwsem (W) (__cgroup_procs_write)
> >
> > and other threads can go
> >
> > 1. cgroup_threadgroup_rwsem (R) (copy_process)
> > 2. mem_cgroup_iter (as a part of reclaim) (cgroup_mutex -- rcu lock or cgroup_mutex)
> >
> > However, I've not examined them in too much detail or looked at lockdep
> > wait chains for those paths.
> >
> > I am sure there is a good reason for placing cgroup_threadgroup_rwsem
> > where it is today and I might be missing something. I am also surprised
> > no-one else has run into it so far.
> >
> > Comments?
> >
>
> We used to use cgroup_threadgroup_rwsem for syncronization between threads
> in the same threadgroup, but now it has evolved to ensure atomic operations
> across multi processes.
>
Yes, and it seems incorrect.
> For example, I'm trying to fix a race. See https://lkml.org/lkml/2016/8/8/900
>
> And the fix kind of relies on the fact that cgroup_post_fork() is placed
> inside the read section of cgroup_threadgroup_rwsem, so that cpuset_fork()
> won't race with cgroup migration.
>
My patch retains that behaviour: before ss->fork() is called we hold
cgroup_threadgroup_rwsem; in fact, it is held prior to ss->can_fork().
* Re: [RFC][PATCH] cgroup_threadgroup_rwsem - affects scalability and OOM
@ 2016-08-09 14:26 ` Tejun Heo
From: Tejun Heo @ 2016-08-09 14:26 UTC (permalink / raw)
To: Balbir Singh; +Cc: cgroups, Oleg Nesterov, Andrew Morton, linux-mm
Hello, Balbir.
On Tue, Aug 09, 2016 at 05:02:47PM +1000, Balbir Singh wrote:
> > Hmm? Where does mem_cgroup_iter grab cgroup_mutex? cgroup_mutex nests
> > outside cgroup_threadgroup_rwsem or most other mutexes for that matter
> > and isn't exposed from cgroup core.
> >
>
> I based my theory on the code
>
> mem_cgroup_iter -> css_next_descendant_pre which asserts
>
> cgroup_assert_mutex_or_rcu_locked(),
>
> although you are right, we hold RCU lock while calling css_* routines.
That's "or". The iterator can be called either with RCU lock or
cgroup_mutex. cgroup core may use it under cgroup_mutex. Everyone
else uses it with rcu.
Thanks.
--
tejun
* Re: [RFC][PATCH] cgroup_threadgroup_rwsem - affects scalability and OOM
2016-08-09 4:19 ` Balbir Singh
` (2 preceding siblings ...)
@ 2016-08-09 18:09 ` Oleg Nesterov
-1 siblings, 0 replies; 19+ messages in thread
From: Oleg Nesterov @ 2016-08-09 18:09 UTC (permalink / raw)
To: Balbir Singh; +Cc: cgroups, Andrew Morton, Tejun Heo, linux-mm
On 08/09, Balbir Singh wrote:
>
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -1406,7 +1406,6 @@ static struct task_struct *copy_process(unsigned long clone_flags,
> p->real_start_time = ktime_get_boot_ns();
> p->io_context = NULL;
> p->audit_context = NULL;
> - threadgroup_change_begin(current);
> cgroup_fork(p);
> #ifdef CONFIG_NUMA
> p->mempolicy = mpol_dup(p->mempolicy);
> @@ -1558,6 +1557,7 @@ static struct task_struct *copy_process(unsigned long clone_flags,
> INIT_LIST_HEAD(&p->thread_group);
> p->task_works = NULL;
>
> + threadgroup_change_begin(current);
> /*
> * Ensure that the cgroup subsystem policies allow the new process to be
> * forked. It should be noted the the new process's css_set can be changed
> @@ -1658,6 +1658,7 @@ static struct task_struct *copy_process(unsigned long clone_flags,
> bad_fork_cancel_cgroup:
> cgroup_cancel_fork(p);
> bad_fork_free_pid:
> + threadgroup_change_end(current);
> if (pid != &init_struct_pid)
> free_pid(pid);
> bad_fork_cleanup_thread:
> @@ -1690,7 +1691,6 @@ bad_fork_cleanup_policy:
> mpol_put(p->mempolicy);
> bad_fork_cleanup_threadgroup_lock:
> #endif
> - threadgroup_change_end(current);
> delayacct_tsk_free(p);
> bad_fork_cleanup_count:
> atomic_dec(&p->cred->user->processes);
I can't really review this change... but it looks good to me.
Oleg.
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [RFC][PATCH] cgroup_threadgroup_rwsem - affects scalability and OOM
@ 2016-08-10 1:21 ` Balbir Singh
0 siblings, 0 replies; 19+ messages in thread
From: Balbir Singh @ 2016-08-10 1:21 UTC (permalink / raw)
To: Tejun Heo; +Cc: cgroups, Oleg Nesterov, Andrew Morton, linux-mm
On Wed, Aug 10, 2016 at 12:26 AM, Tejun Heo <tj@kernel.org> wrote:
> Hello, Balbir.
>
> On Tue, Aug 09, 2016 at 05:02:47PM +1000, Balbir Singh wrote:
>> > Hmm? Where does mem_cgroup_iter grab cgroup_mutex? cgroup_mutex nests
>> > outside cgroup_threadgroup_rwsem or most other mutexes for that matter
>> > and isn't exposed from cgroup core.
>> >
>>
>> I based my theory on the code
>>
>> mem_cgroup_iter -> css_next_descendant_pre which asserts
>>
>> cgroup_assert_mutex_or_rcu_locked(),
>>
>> although you are right, we hold RCU lock while calling css_* routines.
>
> That's "or". The iterator can be called either with RCU lock or
> cgroup_mutex. cgroup core may use it under cgroup_mutex. Everyone
> else uses it with rcu.
>
> Thanks.
>
Hi Tejun,
Thanks, agreed! Could you please consider queuing the fix after review?
Balbir Singh.
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [RFC][PATCH] cgroup_threadgroup_rwsem - affects scalability and OOM
2016-08-09 13:57 ` Balbir Singh
@ 2016-08-10 1:31 ` Zefan Li
0 siblings, 0 replies; 19+ messages in thread
From: Zefan Li @ 2016-08-10 1:31 UTC (permalink / raw)
To: bsingharora; +Cc: cgroups, Oleg Nesterov, Andrew Morton, Tejun Heo, linux-mm
>> For example, I'm trying to fix a race. See https://lkml.org/lkml/2016/8/8/900
>>
>> And the fix kind of relies on the fact that cgroup_post_fork() is placed
>> inside the read section of cgroup_threadgroup_rwsem, so that cpuset_fork()
>> won't race with cgroup migration.
>>
>
> My patch retains that behaviour, before ss->fork() is called we hold
> the cgroup_threadgroup_rwsem, in fact it is held prior to ss->can_fork()
>
I read the patch again and now I see that only threadgroup_change_begin() is
moved downwards, while threadgroup_change_end() remains intact. So I have no
problem with it.
Acked-by: Zefan Li <lizefan@huawei.com>
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [RFC][PATCH] cgroup_threadgroup_rwsem - affects scalability and OOM
@ 2016-08-10 19:43 ` Tejun Heo
0 siblings, 0 replies; 19+ messages in thread
From: Tejun Heo @ 2016-08-10 19:43 UTC (permalink / raw)
To: Balbir Singh; +Cc: cgroups, Oleg Nesterov, Andrew Morton, linux-mm
Hello,
Edited subject and description and applied the patch to
cgroup/for-4.8-fixes w/ stable cc'd.
Thanks.
------ 8< ------
From b570e91aaa563673dc6cca58c14388d68c767353 Mon Sep 17 00:00:00 2001
From: Balbir Singh <bsingharora@gmail.com>
Date: Tue, 9 Aug 2016 14:19:01 +1000
Subject: [PATCH] cgroup: reduce read locked section of
cgroup_threadgroup_rwsem during fork
cgroup_threadgroup_rwsem is acquired in read mode during process exit
and fork. It is also grabbed in write mode during
__cgroups_proc_write(). I've recently run into a scenario with lots
of memory pressure and OOM and I am beginning to see
systemd
__switch_to+0x1f8/0x350
__schedule+0x30c/0x990
schedule+0x48/0xc0
percpu_down_write+0x114/0x170
__cgroup_procs_write.isra.12+0xb8/0x3c0
cgroup_file_write+0x74/0x1a0
kernfs_fop_write+0x188/0x200
__vfs_write+0x6c/0xe0
vfs_write+0xc0/0x230
SyS_write+0x6c/0x110
system_call+0x38/0xb4
This thread is waiting on the reader of cgroup_threadgroup_rwsem to
exit. The reader itself is under memory pressure and has gone into
reclaim after fork. There are times the reader also ends up waiting on
oom_lock as well.
__switch_to+0x1f8/0x350
__schedule+0x30c/0x990
schedule+0x48/0xc0
jbd2_log_wait_commit+0xd4/0x180
ext4_evict_inode+0x88/0x5c0
evict+0xf8/0x2a0
dispose_list+0x50/0x80
prune_icache_sb+0x6c/0x90
super_cache_scan+0x190/0x210
shrink_slab.part.15+0x22c/0x4c0
shrink_zone+0x288/0x3c0
do_try_to_free_pages+0x1dc/0x590
try_to_free_pages+0xdc/0x260
__alloc_pages_nodemask+0x72c/0xc90
alloc_pages_current+0xb4/0x1a0
page_table_alloc+0xc0/0x170
__pte_alloc+0x58/0x1f0
copy_page_range+0x4ec/0x950
copy_process.isra.5+0x15a0/0x1870
_do_fork+0xa8/0x4b0
ppc_clone+0x8/0xc
In the meantime, all processes that are exiting/forking are blocked,
almost stalling the system.
This patch moves the threadgroup_change_begin from before
cgroup_fork() to just before cgroup_canfork(). There is no nee to
worry about threadgroup changes till the task is actually added to the
threadgroup. This avoids having to call reclaim with
cgroup_threadgroup_rwsem held.
tj: Subject and description edits.
Signed-off-by: Balbir Singh <bsingharora@gmail.com>
Acked-by: Zefan Li <lizefan@huawei.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: stable@vger.kernel.org # v4.2+
Signed-off-by: Tejun Heo <tj@kernel.org>
---
kernel/fork.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/fork.c b/kernel/fork.c
index 52e725d..aaf7823 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1404,7 +1404,6 @@ static struct task_struct *copy_process(unsigned long clone_flags,
p->real_start_time = ktime_get_boot_ns();
p->io_context = NULL;
p->audit_context = NULL;
- threadgroup_change_begin(current);
cgroup_fork(p);
#ifdef CONFIG_NUMA
p->mempolicy = mpol_dup(p->mempolicy);
@@ -1556,6 +1555,7 @@ static struct task_struct *copy_process(unsigned long clone_flags,
INIT_LIST_HEAD(&p->thread_group);
p->task_works = NULL;
+ threadgroup_change_begin(current);
/*
* Ensure that the cgroup subsystem policies allow the new process to be
* forked. It should be noted the the new process's css_set can be changed
@@ -1656,6 +1656,7 @@ static struct task_struct *copy_process(unsigned long clone_flags,
bad_fork_cancel_cgroup:
cgroup_cancel_fork(p);
bad_fork_free_pid:
+ threadgroup_change_end(current);
if (pid != &init_struct_pid)
free_pid(pid);
bad_fork_cleanup_thread:
@@ -1688,7 +1689,6 @@ static struct task_struct *copy_process(unsigned long clone_flags,
mpol_put(p->mempolicy);
bad_fork_cleanup_threadgroup_lock:
#endif
- threadgroup_change_end(current);
delayacct_tsk_free(p);
bad_fork_cleanup_count:
atomic_dec(&p->cred->user->processes);
--
2.7.4
^ permalink raw reply related [flat|nested] 19+ messages in thread
* Re: [RFC][PATCH] cgroup_threadgroup_rwsem - affects scalability and OOM
@ 2016-08-11 23:47 ` Balbir Singh
0 siblings, 0 replies; 19+ messages in thread
From: Balbir Singh @ 2016-08-11 23:47 UTC (permalink / raw)
To: Tejun Heo; +Cc: Balbir Singh, cgroups, Oleg Nesterov, Andrew Morton, linux-mm
On Wed, Aug 10, 2016 at 03:43:06PM -0400, Tejun Heo wrote:
> Hello,
>
> Edited subject and description and applied the patch to
> cgroup/for-4.8-fixes w/ stable cc'd.
>
Thanks. Found a typo below; a small nit.
> Thanks.
> ------ 8< ------
<snip>
> This patch moves the threadgroup_change_begin from before
> cgroup_fork() to just before cgroup_canfork(). There is no nee to
^ need
> worry about threadgroup changes till the task is actually added to the
> threadgroup. This avoids having to call reclaim with
> cgroup_threadgroup_rwsem held.
>
^ permalink raw reply [flat|nested] 19+ messages in thread
end of thread, other threads:[~2016-08-11 23:47 UTC | newest]
Thread overview: 19+ messages
-- links below jump to the message on this page --
2016-08-09 4:19 [RFC][PATCH] cgroup_threadgroup_rwsem - affects scalability and OOM Balbir Singh
2016-08-09 4:19 ` Balbir Singh
2016-08-09 6:29 ` Tejun Heo
2016-08-09 6:29 ` Tejun Heo
2016-08-09 7:02 ` Balbir Singh
2016-08-09 14:26 ` Tejun Heo
2016-08-09 14:26 ` Tejun Heo
2016-08-10 1:21 ` Balbir Singh
2016-08-10 1:21 ` Balbir Singh
2016-08-09 9:00 ` Zefan Li
2016-08-09 9:00 ` Zefan Li
2016-08-09 13:57 ` Balbir Singh
2016-08-10 1:31 ` Zefan Li
2016-08-10 1:31 ` Zefan Li
2016-08-09 18:09 ` Oleg Nesterov
2016-08-10 19:43 ` Tejun Heo
2016-08-10 19:43 ` Tejun Heo
2016-08-11 23:47 ` Balbir Singh
2016-08-11 23:47 ` Balbir Singh