Date: Wed, 18 Apr 2018 10:05:55 +0200
From: Michal Hocko
To: Yang Shi
Cc: adobriyan@gmail.com, willy@infradead.org, mguzik@redhat.com,
    gorcunov@gmail.com, akpm@linux-foundation.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Subject: Re: [v4 PATCH] mm: introduce arg_lock to protect arg_start|end and env_start|end in mm_struct
Message-ID: <20180418080555.GR17484@dhcp22.suse.cz>
In-Reply-To: <1523730291-109696-1-git-send-email-yang.shi@linux.alibaba.com>
References: <1523730291-109696-1-git-send-email-yang.shi@linux.alibaba.com>

On Sun 15-04-18 02:24:51, Yang Shi wrote:
> mmap_sem is on the hot path of the kernel, and it is heavily contended, but
> it is also abused. It is used to protect arg_start|end and env_start|end
> when reading /proc/$PID/cmdline and /proc/$PID/environ, but that doesn't
> make sense: those proc files just expect to read the 4 values atomically,
> the values are not related to the VM, and they could be set to arbitrary
> values by C/R.
>
> And the mmap_sem contention may cause unexpected issues like the one below:
>
> INFO: task ps:14018 blocked for more than 120 seconds.
>       Tainted: G            E   4.9.79-009.ali3000.alios7.x86_64 #1
> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> ps              D    0 14018      1 0x00000004
>  ffff885582f84000 ffff885e8682f000 ffff880972943000 ffff885ebf499bc0
>  ffff8828ee120000 ffffc900349bfca8 ffffffff817154d0 0000000000000040
>  00ffffff812f872a ffff885ebf499bc0 024000d000948300 ffff880972943000
> Call Trace:
>  [] ? __schedule+0x250/0x730
>  [] schedule+0x36/0x80
>  [] rwsem_down_read_failed+0xf0/0x150
>  [] call_rwsem_down_read_failed+0x18/0x30
>  [] down_read+0x20/0x40
>  [] proc_pid_cmdline_read+0xd9/0x4e0
>  [] ? do_filp_open+0xa5/0x100
>  [] __vfs_read+0x37/0x150
>  [] ? security_file_permission+0x9b/0xc0
>  [] vfs_read+0x96/0x130
>  [] SyS_read+0x55/0xc0
>  [] entry_SYSCALL_64_fastpath+0x1a/0xc5
>
> Both Alexey Dobriyan and Michal Hocko suggested using a dedicated lock for
> these fields to mitigate the abuse of mmap_sem.
>
> So introduce a new spinlock in mm_struct to protect the concurrent access
> to arg_start|end, env_start|end and the other fields, and downgrade the
> mmap_sem in prctl from a write lock to a read lock to protect against the
> race with sys_brk which might break check_data_rlimit(); this also makes
> prctl more friendly to other VM operations.
>
> This patch just eliminates the abuse of mmap_sem, but it can't resolve the
> above hung task warning completely, since the later access_remote_vm() call
> still needs to acquire mmap_sem. The mmap_sem scalability issue will be
> solved in the future.
>
> Signed-off-by: Yang Shi
> Cc: Alexey Dobriyan
> Cc: Michal Hocko
> Cc: Matthew Wilcox
> Cc: Mateusz Guzik
> Cc: Cyrill Gorcunov

Yes, looks good to me. As mentioned in other emails, prctl_set_mm_map really
deserves a comment explaining why we are doing the down_read. What about
something like the following?
"
arg_lock protects concurrent updates but we still need mmap_sem for read to
exclude races with do_brk.
"
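To make the placement concrete, here is a rough sketch of how that comment
and the lock nesting would look in prctl_set_mm_map() (illustration only,
showing just the arg/env fields; the real context is in the kernel/sys.c
hunks of the quoted patch below):

	down_read(&mm->mmap_sem);

	/*
	 * arg_lock protects concurrent updates but we still need mmap_sem
	 * for read to exclude races with do_brk().
	 */
	spin_lock(&mm->arg_lock);
	mm->arg_start = prctl_map.arg_start;
	mm->arg_end   = prctl_map.arg_end;
	mm->env_start = prctl_map.env_start;
	mm->env_end   = prctl_map.env_end;
	spin_unlock(&mm->arg_lock);

	up_read(&mm->mmap_sem);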
" Acked-by: Michal Hocko > --- > v3 --> v4: > * Protected values update with down_read + spin_lock to prevent from race > condition between prctl and sys_brk and made prctl more friendly to VM > operations per Michal's suggestion > > v2 --> v3: > * Restored down_write in prctl syscall > * Elaborate the limitation of this patch suggested by Michal > * Protect those fields by the new lock except brk and start_brk per Michal's > suggestion > * Based off Cyrill's non PR_SET_MM_MAP oprations deprecation patch > (https://lkml.org/lkml/2018/4/5/541) > > v1 --> v2: > * Use spinlock instead of rwlock per Mattew's suggestion > * Replace down_write to down_read in prctl_set_mm (see commit log for details) > fs/proc/base.c | 8 ++++---- > include/linux/mm_types.h | 2 ++ > kernel/fork.c | 1 + > kernel/sys.c | 6 ++++-- > mm/init-mm.c | 1 + > 5 files changed, 12 insertions(+), 6 deletions(-) > > diff --git a/fs/proc/base.c b/fs/proc/base.c > index eafa39a..3551757 100644 > --- a/fs/proc/base.c > +++ b/fs/proc/base.c > @@ -239,12 +239,12 @@ static ssize_t proc_pid_cmdline_read(struct file *file, char __user *buf, > goto out_mmput; > } > > - down_read(&mm->mmap_sem); > + spin_lock(&mm->arg_lock); > arg_start = mm->arg_start; > arg_end = mm->arg_end; > env_start = mm->env_start; > env_end = mm->env_end; > - up_read(&mm->mmap_sem); > + spin_unlock(&mm->arg_lock); > > BUG_ON(arg_start > arg_end); > BUG_ON(env_start > env_end); > @@ -929,10 +929,10 @@ static ssize_t environ_read(struct file *file, char __user *buf, > if (!mmget_not_zero(mm)) > goto free; > > - down_read(&mm->mmap_sem); > + spin_lock(&mm->arg_lock); > env_start = mm->env_start; > env_end = mm->env_end; > - up_read(&mm->mmap_sem); > + spin_unlock(&mm->arg_lock); > > while (count > 0) { > size_t this_len, max_len; > diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h > index 2161234..49dd59e 100644 > --- a/include/linux/mm_types.h > +++ b/include/linux/mm_types.h > @@ -413,6 +413,8 @@ struct mm_struct { > unsigned long exec_vm; /* VM_EXEC & ~VM_WRITE & ~VM_STACK */ > unsigned long stack_vm; /* VM_STACK */ > unsigned long def_flags; > + > + spinlock_t arg_lock; /* protect the below fields */ > unsigned long start_code, end_code, start_data, end_data; > unsigned long start_brk, brk, start_stack; > unsigned long arg_start, arg_end, env_start, env_end; > diff --git a/kernel/fork.c b/kernel/fork.c > index 242c8c9..295f903 100644 > --- a/kernel/fork.c > +++ b/kernel/fork.c > @@ -900,6 +900,7 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p, > mm->pinned_vm = 0; > memset(&mm->rss_stat, 0, sizeof(mm->rss_stat)); > spin_lock_init(&mm->page_table_lock); > + spin_lock_init(&mm->arg_lock); > mm_init_cpumask(mm); > mm_init_aio(mm); > mm_init_owner(mm, p); > diff --git a/kernel/sys.c b/kernel/sys.c > index f16725e..0cc5a1c 100644 > --- a/kernel/sys.c > +++ b/kernel/sys.c > @@ -2011,7 +2011,7 @@ static int prctl_set_mm_map(int opt, const void __user *addr, unsigned long data > return error; > } > > - down_write(&mm->mmap_sem); > + down_read(&mm->mmap_sem); > > /* > * We don't validate if these members are pointing to > @@ -2025,6 +2025,7 @@ static int prctl_set_mm_map(int opt, const void __user *addr, unsigned long data > * to any problem in kernel itself > */ > > + spin_lock(&mm->arg_lock); > mm->start_code = prctl_map.start_code; > mm->end_code = prctl_map.end_code; > mm->start_data = prctl_map.start_data; > @@ -2036,6 +2037,7 @@ static int prctl_set_mm_map(int opt, const void __user *addr, unsigned long data > 
>  	mm->arg_end = prctl_map.arg_end;
>  	mm->env_start = prctl_map.env_start;
>  	mm->env_end = prctl_map.env_end;
> +	spin_unlock(&mm->arg_lock);
>
>  	/*
>  	 * Note this update of @saved_auxv is lockless thus
> @@ -2048,7 +2050,7 @@ static int prctl_set_mm_map(int opt, const void __user *addr, unsigned long data
>  	if (prctl_map.auxv_size)
>  		memcpy(mm->saved_auxv, user_auxv, sizeof(user_auxv));
>
> -	up_write(&mm->mmap_sem);
> +	up_read(&mm->mmap_sem);
>  	return 0;
>  }
>  #endif /* CONFIG_CHECKPOINT_RESTORE */
> diff --git a/mm/init-mm.c b/mm/init-mm.c
> index f94d5d1..f0179c9 100644
> --- a/mm/init-mm.c
> +++ b/mm/init-mm.c
> @@ -22,6 +22,7 @@ struct mm_struct init_mm = {
>  	.mm_count	= ATOMIC_INIT(1),
>  	.mmap_sem	= __RWSEM_INITIALIZER(init_mm.mmap_sem),
>  	.page_table_lock = __SPIN_LOCK_UNLOCKED(init_mm.page_table_lock),
> +	.arg_lock	= __SPIN_LOCK_UNLOCKED(init_mm.arg_lock),
>  	.mmlist		= LIST_HEAD_INIT(init_mm.mmlist),
>  	.user_ns	= &init_user_ns,
>  	INIT_MM_CONTEXT(init_mm)
> --
> 1.8.3.1

--
Michal Hocko
SUSE Labs
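(For context: the path that was blocking on mmap_sem in the hung task report
above is an ordinary read of /proc/$PID/cmdline, i.e. proc_pid_cmdline_read().
A minimal standalone userspace reader that exercises it, purely as an
illustration and not part of the patch, could look like this:)

	/* Illustrative userspace reader of /proc/<pid>/cmdline. */
	#include <stdio.h>

	int main(int argc, char **argv)
	{
		char path[64], buf[4096];
		size_t n, i;
		FILE *f;

		/* Read our own cmdline by default; pass a pid to inspect another task. */
		snprintf(path, sizeof(path), "/proc/%s/cmdline", argc > 1 ? argv[1] : "self");
		f = fopen(path, "r");
		if (!f) {
			perror("fopen");
			return 1;
		}
		n = fread(buf, 1, sizeof(buf), f);	/* ends up in proc_pid_cmdline_read() */
		fclose(f);

		/* Arguments are NUL-separated; print them space-separated. */
		for (i = 0; i < n; i++)
			putchar(buf[i] ? buf[i] : ' ');
		putchar('\n');
		return 0;
	}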