From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1754152AbcEZOmU (ORCPT );
	Thu, 26 May 2016 10:42:20 -0400
Received: from www262.sakura.ne.jp ([202.181.97.72]:22915 "EHLO
	www262.sakura.ne.jp" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1754095AbcEZOmT (ORCPT );
	Thu, 26 May 2016 10:42:19 -0400
To: mhocko@kernel.org, linux-mm@kvack.org
Cc: rientjes@google.com, oleg@redhat.com, vdavydov@parallels.com,
	akpm@linux-foundation.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 6/6] mm, oom: fortify task_will_free_mem
From: Tetsuo Handa
References: <1464266415-15558-1-git-send-email-mhocko@kernel.org>
	<1464266415-15558-7-git-send-email-mhocko@kernel.org>
	<201605262311.FFF64092.FFQVtOLOOMJSFH@I-love.SAKURA.ne.jp>
	<20160526142317.GC23675@dhcp22.suse.cz>
In-Reply-To: <20160526142317.GC23675@dhcp22.suse.cz>
Message-Id: <201605262341.GFE48463.OOtLFFMQSVFHOJ@I-love.SAKURA.ne.jp>
X-Mailer: Winbiff [Version 2.51 PL2]
X-Accept-Language: ja,en,zh
Date: Thu, 26 May 2016 23:41:54 +0900
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Michal Hocko wrote:
> +/*
> + * Checks whether the given task is dying or exiting and likely to
> + * release its address space. This means that all threads and processes
> + * sharing the same mm have to be killed or exiting.
> + */
> +static inline bool task_will_free_mem(struct task_struct *task)
> +{
> +	struct mm_struct *mm = NULL;
> +	struct task_struct *p;
> +	bool ret = false;

If atomic_read(&p->mm->mm_users) <= get_nr_threads(p), this function
returns "false". Judging from the previous version, I think this should be
"bool ret = true;".

> +
> +	/*
> +	 * If the process has passed exit_mm we have to skip it because
> +	 * we have lost a link to other tasks sharing this mm, we do not
> +	 * have anything to reap and the task might then get stuck waiting
> +	 * for parent as zombie and we do not want it to hold TIF_MEMDIE
> +	 */
> +	p = find_lock_task_mm(task);
> +	if (!p)
> +		return false;
> +
> +	if (!__task_will_free_mem(p)) {
> +		task_unlock(p);
> +		return false;
> +	}
> +
> +	/*
> +	 * Check whether there are other processes sharing the mm - they all have
> +	 * to be killed or exiting.
> +	 */
> +	if (atomic_read(&p->mm->mm_users) > get_nr_threads(p)) {
> +		mm = p->mm;
> +		/* pin the mm to not get freed and reused */
> +		atomic_inc(&mm->mm_count);
> +	}
> +	task_unlock(p);
> +
> +	if (mm) {
> +		rcu_read_lock();
> +		for_each_process(p) {
> +			bool vfork;
> +
> +			/*
> +			 * skip over vforked tasks because they are mostly
> +			 * independent and will drop the mm soon
> +			 */
> +			task_lock(p);
> +			vfork = p->vfork_done;
> +			task_unlock(p);
> +			if (vfork)
> +				continue;
> +
> +			ret = __task_will_free_mem(p);
> +			if (!ret)
> +				break;
> +		}
> +		rcu_read_unlock();
> +		mmdrop(mm);
> +	}
> +
> +	return ret;
> +}
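
To make the case I mean easier to see, here is a minimal userspace sketch of
the control flow, not kernel code; task_will_free_mem_model(), other_mm_users
and init_ret are made-up names for illustration only. When nobody else is
using the mm, the for_each_process() loop is never entered and whatever "ret"
was initialized to is what the caller gets back.

/*
 * Minimal userspace model of the control flow quoted above; not kernel code.
 * task_will_free_mem_model(), other_mm_users and init_ret are hypothetical
 * names used only for this illustration.
 */
#include <stdbool.h>
#include <stdio.h>

/*
 * other_mm_users models "atomic_read(&p->mm->mm_users) > get_nr_threads(p)",
 * init_ret models the initial value of "ret" in the posted patch.
 * The task is assumed to have already passed __task_will_free_mem().
 */
static bool task_will_free_mem_model(bool other_mm_users, bool init_ret)
{
	bool ret = init_ret;

	if (other_mm_users) {
		/*
		 * Stands in for the for_each_process() loop, assuming every
		 * mm sharer also passes __task_will_free_mem().
		 */
		ret = true;
	}
	/* With no other mm users the loop is skipped and ret keeps its initial value. */
	return ret;
}

int main(void)
{
	/* patch as posted: a lone exiting task is reported as not freeing its memory */
	printf("init false, no sharers -> %d\n", task_will_free_mem_model(false, false));
	/* with "bool ret = true;": the lone exiting task is reported correctly */
	printf("init true,  no sharers -> %d\n", task_will_free_mem_model(false, true));
	return 0;
}

The first printf prints 0 even though the task already passed
__task_will_free_mem(), which is why I think the initial value should be true
(or ret should be set to true before the if (mm) block).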