* Re: + memcg-do-not-report-racy-no-eligible-oom-tasks.patch added to -mm tree
  [not found] <20190109190306.rATpT%akpm@linux-foundation.org>
@ 2019-01-25 16:56 ` Johannes Weiner
  2019-01-25 17:24   ` Michal Hocko
  0 siblings, 1 reply; 6+ messages in thread
From: Johannes Weiner @ 2019-01-25 16:56 UTC (permalink / raw)
To: akpm; +Cc: mm-commits, penguin-kernel, mhocko, cgroups, linux-mm, linux-kernel

On Wed, Jan 09, 2019 at 11:03:06AM -0800, akpm@linux-foundation.org wrote:
> The patch titled
>      Subject: memcg: do not report racy no-eligible OOM tasks
> has been added to the -mm tree.  Its filename is
>      memcg-do-not-report-racy-no-eligible-oom-tasks.patch
>
> This patch should soon appear at
>      http://ozlabs.org/~akpm/mmots/broken-out/memcg-do-not-report-racy-no-eligible-oom-tasks.patch
> and later at
>      http://ozlabs.org/~akpm/mmotm/broken-out/memcg-do-not-report-racy-no-eligible-oom-tasks.patch
>
> Before you just go and hit "reply", please:
>    a) Consider who else should be cc'ed
>    b) Prefer to cc a suitable mailing list as well
>    c) Ideally: find the original patch on the mailing list and do a
>       reply-to-all to that, adding suitable additional cc's
>
> *** Remember to use Documentation/process/submit-checklist.rst when
>     testing your code ***
>
> The -mm tree is included into linux-next and is updated
> there every 3-4 working days
>
> ------------------------------------------------------
> From: Michal Hocko <mhocko@suse.com>
> Subject: memcg: do not report racy no-eligible OOM tasks
>
> Tetsuo has reported [1] that a single process group memcg might easily
> swamp the log with no-eligible oom victim reports due to a race between
> the memcg charge and the oom_reaper:
>
> Thread 1                Thread 2                    oom_reaper
> try_charge              try_charge
>                           mem_cgroup_out_of_memory
>                             mutex_lock(oom_lock)
>   mem_cgroup_out_of_memory
>     mutex_lock(oom_lock)
>                             out_of_memory
>                               select_bad_process
>                               oom_kill_process(current)
>                               wake_oom_reaper
>                                                     oom_reap_task
>                                                     MMF_OOM_SKIP->victim
>                             mutex_unlock(oom_lock)
>     out_of_memory
>       select_bad_process # no task
>
> If Thread 1 didn't race it would bail out from try_charge and force the
> charge.  We can achieve the same by checking tsk_is_oom_victim inside
> the oom_lock and therefore close the race.
>
> [1] http://lkml.kernel.org/r/bb2074c0-34fe-8c2c-1c7d-db71338f1e7f@i-love.sakura.ne.jp
> Link: http://lkml.kernel.org/r/20190107143802.16847-3-mhocko@kernel.org
> Signed-off-by: Michal Hocko <mhocko@suse.com>
> Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

It looks like this problem is happening in production systems:

https://www.spinics.net/lists/cgroups/msg21268.html

where the threads don't exit because they are trapped writing out the
oom messages to a slow console (running the reproducer from this email
thread triggers the oom flooding).

So IMO we should put this into 5.0 and add:

Fixes: 29ef680ae7c2 ("memcg, oom: move out_of_memory back to the charge path")
Fixes: 3100dab2aa09 ("mm: memcontrol: print proper OOM header when no eligible victim left")
Cc: stable@kernel.org # 4.19+

> --- a/mm/memcontrol.c~memcg-do-not-report-racy-no-eligible-oom-tasks
> +++ a/mm/memcontrol.c
> @@ -1387,10 +1387,22 @@ static bool mem_cgroup_out_of_memory(str
>  		.gfp_mask = gfp_mask,
>  		.order = order,
>  	};
> -	bool ret;
> +	bool ret = true;

Should this be false if we skip the oom kill, btw?

Either will result in a forced charge - false will do so right away,
true will retry once and then trigger the victim check in try_charge().
It's just weird to return true when we didn't do what the caller asked
us to do.
>  	mutex_lock(&oom_lock);
> +
> +	/*
> +	 * multi-threaded tasks might race with oom_reaper and gain
> +	 * MMF_OOM_SKIP before reaching out_of_memory which can lead
> +	 * to out_of_memory failure if the task is the last one in
> +	 * memcg which would be a false positive failure reported
> +	 */
> +	if (tsk_is_oom_victim(current))
> +		goto unlock;
> +
>  	ret = out_of_memory(&oc);
> +
> +unlock:
>  	mutex_unlock(&oom_lock);
>  	return ret;

^ permalink raw reply	[flat|nested] 6+ messages in thread
* Re: + memcg-do-not-report-racy-no-eligible-oom-tasks.patch added to -mm tree
  2019-01-25 16:56 ` + memcg-do-not-report-racy-no-eligible-oom-tasks.patch added to -mm tree Johannes Weiner
@ 2019-01-25 17:24   ` Michal Hocko
  2019-01-25 18:33     ` Johannes Weiner
  2019-01-28 18:26     ` Andrew Morton
  0 siblings, 2 replies; 6+ messages in thread
From: Michal Hocko @ 2019-01-25 17:24 UTC (permalink / raw)
To: Johannes Weiner
Cc: akpm, mm-commits, penguin-kernel, cgroups, linux-mm, linux-kernel

On Fri 25-01-19 11:56:24, Johannes Weiner wrote:
> On Wed, Jan 09, 2019 at 11:03:06AM -0800, akpm@linux-foundation.org wrote:
> > The patch titled
> >      Subject: memcg: do not report racy no-eligible OOM tasks
> > has been added to the -mm tree.  Its filename is
> >      memcg-do-not-report-racy-no-eligible-oom-tasks.patch
> >
[...]
> > ------------------------------------------------------
> > From: Michal Hocko <mhocko@suse.com>
> > Subject: memcg: do not report racy no-eligible OOM tasks
> >
> > Tetsuo has reported [1] that a single process group memcg might easily
> > swamp the log with no-eligible oom victim reports due to a race between
> > the memcg charge and the oom_reaper
> >
[...]
> > If Thread1 didn't race it would bail out from try_charge and force the
> > charge.  We can achieve the same by checking tsk_is_oom_victim inside
> > the oom_lock and therefore close the race.
> >
> > [1] http://lkml.kernel.org/r/bb2074c0-34fe-8c2c-1c7d-db71338f1e7f@i-love.sakura.ne.jp
> > Link: http://lkml.kernel.org/r/20190107143802.16847-3-mhocko@kernel.org
> > Signed-off-by: Michal Hocko <mhocko@suse.com>
> > Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
> > Cc: Johannes Weiner <hannes@cmpxchg.org>
> > Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
>
> It looks like this problem is happening in production systems:
>
> https://www.spinics.net/lists/cgroups/msg21268.html
>
> where the threads don't exit because they are trapped writing out the
> oom messages to a slow console (running the reproducer from this email
> thread triggers the oom flooding).
>
> So IMO we should put this into 5.0 and add:

Please note that Tetsuo has found out that this will not work with the
CLONE_VM without CLONE_SIGHAND cases and his
http://lkml.kernel.org/r/01370f70-e1f6-ebe4-b95e-0df21a0bc15e@i-love.sakura.ne.jp
should handle this case as well.  I've only had objections to the
changelog but other than that the patch looked sensible to me.
--
Michal Hocko
SUSE Labs
* Re: + memcg-do-not-report-racy-no-eligible-oom-tasks.patch added to -mm tree
  2019-01-25 17:24 ` Michal Hocko
@ 2019-01-25 18:33   ` Johannes Weiner
  2019-01-26  1:09     ` Tetsuo Handa
  0 siblings, 1 reply; 6+ messages in thread
From: Johannes Weiner @ 2019-01-25 18:33 UTC (permalink / raw)
To: Michal Hocko
Cc: akpm, mm-commits, penguin-kernel, cgroups, linux-mm, linux-kernel

On Fri, Jan 25, 2019 at 06:24:16PM +0100, Michal Hocko wrote:
> On Fri 25-01-19 11:56:24, Johannes Weiner wrote:
> > On Wed, Jan 09, 2019 at 11:03:06AM -0800, akpm@linux-foundation.org wrote:
[...]
> > It looks like this problem is happening in production systems:
> >
> > https://www.spinics.net/lists/cgroups/msg21268.html
> >
> > where the threads don't exit because they are trapped writing out the
> > oom messages to a slow console (running the reproducer from this email
> > thread triggers the oom flooding).
> >
> > So IMO we should put this into 5.0 and add:
>
> Please note that Tetsuo has found out that this will not work with the
> CLONE_VM without CLONE_SIGHAND cases and his
> http://lkml.kernel.org/r/01370f70-e1f6-ebe4-b95e-0df21a0bc15e@i-love.sakura.ne.jp
> should handle this case as well.  I've only had objections to the
> changelog but other than that the patch looked sensible to me.

I see.  Yeah that looks reasonable to me too.

Tetsuo, could you include the Fixes: and CC: stable in your patch?
* Re: + memcg-do-not-report-racy-no-eligible-oom-tasks.patch added to -mm tree
  2019-01-25 18:33 ` Johannes Weiner
@ 2019-01-26  1:09   ` Tetsuo Handa
  0 siblings, 0 replies; 6+ messages in thread
From: Tetsuo Handa @ 2019-01-26  1:09 UTC (permalink / raw)
To: Johannes Weiner, Michal Hocko
Cc: akpm, mm-commits, cgroups, linux-mm, linux-kernel, Linus Torvalds

On 2019/01/26 3:33, Johannes Weiner wrote:
> On Fri, Jan 25, 2019 at 06:24:16PM +0100, Michal Hocko wrote:
>> On Fri 25-01-19 11:56:24, Johannes Weiner wrote:
>>> It looks like this problem is happening in production systems:
>>>
>>> https://www.spinics.net/lists/cgroups/msg21268.html
>>>
>>> where the threads don't exit because they are trapped writing out the
>>> oom messages to a slow console (running the reproducer from this email
>>> thread triggers the oom flooding).
>>>
>>> So IMO we should put this into 5.0 and add:
>>
>> Please note that Tetsuo has found out that this will not work with the
>> CLONE_VM without CLONE_SIGHAND cases and his
>> http://lkml.kernel.org/r/01370f70-e1f6-ebe4-b95e-0df21a0bc15e@i-love.sakura.ne.jp
>> should handle this case as well.  I've only had objections to the
>> changelog but other than that the patch looked sensible to me.
>
> I see.  Yeah that looks reasonable to me too.
>
> Tetsuo, could you include the Fixes: and CC: stable in your patch?

Andrew Morton is still offline.  Do we want to ask Linus Torvalds?
* Re: + memcg-do-not-report-racy-no-eligible-oom-tasks.patch added to -mm tree
  2019-01-25 17:24 ` Michal Hocko
  2019-01-25 18:33   ` Johannes Weiner
@ 2019-01-28 18:26   ` Andrew Morton
  2019-01-28 18:43     ` Michal Hocko
  1 sibling, 1 reply; 6+ messages in thread
From: Andrew Morton @ 2019-01-28 18:26 UTC (permalink / raw)
To: Michal Hocko
Cc: Johannes Weiner, mm-commits, penguin-kernel, cgroups, linux-mm, linux-kernel

On Fri, 25 Jan 2019 18:24:16 +0100 Michal Hocko <mhocko@kernel.org> wrote:

> > > out_of_memory
> > > select_bad_process # no task
> > >
> > > If Thread1 didn't race it would bail out from try_charge and force the
> > > charge.  We can achieve the same by checking tsk_is_oom_victim inside
> > > the oom_lock and therefore close the race.
> > >
> > > [1] http://lkml.kernel.org/r/bb2074c0-34fe-8c2c-1c7d-db71338f1e7f@i-love.sakura.ne.jp
> > > Link: http://lkml.kernel.org/r/20190107143802.16847-3-mhocko@kernel.org
> > > Signed-off-by: Michal Hocko <mhocko@suse.com>
> > > Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
> > > Cc: Johannes Weiner <hannes@cmpxchg.org>
> > > Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> >
> > It looks like this problem is happening in production systems:
> >
> > https://www.spinics.net/lists/cgroups/msg21268.html
> >
> > where the threads don't exit because they are trapped writing out the
> > oom messages to a slow console (running the reproducer from this email
> > thread triggers the oom flooding).
> >
> > So IMO we should put this into 5.0 and add:
>
> Please note that Tetsuo has found out that this will not work with the
> CLONE_VM without CLONE_SIGHAND cases and his
> http://lkml.kernel.org/r/01370f70-e1f6-ebe4-b95e-0df21a0bc15e@i-love.sakura.ne.jp
> should handle this case as well.  I've only had objections to the
> changelog but other than that the patch looked sensible to me.
So I think you're saying that

  mm-oom-marks-all-killed-tasks-as-oom-victims.patch
and
  memcg-do-not-report-racy-no-eligible-oom-tasks.patch

should be dropped and that "[PATCH v2] memcg: killed threads should not
invoke memcg OOM killer" should be redone with some changelog
alterations and should be merged instead?
* Re: + memcg-do-not-report-racy-no-eligible-oom-tasks.patch added to -mm tree
  2019-01-28 18:26 ` Andrew Morton
@ 2019-01-28 18:43   ` Michal Hocko
  0 siblings, 0 replies; 6+ messages in thread
From: Michal Hocko @ 2019-01-28 18:43 UTC (permalink / raw)
To: Andrew Morton
Cc: Johannes Weiner, mm-commits, penguin-kernel, cgroups, linux-mm, linux-kernel

On Mon 28-01-19 10:26:16, Andrew Morton wrote:
> On Fri, 25 Jan 2019 18:24:16 +0100 Michal Hocko <mhocko@kernel.org> wrote:
[...]
> > Please note that Tetsuo has found out that this will not work with the
> > CLONE_VM without CLONE_SIGHAND cases and his
> > http://lkml.kernel.org/r/01370f70-e1f6-ebe4-b95e-0df21a0bc15e@i-love.sakura.ne.jp
> > should handle this case as well.  I've only had objections to the
> > changelog but other than that the patch looked sensible to me.
>
> So I think you're saying that
>
>   mm-oom-marks-all-killed-tasks-as-oom-victims.patch
> and
>   memcg-do-not-report-racy-no-eligible-oom-tasks.patch
>
> should be dropped and that "[PATCH v2] memcg: killed threads should not
> invoke memcg OOM killer" should be redone with some changelog
> alterations and should be merged instead?

Yup.
--
Michal Hocko
SUSE Labs