From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Thu, 7 Sep 2017 10:03:24 -0500 (CDT)
From: Christopher Lameter
To: Roman Gushchin
cc: David Rientjes, nzimmer@sgi.com, holt@sgi.com, Michal Hocko, linux-mm@kvack.org, Vladimir Davydov, Johannes Weiner, Tetsuo Handa, Andrew Morton, Tejun Heo, kernel-team@fb.com, cgroups@vger.kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, sivanich@sgi.com
Subject: Re: [v7 5/5] mm, oom: cgroup v2 mount option to disable cgroup-aware OOM killer
In-Reply-To: <20170907145239.GA19022@castle.DHCP.thefacebook.com>
References: <20170904142108.7165-1-guro@fb.com> <20170904142108.7165-6-guro@fb.com> <20170905134412.qdvqcfhvbdzmarna@dhcp22.suse.cz> <20170905143021.GA28599@castle.dhcp.TheFacebook.com> <20170905151251.luh4wogjd3msfqgf@dhcp22.suse.cz> <20170905191609.GA19687@castle.dhcp.TheFacebook.com> <20170906084242.l4rcx6n3hdzxvil6@dhcp22.suse.cz> <20170906174043.GA12579@castle.DHCP.thefacebook.com> <20170907145239.GA19022@castle.DHCP.thefacebook.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
On Thu, 7 Sep 2017, Roman Gushchin wrote:

> > Really? From what I know and worked on way back when: The reason was to be
> > able to contain the affected application in a cpuset. Multiple apps may
> > have been running in multiple cpusets on a large NUMA machine, and an OOM
> > condition in one cpuset should not affect the others. It also helped to
> > isolate the application behavior causing the OOM in numerous cases.
> >
> > Doesn't this requirement transfer to cgroups in the same way?
>
> We have per-node memory stats and plan to use them during OOM victim
> selection. Hopefully it can help.

One of the OOM causes could be that memory was restricted to a certain
node set. Killing the allocating task is (was?) the default behavior in
that case, so that the task subject to the restrictions is killed, not
some other task that may not share the restrictions and would never
experience the OOM condition itself.