From mboxrd@z Thu Jan 1 00:00:00 1970
From: Michal Hocko
Subject: Re: [RFC 1/6] mm, page_alloc: fix more premature OOM due to race with cpuset update
Date: Thu, 18 May 2017 11:08:47 +0200
Message-ID: <20170518090846.GD25462@dhcp22.suse.cz>
References: <20170517092042.GH18247@dhcp22.suse.cz>
 <20170517140501.GM18247@dhcp22.suse.cz>
 <20170517145645.GO18247@dhcp22.suse.cz>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Return-path:
Content-Disposition: inline
In-Reply-To:
Sender: linux-api-owner-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
To: Christoph Lameter
Cc: Vlastimil Babka, linux-mm-Bw31MaZKKs3YtjvyW6yDsg@public.gmane.org,
 linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
 cgroups-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, Li Zefan, Mel Gorman,
 David Rientjes, Hugh Dickins, Andrea Arcangeli, Anshuman Khandual,
 "Kirill A. Shutemov", linux-api-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
List-Id: linux-api@vger.kernel.org

On Wed 17-05-17 10:25:09, Christoph Lameter wrote:
> On Wed, 17 May 2017, Michal Hocko wrote:
>
> > > If you have screwy things like static mbinds in there then you are
> > > hopelessly lost anyways. You may have moved the process to another set
> > > of nodes but the static bindings may refer to a node no longer
> > > available. Thus the OOM is legitimate.
> >
> > The point is that you do _not_ want such a process to trigger the OOM
> > because it can cause other processes being killed.
>
> Nope. The OOM in a cpuset gets the process doing the alloc killed. Or has
> that changed?
>
> At this point you have messed up royally and nothing is going to rescue
> you anyways. OOM or not does not matter anymore. The app will fail.

Not really. If you can trick the system into _thinking_ that the
intersection between the mempolicy and the cpuset is empty, then the OOM
killer might kill an innocent task rather than the one which tricked it
into that situation.
-- 
Michal Hocko
SUSE Labs