From mboxrd@z Thu Jan 1 00:00:00 1970
From: Michal Hocko
Subject: Re: [RFC 1/6] mm, page_alloc: fix more premature OOM due to race with cpuset update
Date: Wed, 17 May 2017 16:56:45 +0200
Message-ID: <20170517145645.GO18247@dhcp22.suse.cz>
References: <20170517092042.GH18247@dhcp22.suse.cz>
 <20170517140501.GM18247@dhcp22.suse.cz>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Return-path:
Content-Disposition: inline
In-Reply-To:
Sender: linux-api-owner-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
To: Christoph Lameter
Cc: Vlastimil Babka, linux-mm-Bw31MaZKKs3YtjvyW6yDsg@public.gmane.org,
 linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
 cgroups-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, Li Zefan, Mel Gorman,
 David Rientjes, Hugh Dickins, Andrea Arcangeli, Anshuman Khandual,
 "Kirill A. Shutemov", linux-api-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
List-Id: linux-api@vger.kernel.org

On Wed 17-05-17 09:48:25, Christoph Lameter wrote:
> On Wed, 17 May 2017, Michal Hocko wrote:
>
> > > > So how are you going to distinguish VM_FAULT_OOM from an empty
> > > > mempolicy case in a raceless way?
> > >
> > > You don't have to do that if you do not create an empty mempolicy in
> > > the first place. The current kernel code avoids that by first allowing
> > > access to the new set of nodes and removing the old ones from the set
> > > when done.
> >
> > which is racy, as Vlastimil pointed out. If we simply fail such an
> > allocation the failure will go up the call chain until we hit the OOM
> > killer due to VM_FAULT_OOM. How would you want to handle that?
>
> The race is where? If you expand the node set during the move of the
> application then you are safe in terms of the legacy apps that did not
> include static bindings.

I am pretty sure it is described in those changelogs and I won't repeat
it here.

> If you have screwy things like static mbinds in there then you are
> hopelessly lost anyways. You may have moved the process to another set
> of nodes but the static bindings may refer to a node no longer
> available. Thus the OOM is legitimate.

The point is that you do _not_ want such a process to trigger the OOM
killer, because it can cause other processes to be killed.

> At least a user space app could inspect the situation and come up with
> custom ways of dealing with the mess.

I do not really see how this would help to prevent a malicious user from
playing tricks.
--
Michal Hocko
SUSE Labs