Subject: Re: [PATCH] mm/oom_kill: count global and memory cgroup oom kills
From: Konstantin Khlebnikov
To: David Rientjes
Cc: Roman Guschin, linux-mm@kvack.org, Andrew Morton, Tejun Heo,
 cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, Vlastimil Babka,
 Michal Hocko, hannes@cmpxchg.org
Date: Tue, 23 May 2017 13:32:17 +0300
Message-ID: <0f67046d-cdf6-1264-26f6-11c82978c621@yandex-team.ru>
References: <149520375057.74196.2843113275800730971.stgit@buzz>

On 23.05.2017 10:49, David Rientjes wrote:
> On Mon, 22 May 2017, Konstantin Khlebnikov wrote:
>
>> Nope, they are different. I think we should rephrase the documentation somehow:
>>
>> low - count of reclaims below low level
>> high - count of post-allocation reclaims above high level
>> max - count of direct reclaims
>> oom - count of failed direct reclaims
>> oom_kill - count of oom killer invocations and killed processes
>>
>
> In our kernel, we've maintained counts of oom kills per memcg for years as
> part of memory.oom_control for memcg v1, but we've also found it helpful
> to complement that with another count that specifies the number of
> processes oom killed that were attached to that exact memcg.
>
> In your patch, oom_kill in memory.oom_control specifies the number of oom
> events that resulted in an oom kill of a process from that hierarchy, but
> not the number of processes killed from a specific memcg (the difference
> between oc->memcg and mem_cgroup_from_task(victim)). Not sure if you
> would also find it helpful.
>

This is a worthwhile addition. Let's call it "oom_victim" for short.

It allows locating the leaky part when leaks are spread over sub-containers
within a common limit, but it doesn't tell which limit caused the kill.
For hierarchical limits that might be hard to determine.

I think oom_kill better suits automatic actions - restart the affected
hierarchy, increase limits, etc. - while oom_victim allows determining
which container was affected by the global oom killer.

So it's probably worth merging them and having the global oom killer
increment oom_kill in the victim's memcg as well:

	if (!is_memcg_oom(oc)) {
		count_vm_event(OOM_KILL);
		mem_cgroup_count_vm_event(mm, OOM_KILL);
	} else {
		mem_cgroup_event(oc->memcg, OOM_KILL);
	}