From: Konstantin Khlebnikov
To: David Rientjes
Cc: Roman Guschin, linux-mm@kvack.org, Andrew Morton, Tejun Heo,
 cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, Vlastimil Babka,
 Michal Hocko, hannes@cmpxchg.org
Subject: Re: [PATCH] mm/oom_kill: count global and memory cgroup oom kills
Date: Thu, 25 May 2017 11:44:41 +0300
References: <149520375057.74196.2843113275800730971.stgit@buzz>
 <0f67046d-cdf6-1264-26f6-11c82978c621@yandex-team.ru>

On 24.05.2017 23:43, David Rientjes wrote:
> On Tue, 23 May 2017, Konstantin Khlebnikov wrote:
>
>> This is a worthwhile addition. Let's call it "oom_victim" for short.
>>
>> It allows locating the leaky part when it is spread over sub-containers
>> within a common limit, but it doesn't tell which limit caused the kill.
>> For hierarchical limits that might not be so easy to work out.
>>
>> I think oom_kill is better suited for automatic actions: restart the
>> affected hierarchy, increase limits, etc. But oom_victim allows
>> determining which container was affected by the global oom killer.
>>
>> So it's probably worth merging them together and having the global
>> killer increment oom_kill for the victim's memcg:
>>
>> if (!is_memcg_oom(oc)) {
>> 	count_vm_event(OOM_KILL);
>> 	mem_cgroup_count_vm_event(mm, OOM_KILL);
>> } else
>> 	mem_cgroup_event(oc->memcg, OOM_KILL);
>>
>
> Our complete solution is that we have a complementary
> memory.oom_kill_control that allows users to register for eventfd(2)
> notification when the kernel oom killer kills a victim, but this is
> because we have had complete support for userspace oom handling for
> years. When read, it exports three classes of information:
>
>  - the "total" (hierarchical) and "local" (memcg-specific) number of oom
>    kills for system oom conditions (overcommit),
>
>  - the "total" and "local" number of oom kills for memcg oom conditions,
>    and
>
>  - the total number of processes in the hierarchy where an oom victim
>    was reaped successfully and unsuccessfully.
>
> One benefit of this is that it prevents us from having to scrape the
> kernel log for oom events, which has been troublesome in the past;
> userspace can still do so easily when the eventfd triggers for the kill
> notification.

Ok. I've decided to simplify this and count kills in the cgroup where the
task lived, like page faults, and to show the total count of kills of any
kind in vmstat.

Simply:

	count_vm_event(OOM_KILL);
	mem_cgroup_count_vm_event(mm, OOM_KILL);
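
In context, those two calls would land roughly like this in
oom_kill_process() in mm/oom_kill.c (a sketch only, not the actual patch:
the victim selection and mm pinning are elided, and OOM_KILL is assumed to
be the new vm_event_item this series adds):

	static void oom_kill_process(struct oom_control *oc, const char *message)
	{
		struct task_struct *victim = oc->chosen;
		struct mm_struct *mm;

		/* ... take victim->mm under task_lock() and pin it in mm ... */

		/*
		 * Account the kill both globally and against the memcg the
		 * victim task lived in, the same way page faults are counted.
		 */
		count_vm_event(OOM_KILL);
		mem_cgroup_count_vm_event(mm, OOM_KILL);

		/* ... send SIGKILL, wake the oom reaper, drop the mm reference ... */
	}

This charges the kill to the victim's own memcg regardless of which limit
(or the global context) triggered the oom, which is the simplification
described above.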