Date: Mon, 22 Aug 2016 09:07:45 +0900
From: Minchan Kim <minchan@kernel.org>
To: Michal Hocko
Cc: Sonny Rao, Jann Horn, Robert Foss, Andrew Morton, Vlastimil Babka,
	Konstantin Khlebnikov, Hugh Dickins, Naoya Horiguchi, John Stultz,
	Johannes Weiner, Kees Cook, Al Viro, Cyrill Gorcunov, Robin Humble,
	David Rientjes, Janis Danisevskis, Alexey Dobriyan,
	"Kirill A. Shutemov", linux-kernel@vger.kernel.org, Ben Zhang,
	Bryan Freed, Filipe Brandenburger, Mateusz Guzik
Subject: Re: [PATCH v2 0/3] Implement /proc/<pid>/totmaps
Message-ID: <20160822000745.GA21441@bbox>
References: <336532d0-57f2-a430-d195-13c13f70e25a@collabora.com>
	<20160817082200.GA10547@dhcp22.suse.cz>
	<20160817093125.GA27782@pc.thejh.net>
	<20160817130320.GC20703@dhcp22.suse.cz>
	<20160818074433.GC30162@dhcp22.suse.cz>
	<20160818180104.GS30162@dhcp22.suse.cz>
	<20160819022634.GA14206@bbox>
	<20160819080532.GC32619@dhcp22.suse.cz>
In-Reply-To: <20160819080532.GC32619@dhcp22.suse.cz>

On Fri, Aug 19, 2016 at 10:05:32AM +0200, Michal Hocko wrote:
> On Fri 19-08-16 11:26:34, Minchan Kim wrote:
> > Hi Michal,
> >
> > On Thu, Aug 18, 2016 at 08:01:04PM +0200, Michal Hocko wrote:
> > > On Thu 18-08-16 10:47:57, Sonny Rao wrote:
> > > > On Thu, Aug 18, 2016 at 12:44 AM, Michal Hocko wrote:
> > > > > On Wed 17-08-16 11:57:56, Sonny Rao wrote:
> > > [...]
> > > > >> 2) User space OOM handling -- we'd rather do a more graceful shutdown
> > > > >> than let the kernel's OOM killer activate, and we need to gather this
> > > > >> information and be able to get it to make the decision much faster
> > > > >> than 400ms
> > > > >
> > > > > Global OOM handling in userspace is really dubious if you ask me. I
> > > > > understand you want something better than SIGKILL, and in fact this is
> > > > > already possible with the memory cgroup controller (btw. memcg will
> > > > > give you cheap access to rss and the amount of shared and swapped-out
> > > > > memory as well). Anyway, if you are getting close to OOM your system
> > > > > will most probably be really busy, and chances are that reading your
> > > > > new file will take much more time as well. I am also not quite sure
> > > > > how pss is useful for oom decisions.
> > > >
> > > > I mentioned it before, but based on experience RSS just isn't good
> > > > enough -- there's too much sharing going on in our use case to make
> > > > the correct decision based on RSS. If RSS were good enough, simply
> > > > put, this patch wouldn't exist.
> > >
> > > But that doesn't answer my question, I am afraid. So how exactly do you
> > > use pss for oom decisions?
> >
> > My case is not about OOM decisions, but I agree it would be great if we
> > could get a *fast* smaps summary.
> >
> > PSS is a really great tool for figuring out how processes consume
> > memory, more accurately than RSS. We have been using it for per-process
> > memory monitoring. Although we don't use it for OOM decisions, it would
> > be great if it were sped up, because we don't want to spend much CPU
> > time on mere monitoring.
> >
> > For our use case we don't need AnonHugePages, ShmemPmdMapped,
> > Shared_Hugetlb, Private_Hugetlb, KernelPageSize or MMUPageSize, because
> > we never enable THP or hugetlb. Additionally, Locked can be derived
> > from the vma flags, so we don't need it either. We don't even need the
> > address ranges for plain monitoring, when we aren't investigating
> > anything in detail.
> >
> > Although none of that is severe overhead on its own, why emit useless
> > information? And it keeps bloating, day by day. :( With that, userspace
> > tools have to spend more time parsing it, which is pointless.
>
> So far it doesn't really seem that the parsing is the biggest problem.
> The major cycles killer is the output formatting and that doesn't sound

I cannot see how the kernel side would be the more expensive part. Hmm.
I tested your test program on my machine.

#!/bin/sh
./smap_test &
pid=$!

for i in $(seq 25)
do
	cat /proc/$pid/smaps > /dev/null
done
kill $pid

root@bbox:/home/barrios/test/smap# time ./s_v.sh
pid:21925

real	0m3.365s
user	0m0.031s
sys	0m3.046s

vs.

#!/bin/sh
./smap_test &
pid=$!

for i in $(seq 25)
do
	awk '/^Rss/{rss+=$2} /^Pss/{pss+=$2} END {}' \
		/proc/$pid/smaps
done
kill $pid

root@bbox:/home/barrios/test/smap# time ./s.sh
pid:21973

real	0m17.812s
user	0m12.612s
sys	0m5.187s

perf report says

    39.56%  awk  gawk               [.] dfaexec
     7.61%  awk  [kernel.kallsyms]  [k] format_decode
     6.37%  awk  gawk               [.] avoid_dfa
     5.85%  awk  gawk               [.] interpret
     5.69%  awk  [kernel.kallsyms]  [k] __memcpy
     4.37%  awk  [kernel.kallsyms]  [k] vsnprintf
     2.69%  awk  [kernel.kallsyms]  [k] number.isra.13
     2.10%  awk  gawk               [.] research
     1.91%  awk  gawk               [.] 0x00000000000351d0
     1.49%  awk  gawk               [.] free_wstr
     1.27%  awk  gawk               [.] unref
     1.19%  awk  gawk               [.] reset_record
     0.95%  awk  gawk               [.] set_record
     0.95%  awk  gawk               [.] get_field
     0.94%  awk  [kernel.kallsyms]  [k] show_smap

Parsing is much more expensive than the kernel side.
Could you retest your test program?
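(As an aside: for the monitoring-only case, the memcg counters you
mention above really are cheap to read, since they are plain counters
rather than a pte walk over the address space. A rough sketch, assuming
the cgroup v1 memory controller is mounted at /sys/fs/cgroup/memory and
a hypothetical group named "foo":

#!/bin/sh
# Rough sketch: read per-cgroup counters instead of walking smaps.
# Assumes the v1 memory controller at /sys/fs/cgroup/memory and a
# hypothetical group "foo"; adjust both for a real setup.
# Note: the "swap" field appears only with swap accounting enabled.
G=/sys/fs/cgroup/memory/foo
awk '$1 == "rss" || $1 == "mapped_file" || $1 == "swap"' "$G/memory.stat"

That covers rss/shared/swap cheaply, but of course not PSS, which is
the number we actually monitor.)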
> like a problem we are not able to address. And I would even argue that
> we want to address it in a generic way as much as possible.

Sure. What solution do you have in mind as the generic way?

> > Having said that, I'm no fan of creating a new stat knob for this,
> > either. How about appending the summary information at the end of
> > smaps? Then monitoring users can just open the file, lseek to
> > (end - 1) and read only the summary.
>
> That might confuse existing parsers. Besides, we already have
> /proc/<pid>/statm which gives cumulative numbers. I am not sure how
> often it is used and whether the pte walk is too expensive for
> existing users, but that should be explored and evaluated before a new
> file is created.
>
> /proc has become a dump of everything people found interesting, just
> because we were too easy in allowing those additions. Do not repeat
> those mistakes, please!
> --
> Michal Hocko
> SUSE Labs
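(On the /proc/<pid>/statm route: it is a single short line of page
counts (size resident shared text lib data dt per proc(5)), so a
cumulative read is trivial to parse. A rough sketch, looking up the
page size instead of assuming 4 KiB:

#!/bin/sh
# Rough sketch: cumulative RSS and shared pages via /proc/<pid>/statm.
# statm fields are counts of pages: size resident shared text lib data dt.
pid=$1
kb=$(( $(getconf PAGESIZE) / 1024 ))
awk -v kb="$kb" '{ printf "rss: %d kB shared: %d kB\n", $2 * kb, $3 * kb }' \
	/proc/$pid/statm

Like RSS, though, the resident counter there cannot tell us what
proportion of the memory is shared with other processes, which is
exactly the problem PSS solves.)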