Date: Fri, 5 Feb 2021 13:28:01 -0800
From: Minchan Kim
To: John Hubbard
Cc: Andrew Morton, gregkh@linuxfoundation.org, surenb@google.com, joaodias@google.com, LKML, linux-mm
Subject: Re: [PATCH] mm: cma: support sysfs
In-Reply-To: <269689b7-3b6d-55dc-9044-fbf2984089ab@nvidia.com>

On Fri, Feb 05, 2021 at 12:25:52PM -0800, John Hubbard wrote:
> On 2/5/21 8:15 AM, Minchan Kim wrote:
> ...
> > > Yes, approximately. I was wondering if this would suffice at least as a baseline:
> > >
> > > cma_alloc_success 125
> > > cma_alloc_failure 25
> >
> > IMO, regardless of my patch, it would be good to have such statistics:
> > CMA was born to replace carved-out memory with dynamic allocation,
> > ideally for memory efficiency, so a failure should be regarded as
> > critical and the admin should be able to notice how the system is hurt.
>
> Right. So CMA failures are useful for the admin to see, understood.
>
> > Anyway, it's not enough for me and is orthogonal to my goal.
>
> OK. But...what *is* your goal, and why is this useless (that's what
> orthogonal really means here) for your goal?

As I mentioned, the goal is to monitor failures from each CMA area,
since each area has its own purpose.

Let's take an example. A system has five CMA areas, and each area is
tied to one user scenario; each scenario gets an exclusive CMA area to
avoid fragmentation problems.

CMA-1 depends on bluetooth
CMA-2 depends on WiFi
CMA-3 depends on sensor-A
CMA-4 depends on sensor-B
CMA-5 depends on sensor-C

With this, we can catch which module was affected; with only a global
failure count, I couldn't find out who was affected.
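To make that concrete, here is a minimal sketch of the kind of per-CMA
counter I have in mind, exposed through sysfs. This is illustrative
only: the struct, field, and attribute names below are placeholders,
not the actual patch.

    /*
     * Hypothetical per-CMA stats object, one instance per CMA area,
     * surfaced as e.g. /sys/kernel/mm/cma/<area>/alloc_failures.
     */
    #include <linux/atomic.h>
    #include <linux/kobject.h>
    #include <linux/sysfs.h>

    struct cma_stat {
        struct kobject kobj;          /* embedded in the sysfs directory */
        atomic64_t alloc_attempts;    /* placeholder counter names */
        atomic64_t alloc_failures;
    };

    static ssize_t alloc_failures_show(struct kobject *kobj,
                                       struct kobj_attribute *attr,
                                       char *buf)
    {
        struct cma_stat *stat = container_of(kobj, struct cma_stat, kobj);

        return sysfs_emit(buf, "%llu\n",
                          (u64)atomic64_read(&stat->alloc_failures));
    }

    static struct kobj_attribute alloc_failures_attr =
        __ATTR_RO(alloc_failures);

    /*
     * The attribute would be registered against the per-area kobject
     * (e.g. via sysfs_create_group()) when the area is initialized,
     * and cma_alloc() would bump the counters on each attempt:
     *
     *     atomic64_inc(&cma->stat->alloc_attempts);
     *     if (!page)
     *         atomic64_inc(&cma->stat->alloc_failures);
     */

With something like this, an admin who sees failures under CMA-3 knows
immediately that sensor-A is the module being hurt.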
> Also, would you be willing to try out something simple first,
> such as providing an indication that cma is active and its overall
> success rate, like this:
>
> /proc/vmstat:
>
> cma_alloc_success 125
> cma_alloc_failure 25
>
> ...or is the only way to provide the more detailed items, complete with
> per-CMA details, in a non-debugfs location?
>
> > > ...and then, to see if more is needed, some questions:
> > >
> > > a) Do you know of an upper bound on how many cma areas there can be
> > > (I think Matthew also asked that)?
> >
> > There is no upper bound since it's configurable.
>
> OK, thanks, so that pretty much rules out putting per-cma details into
> anything other than a directory or something like it.
>
> > > b) Is tracking the cma area really as valuable as other possibilities?
> > > We can put "a few" to "several" items here, so really want to get your
> > > very favorite bits of information in. If, for example, there can be
> > > *lots* of cma areas, then maybe tracking
> >
> > At this moment, allocation/failure counts for each CMA area, since each
> > area has its own particular use case, which makes it easy for me to see
> > which module will be affected. I think per-CMA statistics are very
> > useful as a minimum code change, so I want to enable them by default
> > under CONFIG_CMA && CONFIG_SYSFS.
> >
> > > by a range of allocation sizes is better...
> >
> > I take your suggestion to be something like this.
> >
> > [alloc_range] could be an order, or a range split by interval:
> >
> > /sys/kernel/mm/cma/cma-A/[alloc_range]/success
> > /sys/kernel/mm/cma/cma-A/[alloc_range]/fail
> > ..
> > ..
> > /sys/kernel/mm/cma/cma-Z/[alloc_range]/success
> > /sys/kernel/mm/cma/cma-Z/[alloc_range]/fail
>
> Actually, I meant, "ranges instead of cma areas", like this:
>
> /sys/kernel/mm/cma/[alloc_range]/success
> /sys/kernel/mm/cma/[alloc_range]/fail
> ...
>
> The idea is that knowing the allocation sizes that succeeded
> and failed is maybe even more interesting and useful than
> knowing the cma area that contains them.

I understand your point, but it would make it hard to find out who was
affected by a failure. That's why I suggested putting your idea behind
an additional config, since a per-CMA metric with simple success/failure
counts is enough for my case.

> > I agree it would also be useful, but I'd like to enable it under
> > CONFIG_CMA_SYSFS_ALLOC_RANGE as a separate patchset.
>
> I will stop harassing you very soon, just want to bottom out on
> understanding the real goals first. :)

I hope my example makes the goal clearer for you.
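For completeness, here is a rough sketch of how the [alloc_range]
variant above could bucket counters by allocation order. Again, the
names are placeholders, not the CONFIG_CMA_SYSFS_ALLOC_RANGE patchset
itself:

    /*
     * Hypothetical per-order buckets, surfaced as e.g.
     * /sys/kernel/mm/cma/<area>/order_<N>/{success,fail}.
     */
    #include <linux/atomic.h>
    #include <linux/minmax.h>

    #define CMA_STAT_MAX_ORDER 11   /* assumption: one bucket per order */

    struct cma_range_stat {
        atomic64_t success[CMA_STAT_MAX_ORDER];
        atomic64_t fail[CMA_STAT_MAX_ORDER];
    };

    static void cma_range_account(struct cma_range_stat *stat,
                                  unsigned int order, bool succeeded)
    {
        /* clamp oversized requests into the last bucket */
        unsigned int idx = min_t(unsigned int, order,
                                 CMA_STAT_MAX_ORDER - 1);

        if (succeeded)
            atomic64_inc(&stat->success[idx]);
        else
            atomic64_inc(&stat->fail[idx]);
    }

That would answer "which allocation sizes are failing", while the
per-CMA counters answer "which module is hurt" -- complementary views
rather than competing ones.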