linux-mm.kvack.org archive mirror
From: John Hubbard <jhubbard@nvidia.com>
To: Suren Baghdasaryan <surenb@google.com>
Cc: Minchan Kim <minchan@kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	John Dias <joaodias@google.com>,
	LKML <linux-kernel@vger.kernel.org>,
	linux-mm <linux-mm@kvack.org>
Subject: Re: [PATCH] mm: cma: support sysfs
Date: Thu, 4 Feb 2021 16:34:17 -0800	[thread overview]
Message-ID: <96bc11de-fe47-c7d3-6e61-5a5a5b6d2f4c@nvidia.com> (raw)
In-Reply-To: <9900858e-4d9b-5111-e695-fd2bb7463af9@nvidia.com>

On 2/4/21 4:25 PM, John Hubbard wrote:
> On 2/4/21 3:45 PM, Suren Baghdasaryan wrote:
> ...
>>>>>> 2) The overall CMA allocation attempts/failures (first two items above) seem
>>>>>> an odd pair of things to track. Maybe that is what was easy to track, but I'd
>>>>>> vote for just omitting them.
>>>>>
>>>>> Then how would we know how often the CMA API failed?
>>>>
>>>> Why would you even need to know that, *in addition* to knowing specific
>>>> page allocation numbers that failed? Again, there is no real-world motivation
>>>> cited yet, just "this is good data". Need more stories and support here.
>>>
>>> IMHO it would be very useful to see whether there are multiple
>>> small-order allocation failures or a few large-order ones, especially
>>> for CMA where large allocations are not unusual. For that I believe
>>> both alloc_pages_attempt and alloc_pages_fail would be required.
>>
>> Sorry, I meant to say "both cma_alloc_fail and alloc_pages_fail would
>> be required".
> 
> So if you want to know that, the existing items are still a little too indirect
> to really get it right. You can only infer the average allocation size by
> dividing. Instead, we should provide the allocation size for each count.
> 
> The limited interface makes this a little awkward, but using zones/ranges could
> work: "for this range of allocation sizes, there were the following stats". Or,
> some other technique that I haven't thought of (maybe two items per file?) would
> be better.
> 
> On the other hand, there's an argument for keeping this minimal and simple. That
> would probably lead us to putting in a couple of items into /proc/vmstat, as I
> just mentioned in my other response, and calling it good.

...and remember: if we keep it nice and minimal and clean, we can put it into
/proc/vmstat and monitor it.

And then if a problem shows up, the more complex and advanced debugging data can
go into debugfs's CMA area. And you're all set.

If Android made up some policy not to use debugfs, then:

a) that probably won't prevent engineers from using it anyway, for advanced debugging,
and

b) If (a) somehow falls short, then we need to talk about what Android's plans are to
fill the need. And "fill up sysfs with debugfs items, possibly duplicating some of them,
and generally making an unnecessary mess, to compensate for not using debugfs" is not
my first choice. :)


thanks,
-- 
John Hubbard
NVIDIA


Thread overview: 31+ messages
2021-02-03 15:50 [PATCH] mm: cma: support sysfs Minchan Kim
2021-02-04  8:50 ` John Hubbard
2021-02-04 20:07   ` Minchan Kim
2021-02-04 23:14     ` John Hubbard
2021-02-04 23:43       ` Suren Baghdasaryan
2021-02-04 23:45         ` Suren Baghdasaryan
2021-02-05  0:25           ` John Hubbard
2021-02-05  0:34             ` John Hubbard [this message]
2021-02-05  1:44               ` Suren Baghdasaryan
2021-02-05  0:12       ` Minchan Kim
2021-02-05  0:24         ` John Hubbard
2021-02-05  1:44           ` Minchan Kim
2021-02-05  2:39             ` Suren Baghdasaryan
2021-02-05  2:52             ` John Hubbard
2021-02-05  5:17               ` Minchan Kim
2021-02-05  5:49                 ` John Hubbard
2021-02-05  6:24                   ` Minchan Kim
2021-02-05  6:41                     ` John Hubbard
2021-02-05 16:15                       ` Minchan Kim
2021-02-05 20:25                         ` John Hubbard
2021-02-05 21:28                           ` Minchan Kim
2021-02-05 21:52                             ` Suren Baghdasaryan
2021-02-05 21:58                               ` John Hubbard
2021-02-05 22:47                                 ` Minchan Kim
2021-02-06 17:08                                   ` Pintu Agarwal
2021-02-08  8:39                                     ` John Hubbard
2021-02-05 21:57                             ` John Hubbard
2021-02-05  2:55 ` Matthew Wilcox
2021-02-05  5:22   ` Minchan Kim
2021-02-05 12:12     ` Matthew Wilcox
2021-02-05 16:16       ` Minchan Kim
