linux-kernel.vger.kernel.org archive mirror
From: John Hubbard <jhubbard@nvidia.com>
To: Minchan Kim <minchan@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	<gregkh@linuxfoundation.org>, <surenb@google.com>,
	<joaodias@google.com>, LKML <linux-kernel@vger.kernel.org>,
	linux-mm <linux-mm@kvack.org>
Subject: Re: [PATCH] mm: cma: support sysfs
Date: Thu, 4 Feb 2021 18:52:01 -0800	[thread overview]
Message-ID: <71c4ce84-8be7-49e2-90bd-348762b320b4@nvidia.com> (raw)
In-Reply-To: <YByi/gdaGJeV/+8b@google.com>

On 2/4/21 5:44 PM, Minchan Kim wrote:
> On Thu, Feb 04, 2021 at 04:24:20PM -0800, John Hubbard wrote:
>> On 2/4/21 4:12 PM, Minchan Kim wrote:
>> ...
>>>>> Then, how to know how often CMA API failed?
>>>>
>>>> Why would you even need to know that, *in addition* to knowing specific
>>>> page allocation numbers that failed? Again, there is no real-world motivation
>>>> cited yet, just "this is good data". Need more stories and support here.
>>>
>>> Let me give an example.
>>>
>>> Let's assume we use memory buffer allocation via CMA to enable
>>> bluetooth on a device.
>>> If the user taps the bluetooth button on the phone but the CMA
>>> allocation fails, the user will still see the bluetooth button grayed
>>> out. The user assumes the tap didn't register, so they tap again;
>>> if the CMA allocation succeeds this time, they will see the
>>> bluetooth button enabled and can listen to music.
>>>
>>> Here, the product team needs to monitor how often CMA allocation
>>> fails, so that if the failure ratio steadily rises above the bar,
>>> engineers know to investigate.
>>>
>>> Make sense?
>>>
>>
>> Yes, except that it raises more questions:
>>
>> 1) Isn't this just a standard allocation failure? Don't you already have a way
>> to track that?
>>
>> Presumably, having the source code, you can easily deduce that a bluetooth
>> allocation failure goes directly to a CMA allocation failure, right?

Still wondering about this...

>>
>> Anyway, even though the above is still a little murky, I expect you're right
>> that it's good to have *some* indication, somewhere about CMA behavior...
>>
>> Thinking about this some more, I wonder if this is really /proc/vmstat sort
>> of data that we're talking about. It seems to fit right in there, yes?
> 
> The thing is, there are multiple CMA instances (cma-A, cma-B, cma-C),
> and each CMA heap has its own specific scenario. /proc/vmstat could
> become badly bloated as the number of CMA instances grows.
> 

Yes, that would not fit in /proc/vmstat...assuming that you really require
knowing--at this point--which CMA heap is involved. And that's worth poking
at. If you get an overall indication in vmstat that CMA is having trouble,
then maybe that's all you need to start digging further.
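To make the monitoring idea concrete, here is a rough sketch of how a userspace health check might consume per-heap counters. The sysfs layout and file names (`/sys/kernel/mm/cma/<heap>/alloc_pages_success`, `alloc_pages_fail`) are purely illustrative assumptions; the actual interface was still under discussion in this thread:

```python
from pathlib import Path

# Hypothetical sysfs layout; the real file names were not yet settled
# at the time of this discussion.
CMA_SYSFS_ROOT = Path("/sys/kernel/mm/cma")


def failure_ratio(success: int, fail: int) -> float:
    """Fraction of CMA allocation attempts that failed."""
    total = success + fail
    return fail / total if total else 0.0


def read_counter(path: Path) -> int:
    """Read a single integer counter from a sysfs file."""
    return int(path.read_text().strip())


def check_heaps(threshold: float = 0.05) -> dict[str, float]:
    """Return failure ratios for any heaps exceeding `threshold`.

    This is the kind of top-level production indicator discussed above:
    a monitor flags a heap only when its failure ratio crosses the bar,
    and engineers dig further from there.
    """
    flagged = {}
    for heap in CMA_SYSFS_ROOT.iterdir():
        success = read_counter(heap / "alloc_pages_success")
        fail = read_counter(heap / "alloc_pages_fail")
        ratio = failure_ratio(success, fail)
        if ratio > threshold:
            flagged[heap.name] = ratio
    return flagged
```

A periodic job could call `check_heaps()` and report any flagged heap names, which matches the "one or two simple items" style of monitoring rather than tracking every allocation.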

It's actually easier to monitor one or two simple items than a larger
number of complicated ones. And I get the impression that this is
sort of a top-level, production software indicator.

thanks,
-- 
John Hubbard
NVIDIA

Thread overview: 31+ messages
2021-02-03 15:50 [PATCH] mm: cma: support sysfs Minchan Kim
2021-02-04  8:50 ` John Hubbard
2021-02-04 20:07   ` Minchan Kim
2021-02-04 23:14     ` John Hubbard
2021-02-04 23:43       ` Suren Baghdasaryan
2021-02-04 23:45         ` Suren Baghdasaryan
2021-02-05  0:25           ` John Hubbard
2021-02-05  0:34             ` John Hubbard
2021-02-05  1:44               ` Suren Baghdasaryan
2021-02-05  0:12       ` Minchan Kim
2021-02-05  0:24         ` John Hubbard
2021-02-05  1:44           ` Minchan Kim
2021-02-05  2:39             ` Suren Baghdasaryan
2021-02-05  2:52             ` John Hubbard [this message]
2021-02-05  5:17               ` Minchan Kim
2021-02-05  5:49                 ` John Hubbard
2021-02-05  6:24                   ` Minchan Kim
2021-02-05  6:41                     ` John Hubbard
2021-02-05 16:15                       ` Minchan Kim
2021-02-05 20:25                         ` John Hubbard
2021-02-05 21:28                           ` Minchan Kim
2021-02-05 21:52                             ` Suren Baghdasaryan
2021-02-05 21:58                               ` John Hubbard
2021-02-05 22:47                                 ` Minchan Kim
2021-02-06 17:08                                   ` Pintu Agarwal
2021-02-08  8:39                                     ` John Hubbard
2021-02-05 21:57                             ` John Hubbard
2021-02-05  2:55 ` Matthew Wilcox
2021-02-05  5:22   ` Minchan Kim
2021-02-05 12:12     ` Matthew Wilcox
2021-02-05 16:16       ` Minchan Kim
