From: Suren Baghdasaryan <surenb@google.com>
To: Minchan Kim <minchan@kernel.org>
Cc: John Hubbard <jhubbard@nvidia.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	John Dias <joaodias@google.com>,
	LKML <linux-kernel@vger.kernel.org>,
	linux-mm <linux-mm@kvack.org>
Subject: Re: [PATCH] mm: cma: support sysfs
Date: Fri, 5 Feb 2021 13:52:00 -0800
Message-ID: <CAJuCfpEaQqgsyGtzRvovpuLOELR0iRNvNF0rnih1bq0HQCTuww@mail.gmail.com>
In-Reply-To: <YB24YXMJOjwokDb5@google.com>

On Fri, Feb 5, 2021 at 1:28 PM Minchan Kim <minchan@kernel.org> wrote:
>
> On Fri, Feb 05, 2021 at 12:25:52PM -0800, John Hubbard wrote:
> > On 2/5/21 8:15 AM, Minchan Kim wrote:
> > ...
> > > > Yes, approximately. I was wondering if this would suffice at least as a baseline:
> > > >
> > > > cma_alloc_success   125
> > > > cma_alloc_failure   25
> > >
> > > IMO, regardless of my patch, it would be good to have such statistics,
> > > since CMA was born to replace carved-out memory with dynamic allocation,
> > > ideally for memory efficiency. A failure should therefore be regarded as
> > > critical, so an admin can notice how the system is being hurt.
> >
> > Right. So CMA failures are useful for the admin to see, understood.
> >
> > >
> > > Anyway, it's not enough for me and orthogonal to my goal.
> > >
> >
> > OK. But...what *is* your goal, and why is this useless (that's what
> > orthogonal really means here) for your goal?
>
> As I mentioned, the goal is to monitor failures from each CMA area,
> since each area has its own purpose.
>
> Let's have an example.
>
> The system has 5 CMA areas, and each one is associated with a
> particular user scenario. Each scenario gets an exclusive CMA area
> to avoid fragmentation problems.
>
> CMA-1 depends on bluetooth
> CMA-2 depends on WIFI
> CMA-3 depends on sensor-A
> CMA-4 depends on sensor-B
> CMA-5 depends on sensor-C
>
> With this, we could catch which module was affected, but with only a
> global failure count, we couldn't tell who was affected.
>
> >
> > Also, would you be willing to try out something simple first,
> > such as providing an indication that cma is active and its overall success
> > rate, like this:
> >
> > /proc/vmstat:
> >
> > cma_alloc_success   125
> > cma_alloc_failure   25
> >
> > ...or is the only way forward to provide the more detailed,
> > per-CMA items in a non-debugfs location?
> >
> >
> > > >
> > > > ...and then, to see if more is needed, some questions:
> > > >
> > > > a)  Do you know of an upper bound on how many cma areas there can be
> > > > (I think Matthew also asked that)?
> > >
> > > There is no upper bound since it's configurable.
> > >
> >
> > OK, thanks, so that pretty much rules out putting per-cma details into
> > anything other than a directory or something like it.
> >
> > > >
> > > > b) Is tracking the cma area really as valuable as other possibilities? We can put
> > "a few" to "several" items here, so we really want to get your very favorite bits of
> > > > information in. If, for example, there can be *lots* of cma areas, then maybe tracking
> > >
> > > At this moment, allocation/failure counts for each CMA area, since each
> > > area has its own particular use case, which makes it easy for me to tell
> > > which module will be affected. I think per-CMA statistics are very useful
> > > for such a minimal code change, so I want to enable them by default under
> > > CONFIG_CMA && CONFIG_SYSFS.
> > >
> > > > by a range of allocation sizes is better...
> > >
> > > I take your suggestion to be something like this.
> > >
> > > [alloc_range] could be an order, or a range set by some interval
> > >
> > > /sys/kernel/mm/cma/cma-A/[alloc_range]/success
> > > /sys/kernel/mm/cma/cma-A/[alloc_range]/fail
> > > ..
> > > ..
> > > /sys/kernel/mm/cma/cma-Z/[alloc_range]/success
> > > /sys/kernel/mm/cma/cma-Z/[alloc_range]/fail

The interface above actually seems the most useful to me, if by
[alloc_range] you mean the different allocation orders. It would cover
Minchan's per-CMA failure tracking and would also let us understand
what kinds of allocations are failing, and therefore whether the
problem is caused by pinning/fragmentation or by over-utilization.
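
For concreteness, here is a minimal sketch of what per-CMA, per-order
bookkeeping could look like on the kernel side. This is not the posted
patch; the struct, the MAX_CMA_ORDER bound, and the function name are
all made up for illustration:

/* Hypothetical sketch, not the posted patch: per-CMA, per-order
 * success/fail counters that the sysfs files above could expose.
 */
#include <linux/atomic.h>
#include <linux/types.h>

#define MAX_CMA_ORDER 11	/* assumption: track orders 0..10 */

struct cma_alloc_stats {
	atomic64_t success[MAX_CMA_ORDER];
	atomic64_t fail[MAX_CMA_ORDER];
};

/* Would be called from cma_alloc() with the request's order and outcome. */
static void cma_stats_record(struct cma_alloc_stats *stats,
			     unsigned int order, bool succeeded)
{
	if (order >= MAX_CMA_ORDER)
		order = MAX_CMA_ORDER - 1;	/* clamp oversized requests */
	if (succeeded)
		atomic64_inc(&stats->success[order]);
	else
		atomic64_inc(&stats->fail[order]);
}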

> >
> > Actually, I meant "ranges instead of cma areas", like this:
> >
> > /<path-to-cma-data>/[alloc_range_1]/success
> > /<path-to-cma-data>/[alloc_range_1]/fail
> > /<path-to-cma-data>/[alloc_range_2]/success
> > /<path-to-cma-data>/[alloc_range_2]/fail
> > ...
> > /<path-to-cma-data>/[alloc_range_max]/success
> > /<path-to-cma-data>/[alloc_range_max]/fail
> >
> > The idea is that knowing the allocation sizes that succeeded
> > and failed is maybe even more interesting and useful than
> > knowing the cma area that contains them.
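
(For reference, a single counter file in either layout boils down to
the same few lines. A sketch, where sysfs_emit() and __ATTR_RO() are
the real kernel helpers but the counter and attribute names are
invented here; registration via sysfs_create_file() is omitted:)

#include <linux/atomic.h>
#include <linux/kobject.h>
#include <linux/sysfs.h>

static atomic64_t range1_success;	/* hypothetical per-range counter */

/* Read handler: "cat .../success" prints the current count. */
static ssize_t success_show(struct kobject *kobj,
			    struct kobj_attribute *attr, char *buf)
{
	return sysfs_emit(buf, "%llu\n",
			  (unsigned long long)atomic64_read(&range1_success));
}
static struct kobj_attribute success_attr = __ATTR_RO(success);
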
>
> I understand your point, but it would make it hard to find who was
> affected by the failure. That's why I suggested putting your
> suggestion under an additional config, since a per-CMA metric with
> simple success/failure counts is enough.
>
> >
> > >
> > > I agree it would also be useful, but I'd like to enable it under
> > > CONFIG_CMA_SYSFS_ALLOC_RANGE as a separate patchset.
> > >
> >
> > I will stop harassing you very soon, just want to bottom out on
> > understanding the real goals first. :)
> >
>
> I hope my example makes the goal more clear for you.
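
Whichever layout wins, the directory skeleton under /sys/kernel/mm/ is
cheap to set up. A rough sketch, assuming one kobject per CMA area:
mm_kobj, kobject_create_and_add(), cma_for_each_area() and
cma_get_name() are existing kernel APIs, while the function names and
structure here are illustrative only:

#include <linux/cma.h>
#include <linux/init.h>
#include <linux/kobject.h>
#include <linux/mm.h>

static struct kobject *cma_root;

/* Per-area directory; the success/fail attribute files shown earlier
 * would be added to each one here as well.
 */
static int cma_sysfs_add_one(struct cma *cma, void *data)
{
	if (!kobject_create_and_add(cma_get_name(cma), cma_root))
		return -ENOMEM;
	return 0;
}

/* Create /sys/kernel/mm/cma/ plus one directory per CMA area. */
static int __init cma_sysfs_init(void)
{
	cma_root = kobject_create_and_add("cma", mm_kobj);
	if (!cma_root)
		return -ENOMEM;
	return cma_for_each_area(cma_sysfs_add_one, NULL);
}
subsys_initcall(cma_sysfs_init);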


Thread overview: 31+ messages
2021-02-03 15:50 [PATCH] mm: cma: support sysfs Minchan Kim
2021-02-04  8:50 ` John Hubbard
2021-02-04 20:07   ` Minchan Kim
2021-02-04 23:14     ` John Hubbard
2021-02-04 23:43       ` Suren Baghdasaryan
2021-02-04 23:45         ` Suren Baghdasaryan
2021-02-05  0:25           ` John Hubbard
2021-02-05  0:34             ` John Hubbard
2021-02-05  1:44               ` Suren Baghdasaryan
2021-02-05  0:12       ` Minchan Kim
2021-02-05  0:24         ` John Hubbard
2021-02-05  1:44           ` Minchan Kim
2021-02-05  2:39             ` Suren Baghdasaryan
2021-02-05  2:52             ` John Hubbard
2021-02-05  5:17               ` Minchan Kim
2021-02-05  5:49                 ` John Hubbard
2021-02-05  6:24                   ` Minchan Kim
2021-02-05  6:41                     ` John Hubbard
2021-02-05 16:15                       ` Minchan Kim
2021-02-05 20:25                         ` John Hubbard
2021-02-05 21:28                           ` Minchan Kim
2021-02-05 21:52                             ` Suren Baghdasaryan [this message]
2021-02-05 21:58                               ` John Hubbard
2021-02-05 22:47                                 ` Minchan Kim
2021-02-06 17:08                                   ` Pintu Agarwal
2021-02-08  8:39                                     ` John Hubbard
2021-02-05 21:57                             ` John Hubbard
2021-02-05  2:55 ` Matthew Wilcox
2021-02-05  5:22   ` Minchan Kim
2021-02-05 12:12     ` Matthew Wilcox
2021-02-05 16:16       ` Minchan Kim
