linux-mm.kvack.org archive mirror
From: Yosry Ahmed <yosryahmed@google.com>
To: Shakeel Butt <shakeelb@google.com>
Cc: "Andrew Morton" <akpm@linux-foundation.org>,
	"Johannes Weiner" <hannes@cmpxchg.org>,
	"Michal Hocko" <mhocko@kernel.org>,
	"Roman Gushchin" <roman.gushchin@linux.dev>,
	"Muchun Song" <muchun.song@linux.dev>,
	"Ivan Babrou" <ivan@cloudflare.com>, "Tejun Heo" <tj@kernel.org>,
	"Michal Koutný" <mkoutny@suse.com>,
	"Waiman Long" <longman@redhat.com>,
	kernel-team@cloudflare.com, "Wei Xu" <weixugc@google.com>,
	"Greg Thelen" <gthelen@google.com>,
	linux-mm@kvack.org, cgroups@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 3/5] mm: memcg: make stats flushing threshold per-memcg
Date: Thu, 12 Oct 2023 14:19:55 -0700
Message-ID: <CAJD7tkY9LrWHX3rjYwNnVK9sjtYPJyx6j_Y3DexTXfS9wwr+xA@mail.gmail.com>
In-Reply-To: <CALvZod5fWDWZDa=WoyOyckvx5ptjmFBMO9sOG0Sk0MgiDX4DSQ@mail.gmail.com>

On Thu, Oct 12, 2023 at 2:16 PM Shakeel Butt <shakeelb@google.com> wrote:
>
> On Thu, Oct 12, 2023 at 2:06 PM Yosry Ahmed <yosryahmed@google.com> wrote:
> >
> > [..]
> > > > > >
> > > > > > Using next-20231009 and a similar 44-core machine with hyperthreading
> > > > > > disabled, I ran 22 instances of netperf in parallel and got the
> > > > > > following numbers from averaging 20 runs:
> > > > > >
> > > > > > Base: 33076.5 mbps
> > > > > > Patched: 31410.1 mbps
> > > > > >
> > > > > > That's about a 5% difference. I guess the larger number of iterations
> > > > > > helps reduce the noise? I am not sure.
> > > > > >
> > > > > > Please also keep in mind that in this case all netperf instances are
> > > > > > in the same cgroup and at a 4-level depth. I imagine in a practical
> > > > > > setup processes would be spread out a little more, which means fewer
> > > > > > common ancestors and hence less contention on the atomic operations.
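
To make the "common ancestors" point above concrete, here is a purely
illustrative Python sketch (a toy model, not the kernel code): stats
updates are accounted against every ancestor of the updating cgroup, so
workloads that share more of their ancestry contend on more of the same
per-ancestor counters. The cgroup paths below are hypothetical.

def ancestors(path):
    """Return a cgroup path plus all of its ancestors."""
    parts = path.strip("/").split("/")
    return ["/" + "/".join(parts[:i + 1]) for i in range(len(parts))]

# All netperf instances in one level-4 cgroup: every update walks the
# same 4-entry chain, so those counters are maximally contended.
print(set(ancestors("a/b/c/d")))

# Two workloads spread across sibling level-4 cgroups share only 2
# entries of their chains, so fewer counters are updated by both.
print(set(ancestors("a/b/c1/d1")) & set(ancestors("a/b/c2/d2")))
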
> > > > >
> > > > >
> > > > > (Resending the reply as I messed up the last one; it was not in plain text.)
> > > > >
> > > > > I was curious, so I ran the same testing in a cgroup 2 levels deep
> > > > > (i.e., /sys/fs/cgroup/a/b), which is a much more common setup in my
> > > > > experience. Here are the numbers:
> > > > >
> > > > > Base: 40198.0 mbps
> > > > > Patched: 38629.7 mbps
> > > > >
> > > > > The regression is reduced to ~3.9%.
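
For what it's worth, a two-level setup like the one above can be
reproduced with something along the lines of the sketch below. It is a
minimal sketch, assuming cgroup v2 mounted at /sys/fs/cgroup, a machine
where the top of the tree can be managed directly (no systemd in the
way), and root privileges; the helper and paths are mine, not part of
the series.

import os

ROOT = "/sys/fs/cgroup"

def write(path, value):
    with open(path, "w") as f:
        f.write(value)

# Create /sys/fs/cgroup/a/b and enable the memory controller for the
# children at both levels so memcg stats are accounted there.
os.makedirs(f"{ROOT}/a/b", exist_ok=True)
write(f"{ROOT}/cgroup.subtree_control", "+memory")
write(f"{ROOT}/a/cgroup.subtree_control", "+memory")

# Move this process into a/b; anything it forks/execs (e.g. netperf)
# stays in a/b as well.
write(f"{ROOT}/a/b/cgroup.procs", str(os.getpid()))
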
> > > > >
> > > > > What's more interesting is that going from a level 2 cgroup to a level
> > > > > 4 cgroup is already a big hit with or without this patch:
> > > > >
> > > > > Base: 40198.0 -> 33076.5 mbps (~17.7% regression)
> > > > > Patched: 38629.7 -> 31410.1 mbps (~18.7% regression)
> > > > >
> > > > > So going from level 2 to 4 is already a significant regression for
> > > > > other reasons (e.g. hierarchical charging). This patch only makes it
> > > > > marginally worse. This puts the numbers more into perspective imo than
> > > > > comparing values at level 4. What do you think?
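
(Spelling out the arithmetic behind the percentages above, with the
quoted 20-run averages plugged into a small Python helper, in case
anyone wants to sanity-check the numbers:)

def regression(base, patched):
    return (base - patched) / base * 100

# Patch overhead at each nesting level (throughputs in mbps).
print(f"level 4: {regression(33076.5, 31410.1):.1f}%")        # ~5.0%
print(f"level 2: {regression(40198.0, 38629.7):.1f}%")        # ~3.9%

# Cost of going from a level-2 to a level-4 cgroup, patch aside.
print(f"base    2->4: {regression(40198.0, 33076.5):.1f}%")   # ~17.7%
print(f"patched 2->4: {regression(38629.7, 31410.1):.1f}%")   # ~18.7%
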
> > > >
> > > > This is weird, as we are running the experiments on the same machine. I
> > > > will rerun with 2 levels as well. Also, can you rerun the page fault
> > > > benchmark, which was showing a 9% regression in your original
> > > > commit message?
> > >
> > > Thanks. I will re-run the page_fault tests, but keep in mind that the
> > > page fault benchmarks in will-it-scale are highly variable. We run
> > > them between kernel versions internally, and I think we ignore any
> > > changes below 10% as the benchmark is naturally noisy.
> > >
> > > I have a couple of runs for page_fault3_scalability showing a 2-3%
> > > improvement with this patch :)
> >
> > I ran the page_fault tests 10 times on a machine with 256 cpus in a
> > level 2 cgroup; here are the results (the results in the original
> > commit message are for 384 cpus in a level 4 cgroup):
> >
> >                LABEL            |     MEAN    |   MEDIAN    |   STDDEV   |
> > ------------------------------+-------------+-------------+-------------
> >   page_fault1_per_process_ops |             |             |            |
> >   (A) base                    | 270249.164  | 265437.000  | 13451.836  |
> >   (B) patched                 | 261368.709  | 255725.000  | 13394.767  |
> >                               | -3.29%      | -3.66%      |            |
> >   page_fault1_per_thread_ops  |             |             |            |
> >   (A) base                    | 242111.345  | 239737.000  | 10026.031  |
> >   (B) patched                 | 237057.109  | 235305.000  | 9769.687   |
> >                               | -2.09%      | -1.85%      |            |
> >   page_fault1_scalability     |             |             |            |
> >   (A) base                    | 0.034387    | 0.035168    | 0.0018283  |
> >   (B) patched                 | 0.033988    | 0.034573    | 0.0018056  |
> >                               | -1.16%      | -1.69%      |            |
> >   page_fault2_per_process_ops |             |             |            |
> >   (A) base                    | 203561.836  | 203301.000  | 2550.764   |
> >   (B) patched                 | 197195.945  | 197746.000  | 2264.263   |
> >                               | -3.13%      | -2.73%      |            |
> >   page_fault2_per_thread_ops  |             |             |            |
> >   (A) base                    | 171046.473  | 170776.000  | 1509.679   |
> >   (B) patched                 | 166626.327  | 166406.000  | 768.753    |
> >                               | -2.58%      | -2.56%      |            |
> >   page_fault2_scalability     |             |             |            |
> >   (A) base                    | 0.054026    | 0.053821    | 0.00062121 |
> >   (B) patched                 | 0.053329    | 0.05306     | 0.00048394 |
> >                               | -1.29%      | -1.41%      |            |
> >   page_fault3_per_process_ops |             |             |            |
> >   (A) base                    | 1295807.782 | 1297550.000 | 5907.585   |
> >   (B) patched                 | 1275579.873 | 1273359.000 | 8759.160   |
> >                               | -1.56%      | -1.86%      |            |
> >   page_fault3_per_thread_ops  |             |             |            |
> >   (A) base                    | 391234.164  | 390860.000  | 1760.720   |
> >   (B) patched                 | 377231.273  | 376369.000  | 1874.971   |
> >                               | -3.58%      | -3.71%      |            |
> >   page_fault3_scalability     |             |             |            |
> >   (A) base                    | 0.60369     | 0.60072     | 0.0083029  |
> >   (B) patched                 | 0.61733     | 0.61544     | 0.009855   |
> >                               | +2.26%      | +2.45%      |            |
> >
> > The numbers are much better. I can modify the commit log to include
> > the test results from these replies instead of what's currently there,
> > if that helps (22 netperf instances on 44 cpus and will-it-scale
> > page_fault on 256 cpus -- all in a level 2 cgroup).
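
For reference, the MEAN/MEDIAN/STDDEV columns and the percentage deltas
in the table are straightforward to derive with Python's statistics
module; the per-run values below are placeholders, not the actual raw
samples from the 10 runs.

import statistics

def delta(base, patched):
    return (patched - base) / base * 100

# Placeholder per-run ops/sec for one metric; the real test used 10 runs.
base_runs    = [270000, 268000, 265000, 263000, 272000,
                269000, 271000, 266000, 264000, 274000]
patched_runs = [261000, 259000, 256000, 263000, 262000,
                258000, 260000, 264000, 257000, 262000]

for name, runs in (("base", base_runs), ("patched", patched_runs)):
    print(name, statistics.mean(runs), statistics.median(runs),
          statistics.stdev(runs))

print(f"mean delta:   {delta(statistics.mean(base_runs), statistics.mean(patched_runs)):+.2f}%")
print(f"median delta: {delta(statistics.median(base_runs), statistics.median(patched_runs)):+.2f}%")
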
>
> Yes, this looks better. I think we should also ask the Intel perf and
> Phoronix folks to run their benchmarks (but no need to block on them).

Is there anything I need to do for this to happen? (I thought such
testing is already done on linux-next.)

Also, any further comments on the patch (or the series in general)? If
not, I can send a new commit message for this patch in-place.


Thread overview: 47+ messages
2023-10-10  3:21 [PATCH v2 0/5] mm: memcg: subtree stats flushing and thresholds Yosry Ahmed
2023-10-10  3:21 ` [PATCH v2 1/5] mm: memcg: change flush_next_time to flush_last_time Yosry Ahmed
2023-10-10  3:21 ` [PATCH v2 2/5] mm: memcg: move vmstats structs definition above flushing code Yosry Ahmed
2023-10-10  3:21 ` [PATCH v2 3/5] mm: memcg: make stats flushing threshold per-memcg Yosry Ahmed
2023-10-10  3:24   ` Yosry Ahmed
2023-10-10 20:45   ` Shakeel Butt
2023-10-10 21:02     ` Yosry Ahmed
2023-10-10 22:21       ` Yosry Ahmed
2023-10-11  0:36         ` Shakeel Butt
2023-10-11  1:48           ` Yosry Ahmed
2023-10-11 12:45             ` Shakeel Butt
2023-10-12  3:13               ` Yosry Ahmed
2023-10-12  8:01                 ` Yosry Ahmed
2023-10-12  8:04                 ` Yosry Ahmed
2023-10-12 13:29                   ` Johannes Weiner
2023-10-12 23:28                     ` Yosry Ahmed
2023-10-13  2:33                       ` Johannes Weiner
2023-10-13  2:38                         ` Yosry Ahmed
2023-10-12 13:35                   ` Shakeel Butt
2023-10-12 15:10                     ` Yosry Ahmed
2023-10-12 21:05                       ` Yosry Ahmed
2023-10-12 21:16                         ` Shakeel Butt
2023-10-12 21:19                           ` Yosry Ahmed [this message]
2023-10-12 21:38                             ` Shakeel Butt
2023-10-12 22:23                               ` Yosry Ahmed
2023-10-14 23:08                                 ` Andrew Morton
2023-10-16 18:42                                   ` Yosry Ahmed
2023-10-17 23:52                                   ` Yosry Ahmed
2023-10-18  8:22                                 ` Oliver Sang
2023-10-18  8:54                                   ` Yosry Ahmed
2023-10-20 16:17   ` kernel test robot
2023-10-20 17:23     ` Shakeel Butt
2023-10-20 17:42       ` Yosry Ahmed
2023-10-23  1:25         ` Feng Tang
2023-10-23 18:25           ` Yosry Ahmed
2023-10-24  2:13             ` Yosry Ahmed
2023-10-24  6:56               ` Oliver Sang
2023-10-24  7:14                 ` Yosry Ahmed
2023-10-25  6:09                   ` Oliver Sang
2023-10-25  6:22                     ` Yosry Ahmed
2023-10-25 17:06                       ` Shakeel Butt
2023-10-25 18:36                         ` Yosry Ahmed
2023-10-10  3:21 ` [PATCH v2 4/5] mm: workingset: move the stats flush into workingset_test_recent() Yosry Ahmed
2023-10-10  3:21 ` [PATCH v2 5/5] mm: memcg: restore subtree stats flushing Yosry Ahmed
2023-10-10 16:48 ` [PATCH v2 0/5] mm: memcg: subtree stats flushing and thresholds domenico cerasuolo
2023-10-10 19:01   ` Yosry Ahmed
2023-10-18 21:12 ` Andrew Morton
