From: Johannes Weiner <hannes@cmpxchg.org>
To: Tejun Heo <tj@kernel.org>
Cc: Odin Ugedal <odin@uged.al>,
corbet@lwn.net, cgroups@vger.kernel.org,
linux-doc@vger.kernel.org, Michal Hocko <mhocko@kernel.org>,
Vladimir Davydov <vdavydov.dev@gmail.com>
Subject: Re: [RFC] docs/admin-guide/cgroup-v2: Add hugetlb rsvd files
Date: Mon, 19 Apr 2021 10:51:32 -0400 [thread overview]
Message-ID: <YH2Y9FucBW2GLLLQ@cmpxchg.org> (raw)
In-Reply-To: <YHn3cifQv1FUOqfU@slm.duckdns.org>
On Fri, Apr 16, 2021 at 04:45:38PM -0400, Tejun Heo wrote:
> (cc'ing memcg maintainers)
>
> On Fri, Apr 16, 2021 at 04:11:46PM +0200, Odin Ugedal wrote:
> > Add missing docs about reservation accounting for hugetlb in cgroup v2.
> >
> > Signed-off-by: Odin Ugedal <odin@uged.al>
> > ---
> > RFC: This is linking from cgroup-v1 docs, and that is probably not
> > optimal. The information about the difference between reservation
> > accounting and page fault accounting is pretty hard to make short.
> >
> > I think we have four ways to do it, but I don't know what is
> > most optimal:
> >
> > - Link from cgroup-v2 to cgroup-v1 (this patch)
> > - Have a separate description for both v1 and v2
> > - Move description from cgroup-v1 to cgroup-v2, and link from v1 to
> > v2.
>
> This would be my preference but I don't really mind the other way around
> that much.
v1/hugetlb.rst is quite verbose, and some things are implementation
details. I'm not sure we want all that in the cgroup2 documentation.
My preference would be to first try to write a version of the doc in
cgroup2's briefer style, and then, depending on how that works out,
see whether we can delete (replace with link) the cgroup1 text, or
keep it for archiving purposes.
v1/hugetlb doc items that seem unnecessary to keep in v2:
- how to mount the cgroupfs, create cgroups, and move tasks into it
- the page fault accounting description could be compressed a
bit. maybe drop the part about it being the admin's job to avoid
sigbus by being careful with the allocations. that's obvious imo
when you simply describe the sigbus semantics.
- likewise, reservation accounting can be briefer too. there is quite
a bit of opinion in there that could probably be cut short. maybe a
one-liner that says "mmap-time accounting gives userspace easier
error handling - if in doubt, use reservation accounting" or so.
- caveats with shared memory: not sure this is needed at all, but if
  so, it can be a one-liner saying "hugetlb uses the same first-hit
  semantics as the memory controller (see Memory Ownership)"
- Caveats with HugeTLB cgroup offline: this is an implementation
detail that i don't think is actionable information for users
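To make the semantic difference between the two accounting modes concrete, a brief sketch like the following could sit next to the compressed descriptions. This is illustrative only: the cgroup name "grp" and the 2MB huge page size are assumptions, though the hugetlb.2MB.max and hugetlb.2MB.rsvd.* file names match the rsvd files this patch documents.

```shell
# Page-fault accounting: usage is charged when huge pages are faulted in.
# Limit the group to 4 x 2MB huge pages ("grp" and the page size are
# illustrative):
echo $((4 * 2097152)) > /sys/fs/cgroup/grp/hugetlb.2MB.max
# An mmap() larger than the limit can still succeed here; the task
# receives SIGBUS when it faults in a page beyond the limit, so the
# application must handle a mid-run signal rather than a syscall error.

# Reservation accounting: the charge happens at mmap()/reservation time.
echo $((4 * 2097152)) > /sys/fs/cgroup/grp/hugetlb.2MB.rsvd.max
# An over-limit mmap() now fails with ENOMEM up front, which userspace
# can handle like any other allocation failure.
cat /sys/fs/cgroup/grp/hugetlb.2MB.rsvd.current
```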
Thread overview: 4+ messages
2021-04-16 14:11 [RFC] docs/admin-guide/cgroup-v2: Add hugetlb rsvd files Odin Ugedal
2021-04-16 20:45 ` Tejun Heo
2021-04-19 14:51 ` Johannes Weiner [this message]
2021-04-25 9:22 ` Odin Ugedal