From: Mike Kravetz <mike.kravetz@oracle.com>
To: Mina Almasry <almasrymina@google.com>
Cc: shuah@kernel.org, rientjes@google.com, shakeelb@google.com,
gthelen@google.com, akpm@linux-foundation.org,
khalid.aziz@oracle.com, linux-kernel@vger.kernel.org,
linux-mm@kvack.org, linux-kselftest@vger.kernel.org
Subject: Re: [RFC PATCH v2 4/5] hugetlb_cgroup: Add accounting for shared mappings
Date: Tue, 13 Aug 2019 16:54:33 -0700
Message-ID: <47cfc50d-bea3-0247-247e-888d2942f134@oracle.com>
In-Reply-To: <20190808231340.53601-5-almasrymina@google.com>
On 8/8/19 4:13 PM, Mina Almasry wrote:
> For shared mappings, the pointer to the hugetlb_cgroup to uncharge lives
> in the resv_map entries, in file_region->reservation_counter.
>
> When a file_region entry is added to the resv_map via region_add, we
> also charge the appropriate hugetlb_cgroup and put the pointer to that
> in file_region->reservation_counter. This is slightly delicate since we
> need to not modify the resv_map until we know that charging the
> reservation has succeeded. If charging doesn't succeed, we report the
> error to the caller, so that the kernel fails the reservation.
I wish we did not need to modify these region_() routines, as they are
already difficult to understand. However, I see no other way to get the
desired semantics.
> On region_del, which is when the hugetlb memory is unreserved, we delete
> the file_region entry in the resv_map, but also uncharge the
> file_region->reservation_counter.
>
> ---
> mm/hugetlb.c | 208 +++++++++++++++++++++++++++++++++++++++++----------
> 1 file changed, 170 insertions(+), 38 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 235996aef6618..d76e3137110ab 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -242,8 +242,72 @@ struct file_region {
> struct list_head link;
> long from;
> long to;
> +#ifdef CONFIG_CGROUP_HUGETLB
> + /*
> + * On shared mappings, each reserved region appears as a struct
> + * file_region in resv_map. These fields hold the info needed to
> + * uncharge each reservation.
> + */
> + struct page_counter *reservation_counter;
> + unsigned long pages_per_hpage;
> +#endif
> };
>
> +/* Must be called with resv->lock held. Calling this with dry_run == true will
> + * count the number of pages added but will not modify the linked list.
> + */
> +static long consume_regions_we_overlap_with(struct file_region *rg,
> + struct list_head *head, long f, long *t,
> + struct hugetlb_cgroup *h_cg,
> + struct hstate *h,
> + bool dry_run)
> +{
> + long add = 0;
> + struct file_region *trg = NULL, *nrg = NULL;
> +
> + /* Consume any regions we now overlap with. */
> + nrg = rg;
> + list_for_each_entry_safe(rg, trg, rg->link.prev, link) {
> + if (&rg->link == head)
> + break;
> + if (rg->from > *t)
> + break;
> +
> + /* If this area reaches higher then extend our area to
> + * include it completely. If this is not the first area
> + * which we intend to reuse, free it.
> + */
> + if (rg->to > *t)
> + *t = rg->to;
> + if (rg != nrg) {
> + /* Decrement return value by the deleted range.
> + * Another range will span this area so that by
> + * end of routine add will be >= zero
> + */
> + add -= (rg->to - rg->from);
> + if (!dry_run) {
> + list_del(&rg->link);
> + kfree(rg);
Is it possible that the region struct we are deleting pointed to
a reservation_counter, perhaps even one for another cgroup?
Given the way regions are coalesced, I am concerned that we may be
deleting counters.
--
Mike Kravetz