From: Wei Yang <richard.weiyang@linux.alibaba.com>
To: mike.kravetz@oracle.com, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Wei Yang <richard.weiyang@linux.alibaba.com>
Subject: [PATCH 04/10] mm/hugetlb: count file_region to be added when regions_needed != NULL
Date: Fri,  7 Aug 2020 17:12:45 +0800	[thread overview]
Message-ID: <20200807091251.12129-5-richard.weiyang@linux.alibaba.com> (raw)
In-Reply-To: <20200807091251.12129-1-richard.weiyang@linux.alibaba.com>

Function add_reservation_in_range() is used in only two ways:

    * count the file_regions to be added and return the number in regions_needed
    * do the real list operation without counting

This means it is not necessary to have two parameters, regions_needed and
count_only, to distinguish these two cases.

Just use regions_needed to separate them.
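
For illustration, the two call sites reduce to the following after this
change (a simplified sketch; the argument names follow the hunks below):

    /* counting pass: regions_needed != NULL, the list is not modified */
    chg = add_reservation_in_range(resv, f, t, NULL, NULL, out_regions_needed);

    /* modifying pass: regions_needed == NULL, file_regions are really added */
    add = add_reservation_in_range(resv, f, t, h_cg, h, NULL);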

Signed-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>
---
 mm/hugetlb.c | 33 +++++++++++++++++----------------
 1 file changed, 17 insertions(+), 16 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 929256c130f9..d775e514eb2e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -321,16 +321,17 @@ static void coalesce_file_region(struct resv_map *resv, struct file_region *rg)
 	}
 }
 
-/* Must be called with resv->lock held. Calling this with count_only == true
- * will count the number of pages to be added but will not modify the linked
- * list. If regions_needed != NULL and count_only == true, then regions_needed
- * will indicate the number of file_regions needed in the cache to carry out to
- * add the regions for this range.
+/*
+ * Must be called with resv->lock held.
+ *
+ * Calling this with regions_needed != NULL will count the number of pages
+ * to be added but will not modify the linked list. In this case,
+ * regions_needed will indicate the number of file_regions needed in the
+ * cache to carry out the addition of regions for this range.
  */
 static long add_reservation_in_range(struct resv_map *resv, long f, long t,
 				     struct hugetlb_cgroup *h_cg,
-				     struct hstate *h, long *regions_needed,
-				     bool count_only)
+				     struct hstate *h, long *regions_needed)
 {
 	long add = 0;
 	struct list_head *head = &resv->regions;
@@ -366,14 +367,14 @@ static long add_reservation_in_range(struct resv_map *resv, long f, long t,
 		 */
 		if (rg->from > last_accounted_offset) {
 			add += rg->from - last_accounted_offset;
-			if (!count_only) {
+			if (!regions_needed) {
 				nrg = get_file_region_entry_from_cache(
 					resv, last_accounted_offset, rg->from);
 				record_hugetlb_cgroup_uncharge_info(h_cg, h,
 								    resv, nrg);
 				list_add(&nrg->link, rg->link.prev);
 				coalesce_file_region(resv, nrg);
-			} else if (regions_needed)
+			} else
 				*regions_needed += 1;
 		}
 
@@ -385,13 +386,13 @@ static long add_reservation_in_range(struct resv_map *resv, long f, long t,
 	 */
 	if (last_accounted_offset < t) {
 		add += t - last_accounted_offset;
-		if (!count_only) {
+		if (!regions_needed) {
 			nrg = get_file_region_entry_from_cache(
 				resv, last_accounted_offset, t);
 			record_hugetlb_cgroup_uncharge_info(h_cg, h, resv, nrg);
 			list_add(&nrg->link, rg->link.prev);
 			coalesce_file_region(resv, nrg);
-		} else if (regions_needed)
+		} else
 			*regions_needed += 1;
 	}
 
@@ -484,8 +485,8 @@ static long region_add(struct resv_map *resv, long f, long t,
 retry:
 
 	/* Count how many regions are actually needed to execute this add. */
-	add_reservation_in_range(resv, f, t, NULL, NULL, &actual_regions_needed,
-				 true);
+	add_reservation_in_range(resv, f, t, NULL, NULL,
+				 &actual_regions_needed);
 
 	/*
 	 * Check for sufficient descriptors in the cache to accommodate
@@ -513,7 +514,7 @@ static long region_add(struct resv_map *resv, long f, long t,
 		goto retry;
 	}
 
-	add = add_reservation_in_range(resv, f, t, h_cg, h, NULL, false);
+	add = add_reservation_in_range(resv, f, t, h_cg, h, NULL);
 
 	resv->adds_in_progress -= in_regions_needed;
 
@@ -549,9 +550,9 @@ static long region_chg(struct resv_map *resv, long f, long t,
 
 	spin_lock(&resv->lock);
 
-	/* Count how many hugepages in this range are NOT respresented. */
+	/* Count how many hugepages in this range are NOT represented. */
 	chg = add_reservation_in_range(resv, f, t, NULL, NULL,
-				       out_regions_needed, true);
+				       out_regions_needed);
 
 	if (*out_regions_needed == 0)
 		*out_regions_needed = 1;
-- 
2.20.1 (Apple Git-117)



Thread overview: 45+ messages
2020-08-07  9:12 [PATCH 00/10] mm/hugetlb: code refine and simplification Wei Yang
2020-08-07  9:12 ` [PATCH 01/10] mm/hugetlb: not necessary to coalesce regions recursively Wei Yang
2020-08-07 12:47   ` Baoquan He
2020-08-10 20:22   ` Mike Kravetz
2020-08-07  9:12 ` [PATCH 02/10] mm/hugetlb: make sure to get NULL when list is empty Wei Yang
2020-08-07 12:49   ` Baoquan He
2020-08-07 14:28     ` Wei Yang
2020-08-10  0:57       ` Baoquan He
2020-08-10 20:28       ` Mike Kravetz
2020-08-10 23:05         ` Wei Yang
2020-08-07  9:12 ` [PATCH 03/10] mm/hugetlb: use list_splice to merge two list at once Wei Yang
2020-08-07 12:53   ` Baoquan He
2020-08-10 21:07   ` Mike Kravetz
2020-08-07  9:12 ` Wei Yang [this message]
2020-08-07 12:54   ` [PATCH 04/10] mm/hugetlb: count file_region to be added when regions_needed != NULL Baoquan He
2020-08-10 21:46   ` Mike Kravetz
2020-08-07  9:12 ` [PATCH 05/10] mm/hugetlb: remove the redundant check on non_swap_entry() Wei Yang
2020-08-07 12:55   ` Baoquan He
2020-08-07 14:28     ` Wei Yang
2020-08-07  9:12 ` [PATCH 06/10] mm/hugetlb: remove redundant huge_pte_alloc() in hugetlb_fault() Wei Yang
2020-08-07 12:59   ` Baoquan He
2020-08-10 22:00   ` Mike Kravetz
2020-08-07  9:12 ` [PATCH 07/10] mm/hugetlb: a page from buddy is not on any list Wei Yang
2020-08-07 13:06   ` Baoquan He
2020-08-10 22:25   ` Mike Kravetz
2020-08-07  9:12 ` [PATCH 08/10] mm/hugetlb: return non-isolated page in the loop instead of break and check Wei Yang
2020-08-07 13:09   ` Baoquan He
2020-08-07 14:32     ` Wei Yang
2020-08-10 22:55   ` Mike Kravetz
2020-08-07  9:12 ` [PATCH 09/10] mm/hugetlb: narrow the hugetlb_lock protection area during preparing huge page Wei Yang
2020-08-07 13:12   ` Baoquan He
2020-08-10 23:02   ` Mike Kravetz
2020-08-07  9:12 ` [PATCH 10/10] mm/hugetlb: not necessary to abuse temporary page to workaround the nasty free_huge_page Wei Yang
2020-08-10  2:17   ` Baoquan He
2020-08-11  0:19     ` Mike Kravetz
2020-08-11  1:51       ` Baoquan He
2020-08-11  6:54         ` Michal Hocko
2020-08-11 21:43           ` Mike Kravetz
2020-08-11 23:19             ` Wei Yang
2020-08-11 23:25               ` Mike Kravetz
2020-08-12  5:40             ` Baoquan He
2020-08-13 11:46             ` Michal Hocko
2020-08-17  3:04               ` Wei Yang
2020-08-11 23:55           ` Baoquan He
2020-08-07 22:25 ` [PATCH 00/10] mm/hugetlb: code refine and simplification Mike Kravetz
