From: Andrew Morton <akpm@linux-foundation.org>
To: akpm@linux-foundation.org, almasrymina@google.com,
	gthelen@google.com, linux-mm@kvack.org,
	miguel.ojeda.sandonis@gmail.com, mike.kravetz@oracle.com,
	mm-commits@vger.kernel.org, rientjes@google.com,
	sandipan@linux.ibm.com, shakeelb@google.com, shuah@kernel.org,
	torvalds@linux-foundation.org
Subject: [patch 145/155] hugetlb: disable region_add file_region coalescing
Date: Wed, 01 Apr 2020 21:11:25 -0700
Message-ID: <20200402041125.IGYRnb7n6%akpm@linux-foundation.org>
In-Reply-To: <20200401210155.09e3b9742e1c6e732f5a7250@linux-foundation.org>

From: Mina Almasry <almasrymina@google.com>
Subject: hugetlb: disable region_add file_region coalescing

A follow-up patch in this series adds hugetlb cgroup uncharge info to the
file_region entries in resv->regions.  The cgroup uncharge info may differ
between regions, so they can no longer be coalesced at region_add time.
This patch therefore disables region coalescing in region_add.

Behavior change:

Say a resv_map exists like this [0->1], [2->3], and [5->6].

Then a region_chg/add call comes in region_chg/add(f=0, t=5).

Old code would generate resv->regions: [0->6] (the existing regions and
the gaps between them all coalesced into one region).
New code would generate resv->regions: [0->1], [1->2], [2->3], [3->5],
[5->6].
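
Conceptually, the new add_reservation_in_range() walks the existing
regions and emits one file_region per uncovered gap instead of expanding
an existing region.  A condensed sketch of that walk (a simplification of
the diff below: locking, the region cache, and the count_only mode are
omitted, and insert_region() is a hypothetical stand-in for pulling an
entry from the cache):

	long add = 0, last_accounted_offset = f;
	struct file_region *rg;

	list_for_each_entry(rg, head, link) {
		/* Skip regions that begin before our range. */
		if (rg->from < f) {
			if (rg->to > last_accounted_offset)
				last_accounted_offset = rg->to;
			continue;
		}
		/* A region starting beyond our range ends the walk. */
		if (rg->from > t)
			break;
		/* Emit an entry for [last_accounted_offset, rg->from). */
		if (rg->from > last_accounted_offset) {
			add += rg->from - last_accounted_offset;
			insert_region(last_accounted_offset, rg->from);
		}
		last_accounted_offset = rg->to;
	}
	/* Cover any tail of [f, t) past the last existing region. */
	if (last_accounted_offset < t) {
		add += t - last_accounted_offset;
		insert_region(last_accounted_offset, t);
	}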

Special care needs to be taken to handle the resv->adds_in_progress
variable correctly.  In the past, only 1 region would be added for every
region_chg and region_add call.  But now, each call may add multiple
regions, so we can no longer increment adds_in_progress by 1 in
region_chg, or decrement adds_in_progress by 1 after region_add or
region_abort.  Instead, region_chg calls add_reservation_in_range() to
count the number of regions needed, allocates them, and passes that count
to region_add and region_abort so they can decrement adds_in_progress
correctly.

This also drops the assumption that region_add after region_chg never
fails.  region_chg now pre-allocates at least 1 region for region_add; if
region_add needs more regions than region_chg has allocated for it, it
may fail.
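
A condensed sketch of the resulting calling convention, distilled from
the hugetlb_reserve_pages() changes below (subpool and cgroup accounting
omitted):

	long chg, add, regions_needed = 0;

	chg = region_chg(resv, f, t, &regions_needed);
	if (chg < 0)
		return chg;	/* cache entries could not be allocated */

	/* ... charge and reserve chg pages for the range ... */

	add = region_add(resv, f, t, regions_needed);
	if (add < 0) {
		/* region_add needed more entries than region_chg
		 * pre-allocated and could not allocate them; undo the
		 * in-progress accounting.
		 */
		region_abort(resv, f, t, regions_needed);
		return -ENOMEM;
	}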

[almasrymina@google.com: fix file_region entry allocations]
  Link: http://lkml.kernel.org/r/20200219012736.20363-1-almasrymina@google.com
Link: http://lkml.kernel.org/r/20200211213128.73302-4-almasrymina@google.com
Signed-off-by: Mina Almasry <almasrymina@google.com>
Acked-by: David Rientjes <rientjes@google.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Sandipan Das <sandipan@linux.ibm.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Greg Thelen <gthelen@google.com>
Cc: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/hugetlb.c |  338 +++++++++++++++++++++++++++++++++----------------
 1 file changed, 229 insertions(+), 109 deletions(-)

--- a/mm/hugetlb.c~hugetlb-disable-region_add-file_region-coalescing
+++ a/mm/hugetlb.c
@@ -245,107 +245,217 @@ struct file_region {
 	long to;
 };
 
+/* Helper that removes a struct file_region from the resv_map cache and returns
+ * it for use.
+ */
+static struct file_region *
+get_file_region_entry_from_cache(struct resv_map *resv, long from, long to)
+{
+	struct file_region *nrg = NULL;
+
+	VM_BUG_ON(resv->region_cache_count <= 0);
+
+	resv->region_cache_count--;
+	nrg = list_first_entry(&resv->region_cache, struct file_region, link);
+	VM_BUG_ON(!nrg);
+	list_del(&nrg->link);
+
+	nrg->from = from;
+	nrg->to = to;
+
+	return nrg;
+}
+
 /* Must be called with resv->lock held. Calling this with count_only == true
  * will count the number of pages to be added but will not modify the linked
- * list.
+ * list. If regions_needed != NULL and count_only == true, then regions_needed
+ * will indicate the number of file_regions needed in the cache to carry out
+ * the addition of regions for this range.
  */
 static long add_reservation_in_range(struct resv_map *resv, long f, long t,
-				     bool count_only)
+				     long *regions_needed, bool count_only)
 {
-	long chg = 0;
+	long add = 0;
 	struct list_head *head = &resv->regions;
+	long last_accounted_offset = f;
 	struct file_region *rg = NULL, *trg = NULL, *nrg = NULL;
 
-	/* Locate the region we are before or in. */
-	list_for_each_entry(rg, head, link)
-		if (f <= rg->to)
-			break;
+	if (regions_needed)
+		*regions_needed = 0;
 
-	/* Round our left edge to the current segment if it encloses us. */
-	if (f > rg->from)
-		f = rg->from;
-
-	chg = t - f;
-
-	/* Check for and consume any regions we now overlap with. */
-	nrg = rg;
-	list_for_each_entry_safe(rg, trg, rg->link.prev, link) {
-		if (&rg->link == head)
-			break;
+	/* At each iteration of this loop, we essentially handle an entry for
+	 * the range [last_accounted_offset, rg->from), with some bounds
+	 * checking.
+	 */
+	list_for_each_entry_safe(rg, trg, head, link) {
+		/* Skip irrelevant regions that start before our range. */
+		if (rg->from < f) {
+			/* If this region ends after the last accounted offset,
+			 * then we need to update last_accounted_offset.
+			 */
+			if (rg->to > last_accounted_offset)
+				last_accounted_offset = rg->to;
+			continue;
+		}
+
+		/* When we find a region that starts beyond our range, we've
+		 * finished.
+		 */
 		if (rg->from > t)
 			break;
 
-		/* We overlap with this area, if it extends further than
-		 * us then we must extend ourselves.  Account for its
-		 * existing reservation.
+		/* Add an entry for last_accounted_offset -> rg->from, and
+		 * update last_accounted_offset.
 		 */
-		if (rg->to > t) {
-			chg += rg->to - t;
-			t = rg->to;
+		if (rg->from > last_accounted_offset) {
+			add += rg->from - last_accounted_offset;
+			if (!count_only) {
+				nrg = get_file_region_entry_from_cache(
+					resv, last_accounted_offset, rg->from);
+				list_add(&nrg->link, rg->link.prev);
+			} else if (regions_needed)
+				*regions_needed += 1;
+		}
+
+		last_accounted_offset = rg->to;
+	}
+
+	/* Handle the case where our range extends beyond
+	 * last_accounted_offset.
+	 */
+	if (last_accounted_offset < t) {
+		add += t - last_accounted_offset;
+		if (!count_only) {
+			nrg = get_file_region_entry_from_cache(
+				resv, last_accounted_offset, t);
+			list_add(&nrg->link, rg->link.prev);
+		} else if (regions_needed)
+			*regions_needed += 1;
+	}
+
+	VM_BUG_ON(add < 0);
+	return add;
+}
+
+/* Must be called with resv->lock acquired. Will drop lock to allocate entries.
+ */
+static int allocate_file_region_entries(struct resv_map *resv,
+					int regions_needed)
+	__must_hold(&resv->lock)
+{
+	struct list_head allocated_regions;
+	int to_allocate = 0, i = 0;
+	struct file_region *trg = NULL, *rg = NULL;
+
+	VM_BUG_ON(regions_needed < 0);
+
+	INIT_LIST_HEAD(&allocated_regions);
+
+	/*
+	 * Check for sufficient descriptors in the cache to accommodate
+	 * the number of in progress add operations plus regions_needed.
+	 *
+	 * This is a while loop because when we drop the lock, some other call
+	 * to region_add or region_del may have consumed some region_entries,
+	 * so we keep looping here until we finally have enough entries for
+	 * (adds_in_progress + regions_needed).
+	 */
+	while (resv->region_cache_count <
+	       (resv->adds_in_progress + regions_needed)) {
+		to_allocate = resv->adds_in_progress + regions_needed -
+			      resv->region_cache_count;
+
+		/* At this point, we should have enough entries in the cache
+		 * for all the existing adds_in_progress. We should only need
+		 * to allocate for regions_needed.
+		 */
+		VM_BUG_ON(resv->region_cache_count < resv->adds_in_progress);
+
+		spin_unlock(&resv->lock);
+		for (i = 0; i < to_allocate; i++) {
+			trg = kmalloc(sizeof(*trg), GFP_KERNEL);
+			if (!trg)
+				goto out_of_memory;
+			list_add(&trg->link, &allocated_regions);
 		}
-		chg -= rg->to - rg->from;
 
-		if (!count_only && rg != nrg) {
+		spin_lock(&resv->lock);
+
+		list_for_each_entry_safe(rg, trg, &allocated_regions, link) {
 			list_del(&rg->link);
-			kfree(rg);
+			list_add(&rg->link, &resv->region_cache);
+			resv->region_cache_count++;
 		}
 	}
 
-	if (!count_only) {
-		nrg->from = f;
-		nrg->to = t;
-	}
+	return 0;
 
-	return chg;
+out_of_memory:
+	list_for_each_entry_safe(rg, trg, &allocated_regions, link) {
+		list_del(&rg->link);
+		kfree(rg);
+	}
+	return -ENOMEM;
 }
 
 /*
  * Add the huge page range represented by [f, t) to the reserve
- * map.  Existing regions will be expanded to accommodate the specified
- * range, or a region will be taken from the cache.  Sufficient regions
- * must exist in the cache due to the previous call to region_chg with
- * the same range.
+ * map.  Regions will be taken from the cache to fill in this range.
+ * Sufficient regions should exist in the cache due to the previous
+ * call to region_chg with the same range, but in some cases the cache will not
+ * have sufficient entries due to races with other code doing region_add or
+ * region_del.  The extra needed entries will be allocated.
  *
- * Return the number of new huge pages added to the map.  This
- * number is greater than or equal to zero.
+ * regions_needed is the out value provided by a previous call to region_chg.
+ *
+ * Return the number of new huge pages added to the map.  This number is greater
+ * than or equal to zero.  If file_region entries needed to be allocated for
+ * this operation and we were not able to allocate, it returns -ENOMEM.
+ * region_add of regions of length 1 never allocates file_regions and cannot
+ * fail; region_chg will always allocate at least 1 entry and a region_add for
+ * 1 page will only require at most 1 entry.
  */
-static long region_add(struct resv_map *resv, long f, long t)
+static long region_add(struct resv_map *resv, long f, long t,
+		       long in_regions_needed)
 {
-	struct list_head *head = &resv->regions;
-	struct file_region *rg, *nrg;
-	long add = 0;
+	long add = 0, actual_regions_needed = 0;
 
 	spin_lock(&resv->lock);
-	/* Locate the region we are either in or before. */
-	list_for_each_entry(rg, head, link)
-		if (f <= rg->to)
-			break;
+retry:
 
-	/*
-	 * If no region exists which can be expanded to include the
-	 * specified range, pull a region descriptor from the cache
-	 * and use it for this range.
-	 */
-	if (&rg->link == head || t < rg->from) {
-		VM_BUG_ON(resv->region_cache_count <= 0);
+	/* Count how many regions are actually needed to execute this add. */
+	add_reservation_in_range(resv, f, t, &actual_regions_needed, true);
 
-		resv->region_cache_count--;
-		nrg = list_first_entry(&resv->region_cache, struct file_region,
-					link);
-		list_del(&nrg->link);
+	/*
+	 * Check for sufficient descriptors in the cache to accommodate
+	 * this add operation. Note that actual_regions_needed may be greater
+	 * than in_regions_needed, as the resv_map may have been modified since
+	 * the region_chg call. In this case, we need to make sure that we
+	 * allocate extra entries, such that we have enough for all the
+	 * existing adds_in_progress, plus the excess needed for this
+	 * operation.
+	 */
+	if (actual_regions_needed > in_regions_needed &&
+	    resv->region_cache_count <
+		    resv->adds_in_progress +
+			    (actual_regions_needed - in_regions_needed)) {
+		/* region_add operation of range 1 should never need to
+		 * allocate file_region entries.
+		 */
+		VM_BUG_ON(t - f <= 1);
 
-		nrg->from = f;
-		nrg->to = t;
-		list_add(&nrg->link, rg->link.prev);
+		if (allocate_file_region_entries(
+			    resv, actual_regions_needed - in_regions_needed)) {
+			return -ENOMEM;
+		}
 
-		add += t - f;
-		goto out_locked;
+		goto retry;
 	}
 
-	add = add_reservation_in_range(resv, f, t, false);
+	add = add_reservation_in_range(resv, f, t, NULL, false);
+
+	resv->adds_in_progress -= in_regions_needed;
 
-out_locked:
-	resv->adds_in_progress--;
 	spin_unlock(&resv->lock);
 	VM_BUG_ON(add < 0);
 	return add;
@@ -358,46 +468,36 @@ out_locked:
  * call to region_add that will actually modify the reserve
  * map to add the specified range [f, t).  region_chg does
  * not change the number of huge pages represented by the
- * map.  A new file_region structure is added to the cache
- * as a placeholder, so that the subsequent region_add
- * call will have all the regions it needs and will not fail.
+ * map.  A number of new file_region structures are added to the cache as
+ * placeholders for the subsequent region_add call to use. At least 1
+ * file_region structure is added.
+ *
+ * out_regions_needed is the number of regions added to
+ * resv->adds_in_progress.  This value needs to be provided to a follow-up
+ * call to region_add or region_abort for proper accounting.
  *
  * Returns the number of huge pages that need to be added to the existing
  * reservation map for the range [f, t).  This number is greater or equal to
  * zero.  -ENOMEM is returned if a new file_region structure or cache entry
  * is needed and can not be allocated.
  */
-static long region_chg(struct resv_map *resv, long f, long t)
+static long region_chg(struct resv_map *resv, long f, long t,
+		       long *out_regions_needed)
 {
 	long chg = 0;
 
 	spin_lock(&resv->lock);
-retry_locked:
-	resv->adds_in_progress++;
 
-	/*
-	 * Check for sufficient descriptors in the cache to accommodate
-	 * the number of in progress add operations.
-	 */
-	if (resv->adds_in_progress > resv->region_cache_count) {
-		struct file_region *trg;
-
-		VM_BUG_ON(resv->adds_in_progress - resv->region_cache_count > 1);
-		/* Must drop lock to allocate a new descriptor. */
-		resv->adds_in_progress--;
-		spin_unlock(&resv->lock);
+	/* Count how many hugepages in this range are NOT represented. */
+	chg = add_reservation_in_range(resv, f, t, out_regions_needed, true);
 
-		trg = kmalloc(sizeof(*trg), GFP_KERNEL);
-		if (!trg)
-			return -ENOMEM;
+	if (*out_regions_needed == 0)
+		*out_regions_needed = 1;
 
-		spin_lock(&resv->lock);
-		list_add(&trg->link, &resv->region_cache);
-		resv->region_cache_count++;
-		goto retry_locked;
-	}
+	if (allocate_file_region_entries(resv, *out_regions_needed))
+		return -ENOMEM;
 
-	chg = add_reservation_in_range(resv, f, t, true);
+	resv->adds_in_progress += *out_regions_needed;
 
 	spin_unlock(&resv->lock);
 	return chg;
@@ -408,17 +508,20 @@ retry_locked:
  * of the resv_map keeps track of the operations in progress between
  * calls to region_chg and region_add.  Operations are sometimes
  * aborted after the call to region_chg.  In such cases, region_abort
- * is called to decrement the adds_in_progress counter.
+ * is called to decrement the adds_in_progress counter. regions_needed
+ * is the value returned by the matching region_chg call; it is used to
+ * decrement the adds_in_progress counter.
  *
  * NOTE: The range arguments [f, t) are not needed or used in this
  * routine.  They are kept to make reading the calling code easier as
  * arguments will match the associated region_chg call.
  */
-static void region_abort(struct resv_map *resv, long f, long t)
+static void region_abort(struct resv_map *resv, long f, long t,
+			 long regions_needed)
 {
 	spin_lock(&resv->lock);
 	VM_BUG_ON(!resv->region_cache_count);
-	resv->adds_in_progress--;
+	resv->adds_in_progress -= regions_needed;
 	spin_unlock(&resv->lock);
 }
 
@@ -2004,6 +2107,7 @@ static long __vma_reservation_common(str
 	struct resv_map *resv;
 	pgoff_t idx;
 	long ret;
+	long dummy_out_regions_needed;
 
 	resv = vma_resv_map(vma);
 	if (!resv)
@@ -2012,20 +2116,29 @@ static long __vma_reservation_common(str
 	idx = vma_hugecache_offset(h, vma, addr);
 	switch (mode) {
 	case VMA_NEEDS_RESV:
-		ret = region_chg(resv, idx, idx + 1);
+		ret = region_chg(resv, idx, idx + 1, &dummy_out_regions_needed);
+		/* We assume that vma_reservation_* routines always operate on
+		 * 1 page, and that adding a 1 page entry to the resv map can
+		 * only ever require 1 region.
+		 */
+		VM_BUG_ON(dummy_out_regions_needed != 1);
 		break;
 	case VMA_COMMIT_RESV:
-		ret = region_add(resv, idx, idx + 1);
+		ret = region_add(resv, idx, idx + 1, 1);
+		/* region_add calls of range 1 should never fail. */
+		VM_BUG_ON(ret < 0);
 		break;
 	case VMA_END_RESV:
-		region_abort(resv, idx, idx + 1);
+		region_abort(resv, idx, idx + 1, 1);
 		ret = 0;
 		break;
 	case VMA_ADD_RESV:
-		if (vma->vm_flags & VM_MAYSHARE)
-			ret = region_add(resv, idx, idx + 1);
-		else {
-			region_abort(resv, idx, idx + 1);
+		if (vma->vm_flags & VM_MAYSHARE) {
+			ret = region_add(resv, idx, idx + 1, 1);
+			/* region_add calls of range 1 should never fail. */
+			VM_BUG_ON(ret < 0);
+		} else {
+			region_abort(resv, idx, idx + 1, 1);
 			ret = region_del(resv, idx, idx + 1);
 		}
 		break;
@@ -4713,12 +4826,12 @@ int hugetlb_reserve_pages(struct inode *
 					struct vm_area_struct *vma,
 					vm_flags_t vm_flags)
 {
-	long ret, chg;
+	long ret, chg, add = -1;
 	struct hstate *h = hstate_inode(inode);
 	struct hugepage_subpool *spool = subpool_inode(inode);
 	struct resv_map *resv_map;
 	struct hugetlb_cgroup *h_cg;
-	long gbl_reserve;
+	long gbl_reserve, regions_needed = 0;
 
 	/* This should never happen */
 	if (from > to) {
@@ -4748,7 +4861,7 @@ int hugetlb_reserve_pages(struct inode *
 		 */
 		resv_map = inode_resv_map(inode);
 
-		chg = region_chg(resv_map, from, to);
+		chg = region_chg(resv_map, from, to, &regions_needed);
 
 	} else {
 		/* Private mapping. */
@@ -4814,9 +4927,14 @@ int hugetlb_reserve_pages(struct inode *
 	 * else has to be done for private mappings here
 	 */
 	if (!vma || vma->vm_flags & VM_MAYSHARE) {
-		long add = region_add(resv_map, from, to);
+		add = region_add(resv_map, from, to, regions_needed);
 
-		if (unlikely(chg > add)) {
+		if (unlikely(add < 0)) {
+			hugetlb_acct_memory(h, -gbl_reserve);
+			/* put back original number of pages, chg */
+			(void)hugepage_subpool_put_pages(spool, chg);
+			goto out_err;
+		} else if (unlikely(chg > add)) {
 			/*
 			 * pages in this range were added to the reserve
 			 * map between region_chg and region_add.  This
@@ -4834,9 +4952,11 @@ int hugetlb_reserve_pages(struct inode *
 	return 0;
 out_err:
 	if (!vma || vma->vm_flags & VM_MAYSHARE)
-		/* Don't call region_abort if region_chg failed */
-		if (chg >= 0)
-			region_abort(resv_map, from, to);
+		/* Only call region_abort if the region_chg succeeded but the
+		 * region_add failed or didn't run.
+		 */
+		if (chg >= 0 && add < 0)
+			region_abort(resv_map, from, to, regions_needed);
 	if (vma && is_vma_resv_set(vma, HPAGE_RESV_OWNER))
 		kref_put(&resv_map->refs, resv_map_release);
 	return ret;
_

