mm-commits.vger.kernel.org archive mirror
From: Andrew Morton <akpm@linux-foundation.org>
To: almasrymina@google.com, gthelen@google.com,
	mike.kravetz@oracle.com, mm-commits@vger.kernel.org,
	rientjes@google.com, sandipan@linux.ibm.com, shakeelb@google.com,
	shuah@kernel.org
Subject: + hugetlb-disable-region_add-file_region-coalescing.patch added to -mm tree
Date: Tue, 11 Feb 2020 15:19:18 -0800	[thread overview]
Message-ID: <20200211231918.a2V8kHC_j%akpm@linux-foundation.org> (raw)
In-Reply-To: <20200203173311.6269a8be06a05e5a4aa08a93@linux-foundation.org>


The patch titled
     Subject: hugetlb: disable region_add file_region coalescing
has been added to the -mm tree.  Its filename is
     hugetlb-disable-region_add-file_region-coalescing.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/hugetlb-disable-region_add-file_region-coalescing.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/hugetlb-disable-region_add-file_region-coalescing.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Mina Almasry <almasrymina@google.com>
Subject: hugetlb: disable region_add file_region coalescing

A follow-up patch in this series adds hugetlb cgroup uncharge info to the
file_region entries in resv->regions.  The cgroup uncharge info may differ
between regions, so they can no longer be coalesced at region_add time.
This patch therefore disables region coalescing in region_add.

Behavior change:

Say a resv_map exists like this [0->1], [2->3], and [5->6].

Then a region_chg/add call comes in region_chg/add(f=0, t=5).

Old code would generate resv->regions: [0->5], [5->6].
New code would generate resv->regions: [0->1], [1->2], [2->3], [3->5],
[5->6].

Special care needs to be taken to handle the resv->adds_in_progress
variable correctly.  In the past, only 1 region would be added for every
region_chg and region_add call.  But now, each call may add multiple
regions, so we can no longer increment adds_in_progress by 1 in
region_chg, or decrement adds_in_progress by 1 after region_add or
region_abort.  Instead, region_chg calls add_reservation_in_range() to
count the number of regions needed, allocates them, and passes that count
to region_add and region_abort so they can decrement adds_in_progress
correctly.

We've also dropped the assumption that region_add after region_chg never
fails.  region_chg now pre-allocates at least 1 region for region_add.  If
region_add needs more regions than region_chg allocated for it, it may
fail.

Link: http://lkml.kernel.org/r/20200211213128.73302-4-almasrymina@google.com
Signed-off-by: Mina Almasry <almasrymina@google.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Sandipan Das <sandipan@linux.ibm.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/hugetlb.c |  340 +++++++++++++++++++++++++++++++++----------------
 1 file changed, 230 insertions(+), 110 deletions(-)

--- a/mm/hugetlb.c~hugetlb-disable-region_add-file_region-coalescing
+++ a/mm/hugetlb.c
@@ -245,110 +245,180 @@ struct file_region {
 	long to;
 };
 
+/* Helper that removes a struct file_region from the resv_map cache and returns
+ * it for use.
+ */
+static struct file_region *
+get_file_region_entry_from_cache(struct resv_map *resv, long from, long to)
+{
+	struct file_region *nrg = NULL;
+
+	VM_BUG_ON(resv->region_cache_count <= 0);
+
+	resv->region_cache_count--;
+	nrg = list_first_entry(&resv->region_cache, struct file_region, link);
+	VM_BUG_ON(!nrg);
+	list_del(&nrg->link);
+
+	nrg->from = from;
+	nrg->to = to;
+
+	return nrg;
+}
+
 /* Must be called with resv->lock held. Calling this with count_only == true
  * will count the number of pages to be added but will not modify the linked
- * list.
+ * list. If regions_needed != NULL and count_only == true, then regions_needed
+ * will indicate the number of file_regions needed in the cache to add the
+ * regions for this range.
  */
 static long add_reservation_in_range(struct resv_map *resv, long f, long t,
-				     bool count_only)
+				     long *regions_needed, bool count_only)
 {
-	long chg = 0;
+	long add = 0;
 	struct list_head *head = &resv->regions;
+	long last_accounted_offset = f;
 	struct file_region *rg = NULL, *trg = NULL, *nrg = NULL;
 
-	/* Locate the region we are before or in. */
-	list_for_each_entry(rg, head, link)
-		if (f <= rg->to)
-			break;
+	if (regions_needed)
+		*regions_needed = 0;
 
-	/* Round our left edge to the current segment if it encloses us. */
-	if (f > rg->from)
-		f = rg->from;
-
-	chg = t - f;
-
-	/* Check for and consume any regions we now overlap with. */
-	nrg = rg;
-	list_for_each_entry_safe(rg, trg, rg->link.prev, link) {
-		if (&rg->link == head)
-			break;
+	/* In this loop, we essentially handle an entry for the range
+	 * [last_accounted_offset, rg->from), at every iteration, with some
+	 * bounds checking.
+	 */
+	list_for_each_entry_safe(rg, trg, head, link) {
+		/* Skip irrelevant regions that start before our range. */
+		if (rg->from < f) {
+			/* If this region ends after the last accounted offset,
+			 * then we need to update last_accounted_offset.
+			 */
+			if (rg->to > last_accounted_offset)
+				last_accounted_offset = rg->to;
+			continue;
+		}
+
+		/* When we find a region that starts beyond our range, we've
+		 * finished.
+		 */
 		if (rg->from > t)
 			break;
 
-		/* We overlap with this area, if it extends further than
-		 * us then we must extend ourselves.  Account for its
-		 * existing reservation.
+		/* Add an entry for last_accounted_offset -> rg->from, and
+		 * update last_accounted_offset.
 		 */
-		if (rg->to > t) {
-			chg += rg->to - t;
-			t = rg->to;
-		}
-		chg -= rg->to - rg->from;
-
-		if (!count_only && rg != nrg) {
-			list_del(&rg->link);
-			kfree(rg);
-		}
+		if (rg->from > last_accounted_offset) {
+			add += rg->from - last_accounted_offset;
+			if (!count_only) {
+				nrg = get_file_region_entry_from_cache(
+					resv, last_accounted_offset, rg->from);
+				list_add(&nrg->link, rg->link.prev);
+			} else if (regions_needed)
+				*regions_needed += 1;
+		}
+
+		last_accounted_offset = rg->to;
+	}
+
+	/* Handle the case where our range extends beyond
+	 * last_accounted_offset.
+	 */
+	if (last_accounted_offset < t) {
+		add += t - last_accounted_offset;
+		if (!count_only) {
+			nrg = get_file_region_entry_from_cache(
+				resv, last_accounted_offset, t);
+			list_add(&nrg->link, rg->link.prev);
+		} else if (regions_needed)
+			*regions_needed += 1;
 	}
 
-	if (!count_only) {
-		nrg->from = f;
-		nrg->to = t;
-	}
-
-	return chg;
+	VM_BUG_ON(add < 0);
+	return add;
 }
 
 /*
  * Add the huge page range represented by [f, t) to the reserve
- * map.  Existing regions will be expanded to accommodate the specified
- * range, or a region will be taken from the cache.  Sufficient regions
- * must exist in the cache due to the previous call to region_chg with
- * the same range.
+ * map.  Regions will be taken from the cache to fill in this range.
+ * Sufficient regions should exist in the cache due to the previous
+ * call to region_chg with the same range, but in some cases the cache will not
+ * have sufficient entries due to races with other code doing region_add or
+ * region_del.  The extra needed entries will be allocated.
  *
- * Return the number of new huge pages added to the map.  This
- * number is greater than or equal to zero.
- */
-static long region_add(struct resv_map *resv, long f, long t)
-{
-	struct list_head *head = &resv->regions;
-	struct file_region *rg, *nrg;
-	long add = 0;
+ * regions_needed is the out value provided by a previous call to region_chg.
+ *
+ * Return the number of new huge pages added to the map.  This number is greater
+ * than or equal to zero.  If file_region entries needed to be allocated for
+ * this operation and we were not able to allocate, it returns -ENOMEM.
+ * region_add of regions of length 1 never allocate file_regions and cannot
+ * fail; region_chg will always allocate at least 1 entry and a region_add for
+ * 1 page will only require at most 1 entry.
+ */
+static long region_add(struct resv_map *resv, long f, long t,
+		       long in_regions_needed)
+{
+	long add = 0, actual_regions_needed = 0, i = 0;
+	struct file_region *trg = NULL, *rg = NULL;
+	struct list_head allocated_regions;
+
+	INIT_LIST_HEAD(&allocated_regions);
 
 	spin_lock(&resv->lock);
-	/* Locate the region we are either in or before. */
-	list_for_each_entry(rg, head, link)
-		if (f <= rg->to)
-			break;
+retry:
+
+	/* Count how many regions are actually needed to execute this add. */
+	add_reservation_in_range(resv, f, t, &actual_regions_needed, true);
 
 	/*
-	 * If no region exists which can be expanded to include the
-	 * specified range, pull a region descriptor from the cache
-	 * and use it for this range.
-	 */
-	if (&rg->link == head || t < rg->from) {
-		VM_BUG_ON(resv->region_cache_count <= 0);
+	 * Check for sufficient descriptors in the cache to accommodate
+	 * this add operation. Note that actual_regions_needed may be greater
+	 * than in_regions_needed. In this case, we need to make sure that we
+	 * allocate extra entries, such that we have enough for all the
+	 * existing adds_in_progress, plus the excess needed for this
+	 * operation.
+	 */
+	if (resv->region_cache_count <
+	    resv->adds_in_progress +
+		    (actual_regions_needed - in_regions_needed)) {
+		/* region_add operation of range 1 should never need to
+		 * allocate file_region entries.
+		 */
+		VM_BUG_ON(t - f <= 1);
 
-		resv->region_cache_count--;
-		nrg = list_first_entry(&resv->region_cache, struct file_region,
-					link);
-		list_del(&nrg->link);
+		/* Must drop lock to allocate a new descriptor. */
+		spin_unlock(&resv->lock);
+		for (i = 0; i < (actual_regions_needed - in_regions_needed);
+		     i++) {
+			trg = kmalloc(sizeof(*trg), GFP_KERNEL);
+			if (!trg)
+				goto out_of_memory;
+			list_add(&trg->link, &allocated_regions);
+		}
+		spin_lock(&resv->lock);
 
-		nrg->from = f;
-		nrg->to = t;
-		list_add(&nrg->link, rg->link.prev);
+		list_for_each_entry_safe(rg, trg, &allocated_regions, link) {
+			list_del(&rg->link);
+			list_add(&rg->link, &resv->region_cache);
+			resv->region_cache_count++;
+		}
 
-		add += t - f;
-		goto out_locked;
+		goto retry;
 	}
 
-	add = add_reservation_in_range(resv, f, t, false);
+	add = add_reservation_in_range(resv, f, t, NULL, false);
+
+	resv->adds_in_progress -= in_regions_needed;
 
-out_locked:
-	resv->adds_in_progress--;
 	spin_unlock(&resv->lock);
 	VM_BUG_ON(add < 0);
 	return add;
+
+out_of_memory:
+	list_for_each_entry_safe(rg, trg, &allocated_regions, link) {
+		list_del(&rg->link);
+		kfree(rg);
+	}
+	return -ENOMEM;
 }
 
 /*
@@ -358,49 +428,79 @@ out_locked:
  * call to region_add that will actually modify the reserve
  * map to add the specified range [f, t).  region_chg does
  * not change the number of huge pages represented by the
- * map.  A new file_region structure is added to the cache
- * as a placeholder, so that the subsequent region_add
- * call will have all the regions it needs and will not fail.
+ * map.  A number of new file_region structures is added to the cache as a
+ * placeholder, for the subsequent region_add call to use. At least 1
+ * file_region structure is added.
+ *
+ * out_regions_needed is the number of regions added to the
+ * resv->adds_in_progress.  This value needs to be provided to a follow up call
+ * to region_add or region_abort for proper accounting.
  *
  * Returns the number of huge pages that need to be added to the existing
  * reservation map for the range [f, t).  This number is greater or equal to
  * zero.  -ENOMEM is returned if a new file_region structure or cache entry
  * is needed and can not be allocated.
  */
-static long region_chg(struct resv_map *resv, long f, long t)
+static long region_chg(struct resv_map *resv, long f, long t,
+		       long *out_regions_needed)
 {
-	long chg = 0;
+	struct file_region *trg = NULL, *rg = NULL;
+	long chg = 0, i = 0, to_allocate = 0;
+	struct list_head allocated_regions;
+
+	INIT_LIST_HEAD(&allocated_regions);
 
 	spin_lock(&resv->lock);
-retry_locked:
-	resv->adds_in_progress++;
+
+	/* Count how many hugepages in this range are NOT represented. */
+	chg = add_reservation_in_range(resv, f, t, out_regions_needed, true);
+
+	if (*out_regions_needed == 0)
+		*out_regions_needed = 1;
+
+	resv->adds_in_progress += *out_regions_needed;
 
 	/*
 	 * Check for sufficient descriptors in the cache to accommodate
 	 * the number of in progress add operations.
 	 */
-	if (resv->adds_in_progress > resv->region_cache_count) {
-		struct file_region *trg;
+	while (resv->region_cache_count < resv->adds_in_progress) {
+		to_allocate = resv->adds_in_progress - resv->region_cache_count;
 
-		VM_BUG_ON(resv->adds_in_progress - resv->region_cache_count > 1);
-		/* Must drop lock to allocate a new descriptor. */
-		resv->adds_in_progress--;
+		/* Must drop lock to allocate a new descriptor. Note that even
+		 * though we drop the lock here, we do not make another call to
+		 * add_reservation_in_range after re-acquiring the lock.
+		 * Essentially this branch makes sure that we have enough
+		 * descriptors in the cache as suggested by the first call to
+		 * add_reservation_in_range. If more regions turn out to be
+		 * required, region_add will deal with it.
+		 */
 		spin_unlock(&resv->lock);
-
-		trg = kmalloc(sizeof(*trg), GFP_KERNEL);
-		if (!trg)
-			return -ENOMEM;
+		for (i = 0; i < to_allocate; i++) {
+			trg = kmalloc(sizeof(*trg), GFP_KERNEL);
+			if (!trg)
+				goto out_of_memory;
+			list_add(&trg->link, &allocated_regions);
+		}
 
 		spin_lock(&resv->lock);
-		list_add(&trg->link, &resv->region_cache);
-		resv->region_cache_count++;
-		goto retry_locked;
-	}
 
-	chg = add_reservation_in_range(resv, f, t, true);
+		list_for_each_entry_safe(rg, trg, &allocated_regions, link) {
+			list_del(&rg->link);
+			list_add(&rg->link, &resv->region_cache);
+			resv->region_cache_count++;
+		}
+	}
 
 	spin_unlock(&resv->lock);
 	return chg;
+
+out_of_memory:
+	list_for_each_entry_safe(rg, trg, &allocated_regions, link) {
+		list_del(&rg->link);
+		kfree(rg);
+	}
+	return -ENOMEM;
 }
 
 /*
@@ -408,17 +508,20 @@ retry_locked:
  * of the resv_map keeps track of the operations in progress between
  * calls to region_chg and region_add.  Operations are sometimes
  * aborted after the call to region_chg.  In such cases, region_abort
- * is called to decrement the adds_in_progress counter.
+ * is called to decrement the adds_in_progress counter. regions_needed
+ * is the value returned by the region_chg call, it is used to decrement
+ * the adds_in_progress counter.
  *
  * NOTE: The range arguments [f, t) are not needed or used in this
  * routine.  They are kept to make reading the calling code easier as
  * arguments will match the associated region_chg call.
  */
-static void region_abort(struct resv_map *resv, long f, long t)
+static void region_abort(struct resv_map *resv, long f, long t,
+			 long regions_needed)
 {
 	spin_lock(&resv->lock);
 	VM_BUG_ON(!resv->region_cache_count);
-	resv->adds_in_progress--;
+	resv->adds_in_progress -= regions_needed;
 	spin_unlock(&resv->lock);
 }
 
@@ -1904,6 +2007,7 @@ static long __vma_reservation_common(str
 	struct resv_map *resv;
 	pgoff_t idx;
 	long ret;
+	long dummy_out_regions_needed;
 
 	resv = vma_resv_map(vma);
 	if (!resv)
@@ -1912,20 +2016,29 @@ static long __vma_reservation_common(str
 	idx = vma_hugecache_offset(h, vma, addr);
 	switch (mode) {
 	case VMA_NEEDS_RESV:
-		ret = region_chg(resv, idx, idx + 1);
+		ret = region_chg(resv, idx, idx + 1, &dummy_out_regions_needed);
+		/* We assume that vma_reservation_* routines always operate on
+		 * 1 page, and that adding to resv map a 1 page entry can only
+		 * ever require 1 region.
+		 */
+		VM_BUG_ON(dummy_out_regions_needed != 1);
 		break;
 	case VMA_COMMIT_RESV:
-		ret = region_add(resv, idx, idx + 1);
+		ret = region_add(resv, idx, idx + 1, 1);
+		/* region_add calls of range 1 should never fail. */
+		VM_BUG_ON(ret < 0);
 		break;
 	case VMA_END_RESV:
-		region_abort(resv, idx, idx + 1);
+		region_abort(resv, idx, idx + 1, 1);
 		ret = 0;
 		break;
 	case VMA_ADD_RESV:
-		if (vma->vm_flags & VM_MAYSHARE)
-			ret = region_add(resv, idx, idx + 1);
-		else {
-			region_abort(resv, idx, idx + 1);
+		if (vma->vm_flags & VM_MAYSHARE) {
+			ret = region_add(resv, idx, idx + 1, 1);
+			/* region_add calls of range 1 should never fail. */
+			VM_BUG_ON(ret < 0);
+		} else {
+			region_abort(resv, idx, idx + 1, 1);
 			ret = region_del(resv, idx, idx + 1);
 		}
 		break;
@@ -4577,12 +4690,12 @@ int hugetlb_reserve_pages(struct inode *
 					struct vm_area_struct *vma,
 					vm_flags_t vm_flags)
 {
-	long ret, chg;
+	long ret, chg, add = -1;
 	struct hstate *h = hstate_inode(inode);
 	struct hugepage_subpool *spool = subpool_inode(inode);
 	struct resv_map *resv_map;
 	struct hugetlb_cgroup *h_cg;
-	long gbl_reserve;
+	long gbl_reserve, regions_needed = 0;
 
 	/* This should never happen */
 	if (from > to) {
@@ -4612,7 +4725,7 @@ int hugetlb_reserve_pages(struct inode *
 		 */
 		resv_map = inode_resv_map(inode);
 
-		chg = region_chg(resv_map, from, to);
+		chg = region_chg(resv_map, from, to, &regions_needed);
 
 	} else {
 		/* Private mapping. */
@@ -4678,9 +4791,14 @@ int hugetlb_reserve_pages(struct inode *
 	 * else has to be done for private mappings here
 	 */
 	if (!vma || vma->vm_flags & VM_MAYSHARE) {
-		long add = region_add(resv_map, from, to);
+		add = region_add(resv_map, from, to, regions_needed);
 
-		if (unlikely(chg > add)) {
+		if (unlikely(add < 0)) {
+			hugetlb_acct_memory(h, -gbl_reserve);
+			/* put back original number of pages, chg */
+			(void)hugepage_subpool_put_pages(spool, chg);
+			goto out_err;
+		} else if (unlikely(chg > add)) {
 			/*
 			 * pages in this range were added to the reserve
 			 * map between region_chg and region_add.  This
@@ -4698,9 +4816,11 @@ int hugetlb_reserve_pages(struct inode *
 	return 0;
 out_err:
 	if (!vma || vma->vm_flags & VM_MAYSHARE)
-		/* Don't call region_abort if region_chg failed */
-		if (chg >= 0)
-			region_abort(resv_map, from, to);
+		/* Only call region_abort if the region_chg succeeded but the
+		 * region_add failed or didn't run.
+		 */
+		if (chg >= 0 && add < 0)
+			region_abort(resv_map, from, to, regions_needed);
 	if (vma && is_vma_resv_set(vma, HPAGE_RESV_OWNER))
 		kref_put(&resv_map->refs, resv_map_release);
 	return ret;
_

Patches currently in -mm which might be from almasrymina@google.com are

hugetlb_cgroup-add-hugetlb_cgroup-reservation-counter.patch
hugetlb_cgroup-add-interface-for-charge-uncharge-hugetlb-reservations.patch
hugetlb_cgroup-add-reservation-accounting-for-private-mappings.patch
hugetlb-disable-region_add-file_region-coalescing.patch
hugetlb_cgroup-add-accounting-for-shared-mappings.patch
hugetlb_cgroup-support-noreserve-mappings.patch
hugetlb-support-file_region-coalescing-again.patch
hugetlb_cgroup-add-hugetlb_cgroup-reservation-tests.patch
hugetlb_cgroup-add-hugetlb_cgroup-reservation-docs.patch

2020-02-13  2:56 ` + mm-add-vm_insert_pages.patch " Andrew Morton
2020-02-13  2:57 ` + mm-add-vm_insert_pages-fix.patch " Andrew Morton
2020-02-13  2:57 ` + net-zerocopy-use-vm_insert_pages-for-tcp-rcv-zerocopy.patch " Andrew Morton
2020-02-13  2:57 ` + net-zerocopy-use-vm_insert_pages-for-tcp-rcv-zerocopy-fix.patch " Andrew Morton
2020-02-14  2:50 ` + mm-add-vm_insert_pages-2.patch " Andrew Morton
2020-02-14  2:52 ` + mm-migratec-no-need-to-check-for-i-start-in-do_pages_move.patch " Andrew Morton
2020-02-14  2:52 ` + mm-migratec-wrap-do_move_pages_to_node-and-store_status.patch " Andrew Morton
2020-02-14  2:52 ` + mm-migratec-check-pagelist-in-move_pages_and_store_status.patch " Andrew Morton
2020-02-14  2:52 ` + mm-migratec-unify-not-queued-for-migration-handling-in-do_pages_move.patch " Andrew Morton
2020-02-14  2:57 ` + mm-migratec-migrate-pg_readahead-flag.patch " Andrew Morton
2020-02-14  2:57 ` + mm-migratec-migrate-pg_readahead-flag-fix.patch " Andrew Morton
2020-02-14  2:58 ` + include-remove-highmemh-from-pagemaph.patch " Andrew Morton
2020-02-14  3:06 ` + mm-kmemleak-annotate-various-data-races-obj-ptr.patch " Andrew Morton
2020-02-14  3:06 ` [to-be-updated] mm-kmemleak-annotate-a-data-race-in-checksum.patch removed from " Andrew Morton
2020-02-14  3:26 ` + mm-swapfile-fix-and-annotate-various-data-races-v2.patch added to " Andrew Morton
2020-02-14  3:27 ` + mm-page_io-mark-various-intentional-data-races-v2.patch " Andrew Morton
2020-02-14  5:10 ` + checkpatch-support-base-commit-format.patch " Andrew Morton
2020-02-14  5:11 ` + checkpatch-prefer-fallthrough-over-fallthrough-comments.patch " Andrew Morton
2020-02-14  5:12 ` + uapi-fix-userspace-breakage-use-__bits_per_long-for-swap.patch " Andrew Morton
2020-02-14  5:15 ` + lib-string-update-match_string-doc-strings-with-correct-behavior.patch " Andrew Morton
2020-02-14  5:16 ` + mm-vmscan-replace-open-codings-to-numa_no_node.patch " Andrew Morton
2020-02-14  5:19 ` + mm-mempolicy-support-mpol_mf_strict-for-huge-page-mapping.patch " Andrew Morton
2020-02-14  5:22 ` + mm-vmscan-dont-round-up-scan-size-for-online-memory-cgroup.patch " Andrew Morton
2020-02-14  5:26 ` + mm-mempolicy-use-vm_bug_on_vma-in-queue_pages_test_walk.patch " Andrew Morton
2020-02-14  5:30 ` + drivers-base-memoryc-drop-section_count.patch " Andrew Morton
2020-02-14  5:31 ` + drivers-base-memoryc-drop-pages_correctly_probed.patch " Andrew Morton
2020-02-14  5:31 ` + mm-page_extc-drop-pfn_present-check-when-onlining.patch " Andrew Morton
2020-02-14  5:41 ` [failures] include-remove-highmemh-from-pagemaph.patch removed from " Andrew Morton
2020-02-14  6:26 ` mmotm 2020-02-13-22-26 uploaded Andrew Morton
2020-02-24  2:45 ` + mm-debug-add-tests-validating-architecture-page-table-helpers.patch added to -mm tree Andrew Morton
2020-02-24  3:20 ` + proc-faster-open-read-close-with-permanent-files.patch " Andrew Morton
2020-02-24  3:24 ` + proc-faster-open-read-close-with-permanent-files-checkpatch-fixes.patch " Andrew Morton
2020-02-24  3:25 ` + psi-move-pf_memstall-into-psi-specific-psi_flags.patch " Andrew Morton
2020-02-24  3:29 ` + mm-hugetlb-fix-a-addressing-exception-caused-by-huge_pte_offset.patch " Andrew Morton
2020-02-24  3:31 ` + ocfs2-there-is-no-need-to-log-twice-in-several-functions.patch " Andrew Morton
2020-02-24  3:31 ` + ocfs2-correct-annotation-from-l_next_rec-to-l_next_free_rec.patch " Andrew Morton
2020-02-24  3:46 ` + hugetlb-support-file_region-coalescing-again-fix-2.patch " Andrew Morton
2020-02-24  3:53 ` + fat-fix-uninit-memory-access-for-partial-initialized-inode.patch " Andrew Morton
2020-02-24  4:08 ` + mm-add-mremap_dontunmap-to-mremap-v7.patch " Andrew Morton
2020-02-24  4:08 ` + selftest-add-mremap_dontunmap-selftest-v7.patch " Andrew Morton
2020-02-24  4:08 ` + selftest-add-mremap_dontunmap-selftest-v7-checkpatch-fixes.patch " Andrew Morton
2020-02-24  4:10 ` + percpu_counter-fix-a-data-race-at-vm_committed_as.patch " Andrew Morton
2020-02-24  4:10 ` + lib-test_lockup-fix-spelling-mistake-iteraions-iterations.patch " Andrew Morton
2020-02-24 21:40 ` + mm-swapfile-fix-data-races-in-try_to_unuse.patch " Andrew Morton
2020-02-24 21:45 ` [nacked] psi-move-pf_memstall-into-psi-specific-psi_flags.patch removed from " Andrew Morton
2020-02-24 21:57 ` + mm-z3fold-do-not-include-rwlockh-directly.patch added to " Andrew Morton
2020-02-24 22:04 ` + mm-hotplug-fix-page-online-with-debug_pagealloc-compiled-but-not-enabled.patch " Andrew Morton
2020-02-24 22:13 ` + mm-vma-add-missing-vma-flag-readable-name-for-vm_sync.patch " Andrew Morton
2020-02-24 22:13 ` + mm-vma-make-vma_is_accessible-available-for-general-use.patch " Andrew Morton
2020-02-24 22:13 ` + mm-vma-replace-all-remaining-open-encodings-with-is_vm_hugetlb_page.patch " Andrew Morton
2020-02-24 22:13 ` + mm-vma-replace-all-remaining-open-encodings-with-vma_is_anonymous.patch " Andrew Morton
2020-02-24 22:14 ` + mm-vma-append-unlikely-while-testing-vma-access-permissions.patch " Andrew Morton
2020-02-24 22:30 ` + samples-hw_breakpoint-drop-hw_breakpoint_r-when-reporting-writes.patch " Andrew Morton
2020-02-24 22:31 ` + samples-hw_breakpoint-drop-use-of-kallsyms_lookup_name.patch " Andrew Morton
2020-02-24 22:31 ` + kallsyms-unexport-kallsyms_lookup_name-and-kallsyms_on_each_symbol.patch " Andrew Morton
2020-02-24 22:55 ` + loop-use-worker-per-cgroup-instead-of-kworker.patch " Andrew Morton
2020-02-24 22:55 ` + mm-charge-active-memcg-when-no-mm-is-set.patch " Andrew Morton
2020-02-24 22:55 ` + loop-charge-i-o-to-mem-and-blk-cg.patch " Andrew Morton
2020-02-24 23:33 ` + mm-mempolicy-checking-hugepage-migration-is-supported-by-arch-in-vma_migratable.patch " Andrew Morton
2020-02-24 23:39 ` + checkpatch-fix-minor-typo-and-mixed-spacetab-in-indentation.patch " Andrew Morton
2020-02-24 23:40 ` + checkpatch-fix-multiple-const-types.patch " Andrew Morton
2020-02-24 23:40 ` + checkpatch-add-command-line-option-for-tab-size.patch " Andrew Morton
2020-02-24 23:43 ` + lib-test_bitmap-make-use-of-exp2_in_bits.patch " Andrew Morton
2020-02-24 23:44 ` + ocfs2-remove-useless-err.patch " Andrew Morton
2020-02-25  0:26 ` + arch-kconfig-update-have_reliable_stacktrace-description.patch " Andrew Morton
2020-02-25  0:32 ` + mm-memcg-slab-introduce-mem_cgroup_from_obj.patch " Andrew Morton
2020-02-25  0:47 ` + mm-kmem-cleanup-__memcg_kmem_charge_memcg-arguments.patch " Andrew Morton
2020-02-25  0:48 ` + mm-kmem-cleanup-memcg_kmem_uncharge_memcg-arguments.patch " Andrew Morton
2020-02-25  0:48 ` + mm-kmem-rename-memcg_kmem_uncharge-into-memcg_kmem_uncharge_page.patch " Andrew Morton
2020-02-25  0:48 ` + mm-kmem-switch-to-nr_pages-in-__memcg_kmem_charge_memcg.patch " Andrew Morton
2020-02-25  0:48 ` + mm-memcg-slab-cache-page-number-in-memcg_uncharge_slab.patch " Andrew Morton
2020-02-25  0:48 ` + mm-kmem-rename-__memcg_kmem_uncharge_memcg-to-__memcg_kmem_uncharge.patch " Andrew Morton
2020-02-25  2:29 ` + mm-memcg-slab-introduce-mem_cgroup_from_obj-v2.patch " Andrew Morton
2020-02-25  2:36 ` + ocfs2-add-missing-annotations-for-ocfs2_refcount_cache_lock-and-ocfs2_refcount_cache_unlock.patch " Andrew Morton
2020-02-25  3:53 ` mmotm 2020-02-24-19-53 uploaded Andrew Morton
2020-02-26  1:06 ` + checkpatch-improve-gerrit-change-id-test.patch added to -mm tree Andrew Morton
2020-02-26  1:55 ` + dma-buf-free-dmabuf-name-in-dma_buf_release.patch " Andrew Morton
     [not found]   ` <CAO_48GFr9-aY4=kRqWB=UkEzPj5fQDip+G1tNZMsT0XoQpBC7Q@mail.gmail.com>
     [not found]     ` <CAKMK7uGvixQ2xoQMt3pvt0OpNXDjDGTvSWsaAppsKrmO_EP3Kg@mail.gmail.com>
2020-02-27  4:20       ` Andrew Morton
2020-02-26  3:42 ` + lib-rbtree-fix-coding-style-of-assignments.patch " Andrew Morton
2020-02-26  3:56 ` + seq_read-info-message-about-buggy-next-functions.patch " Andrew Morton
     [not found]   ` <1583173259.7365.142.camel@lca.pw>
     [not found]     ` <1583177508.7365.144.camel@lca.pw>
2020-03-02 20:42       ` Andrew Morton
2020-02-26  3:56 ` + pstore_ftrace_seq_next-should-increase-position-index.patch " Andrew Morton
     [not found]   ` <07f968e6-02cd-de2a-e868-787e4bedd346@virtuozzo.com>
2020-02-27  4:26     ` Andrew Morton
2020-02-26  3:56 ` + gcov_seq_next-should-increase-position-index.patch " Andrew Morton
2020-02-26  3:56 ` + sysvipc_find_ipc-should-increase-position-index.patch " Andrew Morton
2020-02-27  1:19 ` + mm-bring-sparc-pte_index-semantics-inline-with-other-platforms.patch " Andrew Morton
2020-02-27  1:37 ` + mm-vmscan-fix-data-races-at-kswapd_classzone_idx.patch " Andrew Morton
2020-02-27  1:49 ` + fs-epoll-make-nesting-accounting-safe-for-rt-kernel.patch " Andrew Morton
2020-02-27  3:50 ` + lib-optimize-cpumask_local_spread.patch " Andrew Morton
2020-02-27  4:04 ` + mm-debug-add-tests-validating-architecture-page-table-helpers-fix.patch " Andrew Morton
2020-02-27  4:11 ` + gcov-gcc_4_7-replace-zero-length-array-with-flexible-array-member.patch " Andrew Morton
2020-02-27  4:42 ` + mm-debug-add-tests-validating-architecture-page-table-helpers-fix-2.patch " Andrew Morton
2020-02-27  4:42 ` [to-be-updated] mm-debug-add-tests-validating-architecture-page-table-helpers-fix.patch removed from " Andrew Morton
2020-02-27  4:44 ` + huge-tmpfs-try-to-split_huge_page-when-punching-hole.patch added to " Andrew Morton

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save this message as an mbox file, import it into your mail client,
  and reply-to-all from there.

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20200211231918.a2V8kHC_j%akpm@linux-foundation.org \
    --to=akpm@linux-foundation.org \
    --cc=almasrymina@google.com \
    --cc=gthelen@google.com \
    --cc=linux-kernel@vger.kernel.org \
    --cc=mike.kravetz@oracle.com \
    --cc=mm-commits@vger.kernel.org \
    --cc=rientjes@google.com \
    --cc=sandipan@linux.ibm.com \
    --cc=shakeelb@google.com \
    --cc=shuah@kernel.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try opening this message's mailto: link.

Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox; see the mirroring instructions
for how to clone and mirror all data and code used for this inbox,
as well as URLs for NNTP newsgroup(s).