* [PATCH 0/2] THP MMU gather
@ 2015-11-19 13:00 Andrea Arcangeli
  2015-11-19 13:00 ` [PATCH 1/2] mm: thp: introduce thp_mmu_gather to pin tail pages during " Andrea Arcangeli
  2015-11-19 13:00 ` [PATCH 2/2] mm: thp: put_huge_zero_page() with " Andrea Arcangeli
  0 siblings, 2 replies; 9+ messages in thread
From: Andrea Arcangeli @ 2015-11-19 13:00 UTC (permalink / raw)
  To: linux-mm, Andrew Morton
  Cc: \"Kirill A. Shutemov\",
	Mel Gorman, Hugh Dickins, Johannes Weiner, Dave Hansen,
	Vlastimil Babka

Hello,

there are two SMP race conditions that need fixing, but they have no
practical side effects... it's all theoretical.

I'll be offline until Monday but I wanted to push this out now so it
can be reviewed sooner rather than later.

Thanks,
Andrea

Andrea Arcangeli (2):
  mm: thp: introduce thp_mmu_gather to pin tail pages during MMU gather
  mm: thp: put_huge_zero_page() with MMU gather

 Documentation/vm/transhuge.txt |  60 ++++++++++++++++++
 include/linux/huge_mm.h        |  85 ++++++++++++++++++++++++++
 include/linux/mm_types.h       |   1 +
 mm/huge_memory.c               |  39 ++++++++++--
 mm/page_alloc.c                |  14 +++++
 mm/swap.c                      | 134 +++++++++++++++++++++++++++++++++++------
 mm/swap_state.c                |  17 ++++--
 7 files changed, 320 insertions(+), 30 deletions(-)


* [PATCH 1/2] mm: thp: introduce thp_mmu_gather to pin tail pages during MMU gather
  2015-11-19 13:00 [PATCH 0/2] THP MMU gather Andrea Arcangeli
@ 2015-11-19 13:00 ` Andrea Arcangeli
  2015-11-20  0:22   ` Andrew Morton
  2015-12-07  9:30   ` Aneesh Kumar K.V
  2015-11-19 13:00 ` [PATCH 2/2] mm: thp: put_huge_zero_page() with " Andrea Arcangeli
  1 sibling, 2 replies; 9+ messages in thread
From: Andrea Arcangeli @ 2015-11-19 13:00 UTC (permalink / raw)
  To: linux-mm, Andrew Morton
  Cc: \"Kirill A. Shutemov\",
	Mel Gorman, Hugh Dickins, Johannes Weiner, Dave Hansen,
	Vlastimil Babka

This theoretical SMP race condition was found with source review. No
real life app could be affected as the result of freeing memory while
accessing it is either undefined or it's a workload that produces no
information.

For something to go wrong because the SMP race condition triggered,
it'd require a further tiny window within the SMP race condition
window. So nothing bad is happening in practice even if the SMP race
condition triggers. It's still better to apply the fix to have the
math guarantee.

The fix just adds a thp_mmu_gather atomic_t counter to the THP pages,
so split_huge_page can elevate the tail page count accordingly and
leave the tail page freeing task to whoever elevated thp_mmu_gather.

Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
---
 Documentation/vm/transhuge.txt |  60 +++++++++++++++++++
 include/linux/huge_mm.h        |  72 +++++++++++++++++++++++
 include/linux/mm_types.h       |   1 +
 mm/huge_memory.c               |  33 ++++++++++-
 mm/page_alloc.c                |  14 +++++
 mm/swap.c                      | 130 ++++++++++++++++++++++++++++++++++-------
 mm/swap_state.c                |  17 ++++--
 7 files changed, 300 insertions(+), 27 deletions(-)

diff --git a/Documentation/vm/transhuge.txt b/Documentation/vm/transhuge.txt
index 8a28268..8851d28 100644
--- a/Documentation/vm/transhuge.txt
+++ b/Documentation/vm/transhuge.txt
@@ -395,3 +395,63 @@ tail page refcount. To obtain a head page reliably and to decrease its
 refcount without race conditions, put_page has to serialize against
 __split_huge_page_refcount using a special per-page lock called
 compound_lock.
+
+== THP MMU gather vs split_huge_page ==
+
+After page_remove_rmap() runs (inside a PT/pmd lock protected critical
+section) the page_mapcount() of the transparent hugepage is
+decreased. After the PT/pmd lock is released, the page_count()
+refcount left on the PageTransHuge(page) pins the entire THP page if
+__split_huge_page_refcount() didn't run yet, but it only pins the head
+page if it has already run.
+
+If we look at the problem purely in terms of refcounts, this is
+correct: if put_page() runs on a PageTransHuge() page after the PT/pmd
+lock has been dropped, it still won't need to serialize against
+__split_huge_page_refcount() and it will get the refcounting right
+whether or not __split_huge_page_refcount() is running under it. This
+is because the head page "page_count" refcount stays local to the head
+page even during __split_huge_page_refcount(). Only tail pages have to
+go through the trouble of using the compound_lock to serialize inside
+put_page().
+
+However, special care is needed if the TLB isn't flushed before
+dropping the PT/pmd lock, because after page_remove_rmap() and
+immediately after the PT/pmd lock is released,
+__split_huge_page_refcount could run and free all the tail pages.
+
+To keep the tail pages temporarily pinned if we didn't flush the TLB
+in the aforementioned case, we use a page[1].thp_mmu_gather atomic
+counter that is increased before releasing the PT/pmd lock (PT/pmd
+lock serializes against __split_huge_page_refcount()) and decreased by
+the actual page freeing while holding the lru_lock (which also
+serializes against __split_huge_page_refcount()). This thp_mmu_gather
+counter has the effect of pinning all tail pages until after the TLB
+has been flushed. Then just before freeing the page the thp_mmu_gather
+counter is decreased if we detect that __split_huge_page_refcount()
+didn't run yet.
+
+__split_huge_page_refcount() will treat the thp_mmu_gather more or
+less like the mapcount: the thp_mmu_gather taken on the PageTransHuge
+is transferred as an addition to the "page_count" to all the tail
+pages (but not to the mapcount because the tails aren't mapped
+anymore).
+
+If the actual page freeing during MMU gather processing (so always
+happening after the TLB flush) finds that __split_huge_page_refcount()
+has already run on the PageTransHuge page, before it could atomically
+decrease the thp_mmu_gather for the current MMU gather, it will simply
+put_page all the HPAGE_PMD_NR pages (the head and all the tails). If
+thp_mmu_gather hadn't been increased, the tails wouldn't have needed
+to be freed explicitly.
+
+The page[1].thp_mmu_gather field must be initialized whenever a THP is
+first established before dropping the PT/pmd lock, so generally in the
+same critical section as page_add_new_anon_rmap() and on the same
+page passed as argument to page_add_new_anon_rmap(). It can only be
+initialized on PageTransHuge() pages as it is aliased to
+page[1].index.
+
+The page[1].thp_mmu_gather has to be increased before dropping the
+PT/pmd lock if the critical section ran page_remove_rmap() and the TLB
+hasn't been flushed before dropping the PT/pmd lock.
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index ecb080d..0d8ef7d 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -164,6 +164,61 @@ static inline bool is_huge_zero_pmd(pmd_t pmd)
 
 struct page *get_huge_zero_page(void);
 
+static inline bool is_trans_huge_page_release(struct page *page)
+{
+	return (unsigned long) page & 1;
+}
+
+static inline struct page *trans_huge_page_release_decode(struct page *page)
+{
+	return (struct page *) ((unsigned long)page & ~1UL);
+}
+
+static inline struct page *trans_huge_page_release_encode(struct page *page)
+{
+	return (struct page *) ((unsigned long)page | 1UL);
+}
+
+static inline atomic_t *__trans_huge_mmu_gather_count(struct page *page)
+{
+	return &(page + 1)->thp_mmu_gather;
+}
+
+static inline void init_trans_huge_mmu_gather_count(struct page *page)
+{
+	atomic_t *thp_mmu_gather = __trans_huge_mmu_gather_count(page);
+	atomic_set(thp_mmu_gather, 0);
+}
+
+static inline void inc_trans_huge_mmu_gather_count(struct page *page)
+{
+	atomic_t *thp_mmu_gather = __trans_huge_mmu_gather_count(page);
+	VM_BUG_ON(atomic_read(thp_mmu_gather) < 0);
+	atomic_inc(thp_mmu_gather);
+}
+
+static inline void dec_trans_huge_mmu_gather_count(struct page *page)
+{
+	atomic_t *thp_mmu_gather = __trans_huge_mmu_gather_count(page);
+	VM_BUG_ON(atomic_read(thp_mmu_gather) <= 0);
+	atomic_dec(thp_mmu_gather);
+}
+
+static inline int trans_huge_mmu_gather_count(struct page *page)
+{
+	atomic_t *thp_mmu_gather = __trans_huge_mmu_gather_count(page);
+	int ret = atomic_read(thp_mmu_gather);
+	VM_BUG_ON(ret < 0);
+	return ret;
+}
+
+/*
+ * free_trans_huge_page_list() is used to free the pages returned by
+ * trans_huge_page_release() (if still PageTransHuge()) in
+ * release_pages().
+ */
+extern void free_trans_huge_page_list(struct list_head *list);
+
 #else /* CONFIG_TRANSPARENT_HUGEPAGE */
 #define HPAGE_PMD_SHIFT ({ BUILD_BUG(); 0; })
 #define HPAGE_PMD_MASK ({ BUILD_BUG(); 0; })
@@ -218,6 +273,23 @@ static inline bool is_huge_zero_page(struct page *page)
 	return false;
 }
 
+static inline bool is_trans_huge_page_release(struct page *page)
+{
+	return false;
+}
+
+static inline struct page *trans_huge_page_release_encode(struct page *page)
+{
+	return page;
+}
+
+static inline struct page *trans_huge_page_release_decode(struct page *page)
+{
+	return page;
+}
+
+extern void dec_trans_huge_mmu_gather_count(struct page *page);
+
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
 #endif /* _LINUX_HUGE_MM_H */
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index f8d1492..baedb35 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -61,6 +61,7 @@ struct page {
 		union {
 			pgoff_t index;		/* Our offset within mapping. */
 			void *freelist;		/* sl[aou]b first free object */
+			atomic_t thp_mmu_gather; /* in first tailpage of THP */
 		};
 
 		union {
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 4ca884e..e85027c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -768,6 +768,7 @@ static int __do_huge_pmd_anonymous_page(struct mm_struct *mm,
 			return ret;
 		}
 
+		init_trans_huge_mmu_gather_count(page);
 		entry = mk_huge_pmd(page, vma->vm_page_prot);
 		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
 		page_add_new_anon_rmap(page, vma, haddr);
@@ -1241,6 +1242,7 @@ alloc:
 		goto out_mn;
 	} else {
 		pmd_t entry;
+		init_trans_huge_mmu_gather_count(new_page);
 		entry = mk_huge_pmd(new_page, vma->vm_page_prot);
 		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
 		pmdp_huge_clear_flush_notify(vma, haddr, pmd);
@@ -1488,8 +1490,20 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		VM_BUG_ON_PAGE(!PageHead(page), page);
 		pte_free(tlb->mm, pgtable_trans_huge_withdraw(tlb->mm, pmd));
 		atomic_long_dec(&tlb->mm->nr_ptes);
+		/*
+		 * page_remove_rmap() already decreased the
+		 * page_mapcount(), so tail pages can be instantly
+		 * freed after we release the pmd lock. Increase the
+		 * mmu_gather_count to prevent the tail pages from
+		 * being freed, even if the THP page gets split.
+		 * __split_huge_page_refcount() will then see that
+		 * we're in the middle of a mmu gather and it'll add
+		 * the compound mmu_gather_count to every tail
+		 * page's page_count().
+		 */
+		inc_trans_huge_mmu_gather_count(page);
 		spin_unlock(ptl);
-		tlb_remove_page(tlb, page);
+		tlb_remove_page(tlb, trans_huge_page_release_encode(page));
 	}
 	return 1;
 }
@@ -1709,11 +1723,22 @@ static void __split_huge_page_refcount(struct page *page,
 	struct zone *zone = page_zone(page);
 	struct lruvec *lruvec;
 	int tail_count = 0;
+	int mmu_gather_count;
 
 	/* prevent PageLRU to go away from under us, and freeze lru stats */
 	spin_lock_irq(&zone->lru_lock);
 	lruvec = mem_cgroup_page_lruvec(page, zone);
 
+	/*
+	 * No mmu_gather_count increase can happen anymore because
+	 * here all pmds are already pmd_trans_splitting(). No
+	 * decrease can happen either because it's only decreased
+	 * while holding the lru_lock. So here the mmu_gather_count is
+	 * already stable so store it on the stack. Then it'll be
+	 * overwritten when the page_tail->index is initialized.
+	 */
+	mmu_gather_count = trans_huge_mmu_gather_count(page);
+
 	compound_lock(page);
 	/* complete memcg works before add pages to LRU */
 	mem_cgroup_split_huge_fixup(page);
@@ -1740,8 +1765,8 @@ static void __split_huge_page_refcount(struct page *page,
 		 * atomic_set() here would be safe on all archs (and
 		 * not only on x86), it's safer to use atomic_add().
 		 */
-		atomic_add(page_mapcount(page) + page_mapcount(page_tail) + 1,
-			   &page_tail->_count);
+		atomic_add(page_mapcount(page) + page_mapcount(page_tail) +
+			   mmu_gather_count + 1, &page_tail->_count);
 
 		/* after clearing PageTail the gup refcount can be released */
 		smp_mb__after_atomic();
@@ -2617,6 +2642,8 @@ static void collapse_huge_page(struct mm_struct *mm,
 	 */
 	smp_wmb();
 
+	init_trans_huge_mmu_gather_count(new_page);
+
 	spin_lock(pmd_ptl);
 	BUG_ON(!pmd_none(*pmd));
 	page_add_new_anon_rmap(new_page, vma, address);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4272d95..aeef65f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2087,6 +2087,20 @@ void free_hot_cold_page_list(struct list_head *list, bool cold)
 	}
 }
 
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+void free_trans_huge_page_list(struct list_head *list)
+{
+	struct page *page, *next;
+
+	/*
+	 * THP pages always use the default destructor, so call it
+	 * directly and skip the destructor function pointer.
+	 */
+	list_for_each_entry_safe(page, next, list, lru)
+		__free_pages_ok(page, HPAGE_PMD_ORDER);
+}
+#endif
+
 /*
  * split_page takes a non-compound higher-order page, and splits it into
  * n (1<<order) sub-pages: page[0..n]
diff --git a/mm/swap.c b/mm/swap.c
index 39395fb..16d01a1 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -897,6 +897,38 @@ void lru_add_drain_all(void)
 	mutex_unlock(&lock);
 }
 
+static inline struct zone *zone_lru_lock(struct zone *zone,
+					 struct page *page,
+					 unsigned int *lock_batch,
+					 unsigned long *_flags)
+{
+	struct zone *pagezone = page_zone(page);
+
+	if (pagezone != zone) {
+		unsigned long flags = *_flags;
+
+		if (zone)
+			spin_unlock_irqrestore(&zone->lru_lock, flags);
+		*lock_batch = 0;
+		zone = pagezone;
+		spin_lock_irqsave(&zone->lru_lock, flags);
+
+		*_flags = flags;
+	}
+
+	return zone;
+}
+
+static inline struct zone *zone_lru_unlock(struct zone *zone,
+					   unsigned long flags)
+{
+	if (zone) {
+		spin_unlock_irqrestore(&zone->lru_lock, flags);
+		zone = NULL;
+	}
+	return zone;
+}
+
 /**
  * release_pages - batched page_cache_release()
  * @pages: array of pages to release
@@ -910,6 +942,7 @@ void release_pages(struct page **pages, int nr, bool cold)
 {
 	int i;
 	LIST_HEAD(pages_to_free);
+	LIST_HEAD(trans_huge_pages_to_free);
 	struct zone *zone = NULL;
 	struct lruvec *lruvec;
 	unsigned long uninitialized_var(flags);
@@ -917,12 +950,10 @@ void release_pages(struct page **pages, int nr, bool cold)
 
 	for (i = 0; i < nr; i++) {
 		struct page *page = pages[i];
+		const bool was_thp = is_trans_huge_page_release(page);
 
-		if (unlikely(PageCompound(page))) {
-			if (zone) {
-				spin_unlock_irqrestore(&zone->lru_lock, flags);
-				zone = NULL;
-			}
+		if (unlikely(!was_thp && PageCompound(page))) {
+			zone = zone_lru_unlock(zone, flags);
 			put_compound_page(page);
 			continue;
 		}
@@ -937,20 +968,65 @@ void release_pages(struct page **pages, int nr, bool cold)
 			zone = NULL;
 		}
 
+		if (was_thp) {
+			page = trans_huge_page_release_decode(page);
+			zone = zone_lru_lock(zone, page, &lock_batch, &flags);
+			/*
+			 * Here, after taking the lru_lock,
+			 * __split_huge_page_refcount() can't run
+			 * anymore from under us and in turn
+			 * PageTransHuge() retval is stable and can't
+			 * change anymore.
+			 *
+			 * PageTransHuge() has a helpful
+			 * VM_BUG_ON_PAGE() internally to enforce that
+			 * the page cannot be a tail here.
+			 */
+			if (unlikely(!PageTransHuge(page))) {
+				int idx;
+
+				/*
+				 * The THP page was split before we
+				 * could free it; in turn its tails
+				 * kept an elevated count because the
+				 * mmu_gather_count was transferred to
+				 * the tail page count during the
+				 * split.
+				 *
+				 * This is a very unlikely slow path,
+				 * performance is irrelevant here,
+				 * so keep it as simple as possible.
+				 */
+				zone = zone_lru_unlock(zone, flags);
+
+				for (idx = 0; idx < HPAGE_PMD_NR;
+				     idx++, page++) {
+					VM_BUG_ON(PageTransCompound(page));
+					put_page(page);
+				}
+				continue;
+			} else
+				/*
+				 * __split_huge_page_refcount() cannot
+				 * run from under us, so we can
+				 * release the reference we had on the
+				 * mmu_gather_count as we don't care
+				 * anymore if the page gets split
+				 * or not. By now the TLB flush
+				 * already happened for this mapping,
+				 * so we don't need to prevent the
+				 * tails from being freed anymore.
+				 */
+				dec_trans_huge_mmu_gather_count(page);
+		}
+
 		if (!put_page_testzero(page))
 			continue;
 
 		if (PageLRU(page)) {
-			struct zone *pagezone = page_zone(page);
-
-			if (pagezone != zone) {
-				if (zone)
-					spin_unlock_irqrestore(&zone->lru_lock,
-									flags);
-				lock_batch = 0;
-				zone = pagezone;
-				spin_lock_irqsave(&zone->lru_lock, flags);
-			}
+			if (!was_thp)
+				zone = zone_lru_lock(zone, page, &lock_batch,
+						     &flags);
 
 			lruvec = mem_cgroup_page_lruvec(page, zone);
 			VM_BUG_ON_PAGE(!PageLRU(page), page);
@@ -958,16 +1034,30 @@ void release_pages(struct page **pages, int nr, bool cold)
 			del_page_from_lru_list(page, lruvec, page_off_lru(page));
 		}
 
-		/* Clear Active bit in case of parallel mark_page_accessed */
-		__ClearPageActive(page);
+		if (!was_thp) {
+			/*
+			 * Clear Active bit in case of parallel
+			 * mark_page_accessed.
+			 */
+			__ClearPageActive(page);
 
-		list_add(&page->lru, &pages_to_free);
+			list_add(&page->lru, &pages_to_free);
+		} else
+			list_add(&page->lru, &trans_huge_pages_to_free);
 	}
 	if (zone)
 		spin_unlock_irqrestore(&zone->lru_lock, flags);
 
-	mem_cgroup_uncharge_list(&pages_to_free);
-	free_hot_cold_page_list(&pages_to_free, cold);
+	if (!list_empty(&pages_to_free)) {
+		mem_cgroup_uncharge_list(&pages_to_free);
+		free_hot_cold_page_list(&pages_to_free, cold);
+	}
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	if (!list_empty(&trans_huge_pages_to_free)) {
+		mem_cgroup_uncharge_list(&trans_huge_pages_to_free);
+		free_trans_huge_page_list(&trans_huge_pages_to_free);
+	}
+#endif
 }
 EXPORT_SYMBOL(release_pages);
 
diff --git a/mm/swap_state.c b/mm/swap_state.c
index d504adb..386b69f 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -247,8 +247,13 @@ static inline void free_swap_cache(struct page *page)
  */
 void free_page_and_swap_cache(struct page *page)
 {
-	free_swap_cache(page);
-	page_cache_release(page);
+	if (!is_trans_huge_page_release(page)) {
+		free_swap_cache(page);
+		page_cache_release(page);
+	} else {
+		/* page might have to be decoded */
+		release_pages(&page, 1, false);
+	}
 }
 
 /*
@@ -261,8 +266,12 @@ void free_pages_and_swap_cache(struct page **pages, int nr)
 	int i;
 
 	lru_add_drain();
-	for (i = 0; i < nr; i++)
-		free_swap_cache(pagep[i]);
+	for (i = 0; i < nr; i++) {
+		struct page *page = pagep[i];
+		/* THP cannot be in swapcache and is also still encoded */
+		if (!is_trans_huge_page_release(page))
+			free_swap_cache(page);
+	}
 	release_pages(pagep, nr, false);
 }
 


* [PATCH 2/2] mm: thp: put_huge_zero_page() with MMU gather
  2015-11-19 13:00 [PATCH 0/2] THP MMU gather Andrea Arcangeli
  2015-11-19 13:00 ` [PATCH 1/2] mm: thp: introduce thp_mmu_gather to pin tail pages during " Andrea Arcangeli
@ 2015-11-19 13:00 ` Andrea Arcangeli
  1 sibling, 0 replies; 9+ messages in thread
From: Andrea Arcangeli @ 2015-11-19 13:00 UTC (permalink / raw)
  To: linux-mm, Andrew Morton
  Cc: \"Kirill A. Shutemov\",
	Mel Gorman, Hugh Dickins, Johannes Weiner, Dave Hansen,
	Vlastimil Babka

This theoretical SMP race condition was found by code review and it
seems impossible to reproduce, as the shrinker would need to run
exactly at the right time and the window is too small.

Because put_huge_zero_page() can actually lead to the almost
immediate freeing of the THP zero page, it must be run in the MMU
gather and not before the TLB flush, as happens right now.

After a complete THP TLB flushing review that followed the
development of the thp_mmu_gather fix, this bug was quickly found too,
but it's a fully orthogonal problem. The bug was inherited by sharing
the TLB flushing model of the non-huge zero page (which is static)
with the huge zero page (which is dynamic).

Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
---
 include/linux/huge_mm.h | 13 +++++++++++++
 mm/huge_memory.c        |  6 +++---
 mm/swap.c               |  4 ++++
 3 files changed, 20 insertions(+), 3 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 0d8ef7d..f7ae08f 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -35,6 +35,7 @@ extern int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 			int prot_numa);
 int vmf_insert_pfn_pmd(struct vm_area_struct *, unsigned long addr, pmd_t *,
 			unsigned long pfn, bool write);
+extern void put_huge_zero_page(void);
 
 enum transparent_hugepage_flag {
 	TRANSPARENT_HUGEPAGE_FLAG,
@@ -169,6 +170,11 @@ static inline bool is_trans_huge_page_release(struct page *page)
 	return (unsigned long) page & 1;
 }
 
+static inline bool is_huge_zero_page_release(struct page *page)
+{
+	return (unsigned long) page == ~0UL;
+}
+
 static inline struct page *trans_huge_page_release_decode(struct page *page)
 {
 	return (struct page *) ((unsigned long)page & ~1UL);
@@ -179,6 +185,12 @@ static inline struct page *trans_huge_page_release_encode(struct page *page)
 	return (struct page *) ((unsigned long)page | 1UL);
 }
 
+static inline struct page *huge_zero_page_release_encode(void)
+{
+	/* NOTE: is_trans_huge_page_release() must return true */
+	return (struct page *) (~0UL);
+}
+
 static inline atomic_t *__trans_huge_mmu_gather_count(struct page *page)
 {
 	return &(page + 1)->thp_mmu_gather;
@@ -289,6 +301,7 @@ static inline struct page *trans_huge_page_release_decode(struct page *page)
 }
 
 extern void dec_trans_huge_mmu_gather_count(struct page *page);
+extern bool is_huge_zero_page_release(struct page *page);
 
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e85027c..de7be83 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -200,7 +200,7 @@ retry:
 	return READ_ONCE(huge_zero_page);
 }
 
-static void put_huge_zero_page(void)
+void put_huge_zero_page(void)
 {
 	/*
 	 * Counter should never go to zero here. Only shrinker can put
@@ -1476,12 +1476,12 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	if (vma_is_dax(vma)) {
 		spin_unlock(ptl);
 		if (is_huge_zero_pmd(orig_pmd))
-			put_huge_zero_page();
+			tlb_remove_page(tlb, huge_zero_page_release_encode());
 	} else if (is_huge_zero_pmd(orig_pmd)) {
 		pte_free(tlb->mm, pgtable_trans_huge_withdraw(tlb->mm, pmd));
 		atomic_long_dec(&tlb->mm->nr_ptes);
 		spin_unlock(ptl);
-		put_huge_zero_page();
+		tlb_remove_page(tlb, huge_zero_page_release_encode());
 	} else {
 		struct page *page = pmd_page(orig_pmd);
 		page_remove_rmap(page);
diff --git a/mm/swap.c b/mm/swap.c
index 16d01a1..37f9857 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -969,6 +969,10 @@ void release_pages(struct page **pages, int nr, bool cold)
 		}
 
 		if (was_thp) {
+			if (is_huge_zero_page_release(page)) {
+				put_huge_zero_page();
+				continue;
+			}
 			page = trans_huge_page_release_decode(page);
 			zone = zone_lru_lock(zone, page, &lock_batch, &flags);
 			/*


* Re: [PATCH 1/2] mm: thp: introduce thp_mmu_gather to pin tail pages during MMU gather
  2015-11-19 13:00 ` [PATCH 1/2] mm: thp: introduce thp_mmu_gather to pin tail pages during " Andrea Arcangeli
@ 2015-11-20  0:22   ` Andrew Morton
  2015-11-23 16:03     ` Andrea Arcangeli
  2015-12-07  9:30   ` Aneesh Kumar K.V
  1 sibling, 1 reply; 9+ messages in thread
From: Andrew Morton @ 2015-11-20  0:22 UTC (permalink / raw)
  To: Andrea Arcangeli
  Cc: linux-mm, \"Kirill A. Shutemov\",
	Mel Gorman, Hugh Dickins, Johannes Weiner, Dave Hansen,
	Vlastimil Babka

On Thu, 19 Nov 2015 14:00:51 +0100 Andrea Arcangeli <aarcange@redhat.com> wrote:

> This theoretical SMP race condition was found with source review. No
> real life app could be affected as the result of freeing memory while
> accessing it is either undefined or it's a workload that produces no
> information.
> 
> For something to go wrong because the SMP race condition triggered,
> it'd require a further tiny window within the SMP race condition
> window. So nothing bad is happening in practice even if the SMP race
> condition triggers. It's still better to apply the fix to have the
> math guarantee.
> 
> The fix just adds a thp_mmu_gather atomic_t counter to the THP pages,
> so split_huge_page can elevate the tail page count accordingly and
> leave the tail page freeing task to whoever elevated thp_mmu_gather.
> 

This is a pretty nasty patch :( We now have random page*'s with bit 0
set floating around in mmu_gather.__pages[].  It assumes/requires that
nobody uses those pages until they hit release_pages().  And the tlb
flushing code is pretty twisty, with various Kconfig and arch dependent
handlers.

Is there no nicer way?

> +/*
> + * free_trans_huge_page_list() is used to free the pages returned by
> + * trans_huge_page_release() (if still PageTransHuge()) in
> + * release_pages().
> + */

There is no function trans_huge_page_release().

> +extern void free_trans_huge_page_list(struct list_head *list);


* Re: [PATCH 1/2] mm: thp: introduce thp_mmu_gather to pin tail pages during MMU gather
  2015-11-20  0:22   ` Andrew Morton
@ 2015-11-23 16:03     ` Andrea Arcangeli
  2015-12-05  8:24       ` Aneesh Kumar K.V
  0 siblings, 1 reply; 9+ messages in thread
From: Andrea Arcangeli @ 2015-11-23 16:03 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, \"Kirill A. Shutemov\",
	Mel Gorman, Hugh Dickins, Johannes Weiner, Dave Hansen,
	Vlastimil Babka

On Thu, Nov 19, 2015 at 04:22:55PM -0800, Andrew Morton wrote:
> On Thu, 19 Nov 2015 14:00:51 +0100 Andrea Arcangeli <aarcange@redhat.com> wrote:
> 
> > This theoretical SMP race condition was found with source review. No
> > real life app could be affected as the result of freeing memory while
> > accessing it is either undefined or it's a workload that produces no
> > information.
> > 
> > For something to go wrong because the SMP race condition triggered,
> > it'd require a further tiny window within the SMP race condition
> > window. So nothing bad is happening in practice even if the SMP race
> > condition triggers. It's still better to apply the fix to have the
> > math guarantee.
> > 
> > The fix just adds a thp_mmu_gather atomic_t counter to the THP pages,
> > so split_huge_page can elevate the tail page count accordingly and
> > leave the tail page freeing task to whoever elevated thp_mmu_gather.
> > 
> 
> This is a pretty nasty patch :( We now have random page*'s with bit 0
> set floating around in mmu_gather.__pages[].  It assumes/requires that

Yes, and bit 0 is only relevant for the mmu_gather structure and its
users.

> nobody uses those pages until they hit release_pages().  And the tlb
> flushing code is pretty twisty, with various Kconfig and arch dependent
> handlers.

I already reviewed all callers and the mmu_gather freeing path for all
archs to be sure they all take the two paths that I updated in order
to free the pages. They call free_page_and_swap_cache or
free_pages_and_swap_cache.

The freeing is using common code VM functions, and it shouldn't
improvise calling put_page() manually, the freeing has to take care of
collecting the swapcache if needed etc... it has to deal with VM
details the arch is not aware of.
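
To make this concrete, the generic gather free path looks roughly like
the sketch below (simplified from memory, not a literal quote of
mm/memory.c, and per-arch implementations differ in the details): the
arch flushes the TLB first, and only then do the batched pages reach
free_pages_and_swap_cache() and from there release_pages().

static void tlb_flush_mmu_free(struct mmu_gather *tlb)
{
	struct mmu_gather_batch *batch;

	/* runs only after the TLB flush part of tlb_flush_mmu() */
	for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
		/* ends up in release_pages(), which decodes bit 0 */
		free_pages_and_swap_cache(batch->pages, batch->nr);
		batch->nr = 0;
	}
	tlb->active = &tlb->local;
}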

> Is there no nicer way?

We can grow the size of the mmu_gather to keep track that the page was
THP before the pmd_lock was dropped, in a separate bit from the struct
page pointer, but it'll take more memory.

This bit 0 looks like a walk in the park compared to the bit 0 in
page->compound_head that was just introduced. The compound_head bit 0
isn't only visible to the mmu_gather users (which should never try to
mess with the page pointer themselves) and it collides with lru/next,
rcu_head.next users.

If you prefer me to enlarge the mmu_gather structure I can do that.
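
Purely to illustrate that alternative (every name below is invented
for this sketch and doesn't exist in the tree), it would look
something like a parallel bitmap next to the page pointer array, at
the cost of extra memory per batch:

/* hypothetical sketch, NOT what the patch does */
#define GATHER_BATCH_PAGES	512	/* made-up batch size */

struct mmu_gather_batch_alt {
	unsigned int	nr;
	struct page	*pages[GATHER_BATCH_PAGES];
	/* extra storage: one "was THP at zap time" bit per slot */
	DECLARE_BITMAP(was_thp, GATHER_BATCH_PAGES);
};

static inline void alt_remove_page(struct mmu_gather_batch_alt *batch,
				   struct page *page, bool was_thp)
{
	batch->pages[batch->nr] = page;
	if (was_thp)
		__set_bit(batch->nr, batch->was_thp);
	batch->nr++;
}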

1 bit of extra information needs to be extracted before dropping the
pmd_lock in zap_huge_pmd() and it has to be available in
release_pages(), to know if the tail pages need an explicit put_page
or not. It's basically a bit in the local stack, except it's not in
the local stack because it is an array of pages, so it needs many
bits and it's stored in the mmu_gather along with the page.

Aside from the implementation of bit 0, I can't think of a simpler
design that provides for the same performance and low locking overhead
(this patch actually looks like an optimization when it's not helping
to prevent the race, because to fix the race I had to reduce the
number of times the lru_lock is taken in release_pages).
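
To make the flow concrete, here is a rough condensation of what patch
1/2 does on the two sides (an illustrative sketch, not literal code):

/* zap side, in zap_huge_pmd(), under the pmd lock, TLB not flushed yet */
inc_trans_huge_mmu_gather_count(page);
spin_unlock(ptl);
tlb_remove_page(tlb, trans_huge_page_release_encode(page)); /* sets bit 0 */

/* release side, in release_pages(), after the TLB flush */
if (is_trans_huge_page_release(page)) {
	page = trans_huge_page_release_decode(page);	/* clears bit 0 */
	/* the lru_lock is taken here, so PageTransHuge() is stable */
	if (PageTransHuge(page)) {
		dec_trans_huge_mmu_gather_count(page);
	} else {
		/*
		 * The split already ran and transferred our pin to the
		 * tails (the real code drops the lru_lock before this).
		 */
		int i;
		for (i = 0; i < HPAGE_PMD_NR; i++, page++)
			put_page(page);
	}
}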

> > +/*
> > + * free_trans_huge_page_list() is used to free the pages returned by
> > + * trans_huge_page_release() (if still PageTransHuge()) in
> > + * release_pages().
> > + */
> 
> There is no function trans_huge_page_release().

Oops I updated the function name but not the comment... thanks!
Andrea

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index f7ae08f..2810322 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -225,9 +225,8 @@ static inline int trans_huge_mmu_gather_count(struct page *page)
 }
 
 /*
- * free_trans_huge_page_list() is used to free the pages returned by
- * trans_huge_page_release() (if still PageTransHuge()) in
- * release_pages().
+ * free_trans_huge_page_list() is used to free THP pages (if still
+ * PageTransHuge()) in release_pages().
  */
 extern void free_trans_huge_page_list(struct list_head *list);
 


* Re: [PATCH 1/2] mm: thp: introduce thp_mmu_gather to pin tail pages during MMU gather
  2015-11-23 16:03     ` Andrea Arcangeli
@ 2015-12-05  8:24       ` Aneesh Kumar K.V
  2015-12-07 14:44         ` Andrea Arcangeli
  0 siblings, 1 reply; 9+ messages in thread
From: Aneesh Kumar K.V @ 2015-12-05  8:24 UTC (permalink / raw)
  To: Andrea Arcangeli, Andrew Morton
  Cc: linux-mm, \"Kirill A. Shutemov\",
	Mel Gorman, Hugh Dickins, Johannes Weiner, Dave Hansen,
	Vlastimil Babka

Andrea Arcangeli <aarcange@redhat.com> writes:

> On Thu, Nov 19, 2015 at 04:22:55PM -0800, Andrew Morton wrote:
>> On Thu, 19 Nov 2015 14:00:51 +0100 Andrea Arcangeli <aarcange@redhat.com> wrote:
>> 
>> > This theoretical SMP race condition was found with source review. No
>> > real life app could be affected as the result of freeing memory while
>> > accessing it is either undefined or it's a workload that produces no
>> > information.
>> > 
>> > For something to go wrong because the SMP race condition triggered,
>> > it'd require a further tiny window within the SMP race condition
>> > window. So nothing bad is happening in practice even if the SMP race
>> > condition triggers. It's still better to apply the fix to have the
>> > math guarantee.
>> > 
>> > The fix just adds a thp_mmu_gather atomic_t counter to the THP pages,
>> > so split_huge_page can elevate the tail page count accordingly and
>> > leave the tail page freeing task to whoever elevated thp_mmu_gather.
>> > 
>> 
>> This is a pretty nasty patch :( We now have random page*'s with bit 0
>> set floating around in mmu_gather.__pages[].  It assumes/requires that
>
> Yes, and bit 0 is only relevant for the mmu_gather structure and its
> users.
>
>> nobody uses those pages until they hit release_pages().  And the tlb
>> flushing code is pretty twisty, with various Kconfig and arch dependent
>> handlers.
>
> I already reviewed all callers and the mmu_gather freeing path for all
> archs to be sure they all take the two paths that I updated in order
> to free the pages. They call free_page_and_swap_cache or
> free_pages_and_swap_cache.
>
> The freeing is using common code VM functions, and it shouldn't
> improvise calling put_page() manually, the freeing has to take care of
> collecting the swapcache if needed etc... it has to deal with VM
> details the arch is not aware of.


But it is still a lot of really complicated code.

>
>> Is there no nicer way?
>
> We can grow the size of the mmu_gather to keep track that the page was
> THP before the pmd_lock was dropped, in a separate bit from the struct
> page pointer, but it'll take more memory.
>
> This bit 0 looks like a walk in the park compared to the bit 0 in
> page->compound_head that was just introduced. The compound_head bit 0
> isn't only visible to the mmu_gather users (which should never try to
> mess with the page pointer themselves) and it collides with lru/next,
> rcu_head.next users.
>
> If you prefer me to enlarge the mmu_gather structure I can do that.
>

If we can update mmu_gather to track the page size of the pages, that
will also help some archs to better implement tlb_flush(struct
mmu_gather *). Right now arch/powerpc/mm/tlb_nohash.c does flush the tlb
mapping for the entire mm_struct. 

We can also make sure that we do a force flush when we are trying to
gather pages of a different size. So one instance of mmu_gather will
end up gathering pages of a specific size only?
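
Something along these lines, purely as an illustration of the idea
(the field and helper below are invented for this sketch, they are not
an existing API):

/* hypothetical: let one mmu_gather instance batch a single page size */
struct mmu_gather_sized {
	struct mmu_gather	tlb;		/* from asm/tlb.h */
	unsigned int		page_size;	/* 0 = not decided yet */
};

static inline void tlb_track_page_size(struct mmu_gather_sized *g,
				       unsigned int page_size)
{
	if (g->page_size && g->page_size != page_size)
		tlb_flush_mmu(&g->tlb);	/* force a flush on size change */
	g->page_size = page_size;
}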


> 1 bit of extra information needs to be extracted before dropping the
> pmd_lock in zap_huge_pmd() and it has to be available in
> release_pages(), to know if the tail pages need an explicit put_page
> or not. It's basically a bit in the local stack, except it's not in
> the local stack because it is an array of pages, so it needs many
> bits and it's stored in the mmu_gather along with the page.
>
> Aside from the implementation of bit 0, I can't think of a simpler
> design that provides for the same performance and low locking overhead
> (this patch actually looks like an optimization when it's not helping
> to prevent the race, because to fix the race I had to reduce the
> number of times the lru_lock is taken in release_pages).
>
>> > +/*
>> > + * free_trans_huge_page_list() is used to free the pages returned by
>> > + * trans_huge_page_release() (if still PageTransHuge()) in
>> > + * release_pages().
>> > + */
>> 
>> There is no function trans_huge_page_release().
>
> Oops I updated the function name but not the comment... thanks!
> Andrea
>
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index f7ae08f..2810322 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -225,9 +225,8 @@ static inline int trans_huge_mmu_gather_count(struct page *page)
>  }
>  
>  /*
> - * free_trans_huge_page_list() is used to free the pages returned by
> - * trans_huge_page_release() (if still PageTransHuge()) in
> - * release_pages().
> + * free_trans_huge_page_list() is used to free THP pages (if still
> + * PageTransHuge()) in release_pages().
>   */
>  extern void free_trans_huge_page_list(struct list_head *list);
>  
>


* Re: [PATCH 1/2] mm: thp: introduce thp_mmu_gather to pin tail pages during MMU gather
  2015-11-19 13:00 ` [PATCH 1/2] mm: thp: introduce thp_mmu_gather to pin tail pages during " Andrea Arcangeli
  2015-11-20  0:22   ` Andrew Morton
@ 2015-12-07  9:30   ` Aneesh Kumar K.V
  2015-12-07 15:11     ` Andrea Arcangeli
  1 sibling, 1 reply; 9+ messages in thread
From: Aneesh Kumar K.V @ 2015-12-07  9:30 UTC (permalink / raw)
  To: Andrea Arcangeli, linux-mm, Andrew Morton
  Cc: \"Kirill A. Shutemov\",
	Mel Gorman, Hugh Dickins, Johannes Weiner, Dave Hansen,
	Vlastimil Babka

Andrea Arcangeli <aarcange@redhat.com> writes:

> This theoretical SMP race condition was found with source review. No
> real life app could be affected as the result of freeing memory while
> accessing it is either undefined or it's a workload that produces no
> information.
>
> For something to go wrong because the SMP race condition triggered,
> it'd require a further tiny window within the SMP race condition
> window. So nothing bad is happening in practice even if the SMP race
> condition triggers. It's still better to apply the fix to have the
> math guarantee.
>
> The fix just adds a thp_mmu_gather atomic_t counter to the THP pages,
> so split_huge_page can elevate the tail page count accordingly and
> leave the tail page freeing task to whoever elevated thp_mmu_gather.
>

Will this be a problem after
http://article.gmane.org/gmane.linux.kernel.mm/139631
"[PATCHv12 00/37] THP refcounting redesign"?

-aneesh


* Re: [PATCH 1/2] mm: thp: introduce thp_mmu_gather to pin tail pages during MMU gather
  2015-12-05  8:24       ` Aneesh Kumar K.V
@ 2015-12-07 14:44         ` Andrea Arcangeli
  0 siblings, 0 replies; 9+ messages in thread
From: Andrea Arcangeli @ 2015-12-07 14:44 UTC (permalink / raw)
  To: Aneesh Kumar K.V
  Cc: Andrew Morton, linux-mm, \"Kirill A. Shutemov\",
	Mel Gorman, Hugh Dickins, Johannes Weiner, Dave Hansen,
	Vlastimil Babka

On Sat, Dec 05, 2015 at 01:54:51PM +0530, Aneesh Kumar K.V wrote:
> If we can update mmu_gather to track the page size of the pages, that
> will also help some archs to better implement tlb_flush(struct
> mmu_gather *). Right now arch/powerpc/mm/tlb_nohash.c does flush the tlb
> mapping for the entire mm_struct. 
> 
> We can also make sure that we do a force flush when we are trying to
> gather pages of a different size. So one instance of mmu_gather will
> end up gathering pages of a specific size only?

Tracking the TLB flush of multiple page sizes won't bring down the
complexity of the fix though; in fact the multiple page sizes are
arch knowledge, so such an improvement would need to break the arch
API of the MMU gather.

THP is a common code abstraction, so the fix is self contained in
the common code, and it can't take more than one bit to encode the
flush size because THP supports only one page size.

To achieve multiple TLB flush sizes we could use an array of
unsigned long long physaddrs where the bits below PAGE_SHIFT encode
the page order. That would however require a pfn_to_page to free the
page, so it's probably better to have the page struct and an order in
two different fields and double the array size of the MMU gather.
Then we could as well look at whether we can go cross-mm so that it's
usable for the rmap-walk too, which is what I was looking into when I
found the theoretical THP SMP TLB flushing race.
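
As an illustration of the two layouts (a hypothetical sketch, not part
of any patch; the helper names are invented here):

/* option A: order encoded in the bits below PAGE_SHIFT of the physaddr */
static inline unsigned long long gather_encode(struct page *page,
					       unsigned int order)
{
	return (unsigned long long)page_to_phys(page) | order;
}

static inline struct page *gather_decode(unsigned long long val,
					 unsigned int *order)
{
	*order = val & ((1ULL << PAGE_SHIFT) - 1);
	return pfn_to_page(val >> PAGE_SHIFT);	/* the extra pfn_to_page */
}

/* option B: page pointer and order in two fields, double the array size */
struct gather_entry {
	struct page	*page;
	unsigned int	order;
};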

In my view this is even more complicated from an implementation
standpoint because it isn't self contained in the common code. So I
doubt it's worth mixing the optimization in arch code for hugetlbfs
with the THP race fix, which is all common code knowledge and is
actually a fix (albeit purely theoretical) rather than an
optimization.

Thanks,
Andrea


* Re: [PATCH 1/2] mm: thp: introduce thp_mmu_gather to pin tail pages during MMU gather
  2015-12-07  9:30   ` Aneesh Kumar K.V
@ 2015-12-07 15:11     ` Andrea Arcangeli
  0 siblings, 0 replies; 9+ messages in thread
From: Andrea Arcangeli @ 2015-12-07 15:11 UTC (permalink / raw)
  To: Aneesh Kumar K.V
  Cc: linux-mm, Andrew Morton, \"Kirill A. Shutemov\",
	Mel Gorman, Hugh Dickins, Johannes Weiner, Dave Hansen,
	Vlastimil Babka

On Mon, Dec 07, 2015 at 03:00:53PM +0530, Aneesh Kumar K.V wrote:
> Andrea Arcangeli <aarcange@redhat.com> writes:
> 
> > This theoretical SMP race condition was found with source review. No
> > real life app could be affected as the result of freeing memory while
> > accessing it is either undefined or it's a workload that produces no
> > information.
> >
> > For something to go wrong because the SMP race condition triggered,
> > it'd require a further tiny window within the SMP race condition
> > window. So nothing bad is happening in practice even if the SMP race
> > condition triggers. It's still better to apply the fix to have the
> > math guarantee.
> >
> > The fix just adds a thp_mmu_gather atomic_t counter to the THP pages,
> > so split_huge_page can elevate the tail page count accordingly and
> > leave the tail page freeing task to whoever elevated thp_mmu_gather.
> >
> 
> Will this be a problem after
> http://article.gmane.org/gmane.linux.kernel.mm/139631  
> "[PATCHv12 00/37] THP refcounting redesign" ?

The THP zero page SMP TLB flushing fix (patch 2/2) is definitely
still needed even with the THP refcounting redesign applied (perhaps
it'll reject, but the problem remains exactly the same).

The MMU gather part (patch 1/2), as far as I can tell, is still needed
too because split_huge_page bails out on gup pins only (which is the
primary difference, as previously split_huge_page was forbidden to
fail in order to guarantee a graceful fallback into the legacy code
after a split_huge_page_pmd, but that introduced the need for a more
complex put_page for tail pages to deal with the gup tail pins). There
are no gup pins involved in this race and put_page may still free the
tails in __split_huge_page even though the MMU gather THP TLB flush
may not have run yet (there's even still a comment about it in
__split_huge_page confirming this, so unless that comment is also
wrong the theoretical SMP race fix is needed). The locking in
__split_huge_page with the refcounting redesign applied still retains
the lru_lock, so it would also still allow fixing the race for good,
with the refcounting redesign, in the same way. Kirill, please correct
me if I overlooked something in your patchset.

Thanks,
Andrea
