From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> To: Andrew Morton <akpm@linux-foundation.org>, Andrea Arcangeli <aarcange@redhat.com>, Hugh Dickins <hughd@google.com> Cc: Dave Hansen <dave.hansen@intel.com>, Mel Gorman <mgorman@suse.de>, Rik van Riel <riel@redhat.com>, Vlastimil Babka <vbabka@suse.cz>, Christoph Lameter <cl@gentwo.org>, Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>, Steve Capper <steve.capper@linaro.org>, "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>, Johannes Weiner <hannes@cmpxchg.org>, Michal Hocko <mhocko@suse.cz>, Jerome Marchand <jmarchan@redhat.com>, Sasha Levin <sasha.levin@oracle.com>, linux-kernel@vger.kernel.org, linux-mm@kvack.org, "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> Subject: [PATCHv7 34/36] thp: introduce deferred_split_huge_page() Date: Tue, 23 Jun 2015 16:46:44 +0300 [thread overview] Message-ID: <1435067206-92901-35-git-send-email-kirill.shutemov@linux.intel.com> (raw) In-Reply-To: <1435067206-92901-1-git-send-email-kirill.shutemov@linux.intel.com> Currently we don't split huge page on partial unmap. It's not an ideal situation. It can lead to memory overhead. Furtunately, we can detect partial unmap on page_remove_rmap(). But we cannot call split_huge_page() from there due to locking context. It's also counterproductive to do directly from munmap() codepath: in many cases we will hit this from exit(2) and splitting the huge page just to free it up in small pages is not what we really want. The patch introduce deferred_split_huge_page() which put the huge page into queue for splitting. The splitting itself will happen when we get memory pressure via shrinker interface. The page will be dropped from list on freeing through compound page destructor. Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Tested-by: Sasha Levin <sasha.levin@oracle.com> Acked-by: Vlastimil Babka <vbabka@suse.cz> --- include/linux/huge_mm.h | 4 ++ include/linux/mm.h | 2 + mm/huge_memory.c | 127 ++++++++++++++++++++++++++++++++++++++++++++++-- mm/migrate.c | 1 + mm/page_alloc.c | 2 +- mm/rmap.c | 7 ++- 6 files changed, 138 insertions(+), 5 deletions(-) diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h index 89981a042d85..c1cca36c73db 100644 --- a/include/linux/huge_mm.h +++ b/include/linux/huge_mm.h @@ -92,11 +92,14 @@ extern bool is_vma_temporary_stack(struct vm_area_struct *vma); extern unsigned long transparent_hugepage_flags; +extern void prep_transhuge_page(struct page *page); + int split_huge_page_to_list(struct page *page, struct list_head *list); static inline int split_huge_page(struct page *page) { return split_huge_page_to_list(page, NULL); } +void deferred_split_huge_page(struct page *page); void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd, unsigned long address); @@ -174,6 +177,7 @@ static inline int split_huge_page(struct page *page) { return 0; } +static inline void deferred_split_huge_page(struct page *page) {} #define split_huge_pmd(__vma, __pmd, __address) \ do { } while (0) static inline int hugepage_madvise(struct vm_area_struct *vma, diff --git a/include/linux/mm.h b/include/linux/mm.h index 0786ca13b17e..cd7da6d0e6fe 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -522,6 +522,8 @@ static inline void set_compound_order(struct page *page, unsigned long order) page[1].compound_order = order; } +void free_compound_page(struct page *page); + #ifdef CONFIG_MMU /* * Do pte_mkwrite, but only if the vma says VM_WRITE. 
We do this when diff --git a/mm/huge_memory.c b/mm/huge_memory.c index bd8c83e2f466..8d0d77b726ca 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -71,6 +71,8 @@ static int khugepaged(void *none); static int khugepaged_slab_init(void); static void khugepaged_slab_exit(void); +static void free_transhuge_page(struct page *page); + #define MM_SLOTS_HASH_BITS 10 static __read_mostly DEFINE_HASHTABLE(mm_slots_hash, MM_SLOTS_HASH_BITS); @@ -105,6 +107,10 @@ static struct khugepaged_scan khugepaged_scan = { .mm_head = LIST_HEAD_INIT(khugepaged_scan.mm_head), }; +static DEFINE_SPINLOCK(split_queue_lock); +static LIST_HEAD(split_queue); +static unsigned long split_queue_len; +static struct shrinker deferred_split_shrinker; static int set_recommended_min_free_kbytes(void) { @@ -643,6 +649,9 @@ static int __init hugepage_init(void) err = register_shrinker(&huge_zero_page_shrinker); if (err) goto err_hzp_shrinker; + err = register_shrinker(&deferred_split_shrinker); + if (err) + goto err_split_shrinker; /* * By default disable transparent hugepages on smaller systems, @@ -660,6 +669,8 @@ static int __init hugepage_init(void) return 0; err_khugepaged: + unregister_shrinker(&deferred_split_shrinker); +err_split_shrinker: unregister_shrinker(&huge_zero_page_shrinker); err_hzp_shrinker: khugepaged_slab_exit(); @@ -716,6 +727,19 @@ static inline pmd_t mk_huge_pmd(struct page *page, pgprot_t prot) return entry; } +void prep_transhuge_page(struct page *page) +{ + /* we use page->lru in second tail page: assuming THP order >= 2 */ + BUILD_BUG_ON(HPAGE_PMD_ORDER < 2); + + /* + * ->lru in the first tail page is occupied by destructor + * and order of the compound page + */ + INIT_LIST_HEAD(&page[2].lru); + set_compound_page_dtor(page, free_transhuge_page); +} + static int __do_huge_pmd_anonymous_page(struct mm_struct *mm, struct vm_area_struct *vma, unsigned long haddr, pmd_t *pmd, @@ -868,6 +892,7 @@ int do_huge_pmd_anonymous_page(struct mm_struct *mm, struct vm_area_struct *vma, count_vm_event(THP_FAULT_FALLBACK); return VM_FAULT_FALLBACK; } + prep_transhuge_page(page); return __do_huge_pmd_anonymous_page(mm, vma, haddr, pmd, page, gfp, flags); } @@ -1120,7 +1145,9 @@ alloc: } else new_page = NULL; - if (unlikely(!new_page)) { + if (likely(new_page)) { + prep_transhuge_page(new_page); + } else { if (!page) { split_huge_pmd(vma, pmd, address); ret |= VM_FAULT_FALLBACK; @@ -2045,6 +2072,7 @@ khugepaged_alloc_page(struct page **hpage, gfp_t gfp, struct mm_struct *mm, return NULL; } + prep_transhuge_page(*hpage); count_vm_event(THP_COLLAPSE_ALLOC); return *hpage; } @@ -2056,8 +2084,12 @@ static int khugepaged_find_target_node(void) static inline struct page *alloc_hugepage(int defrag) { - return alloc_pages(alloc_hugepage_gfpmask(defrag, 0), - HPAGE_PMD_ORDER); + struct page *page; + + page = alloc_pages(alloc_hugepage_gfpmask(defrag, 0), HPAGE_PMD_ORDER); + if (page) + prep_transhuge_page(page); + return page; } static struct page *khugepaged_alloc_hugepage(bool *wait) @@ -2957,6 +2989,13 @@ static void __split_huge_page(struct page *page, struct list_head *list) spin_lock_irq(&zone->lru_lock); lruvec = mem_cgroup_page_lruvec(head, zone); + spin_lock(&split_queue_lock); + if (!list_empty(&head[2].lru)) { + split_queue_len--; + list_del(&head[2].lru); + } + spin_unlock(&split_queue_lock); + /* complete memcg works before add pages to LRU */ mem_cgroup_split_huge_fixup(head); @@ -3068,3 +3107,85 @@ out: count_vm_event(!ret ? 
THP_SPLIT_PAGE : THP_SPLIT_PAGE_FAILED); return ret; } + +static void free_transhuge_page(struct page *page) +{ + unsigned long flags; + + spin_lock_irqsave(&split_queue_lock, flags); + if (!list_empty(&page[2].lru)) { + split_queue_len--; + list_del(&page[2].lru); + } + spin_unlock_irqrestore(&split_queue_lock, flags); + free_compound_page(page); +} + +void deferred_split_huge_page(struct page *page) +{ + unsigned long flags; + + VM_BUG_ON_PAGE(!PageTransHuge(page), page); + + spin_lock_irqsave(&split_queue_lock, flags); + if (list_empty(&page[2].lru)) { + list_add_tail(&page[2].lru, &split_queue); + split_queue_len++; + } + spin_unlock_irqrestore(&split_queue_lock, flags); +} + +static unsigned long deferred_split_count(struct shrinker *shrink, + struct shrink_control *sc) +{ + /* + * Split a page from split_queue will free up at least one page, + * at most HPAGE_PMD_NR - 1. We don't track exact number. + * Let's use HPAGE_PMD_NR / 2 as ballpark. + */ + return ACCESS_ONCE(split_queue_len) * HPAGE_PMD_NR / 2; +} + +static unsigned long deferred_split_scan(struct shrinker *shrink, + struct shrink_control *sc) +{ + unsigned long flags; + LIST_HEAD(list); + struct page *page, *next; + int split = 0; + + spin_lock_irqsave(&split_queue_lock, flags); + list_splice_init(&split_queue, &list); + + /* Take pin on all head pages to avoid freeing them under us */ + list_for_each_entry_safe(page, next, &list, lru) { + page = compound_head(page); + /* race with put_compound_page() */ + if (!get_page_unless_zero(page)) { + list_del_init(&page[2].lru); + split_queue_len--; + } + } + spin_unlock_irqrestore(&split_queue_lock, flags); + + list_for_each_entry_safe(page, next, &list, lru) { + lock_page(page); + /* split_huge_page() removes page from list on success */ + if (!split_huge_page(page)) + split++; + unlock_page(page); + put_page(page); + } + + spin_lock_irqsave(&split_queue_lock, flags); + list_splice_tail(&list, &split_queue); + spin_unlock_irqrestore(&split_queue_lock, flags); + + return split * HPAGE_PMD_NR / 2; +} + +static struct shrinker deferred_split_shrinker = { + .count_objects = deferred_split_count, + .scan_objects = deferred_split_scan, + .seeks = DEFAULT_SEEKS, +}; diff --git a/mm/migrate.c b/mm/migrate.c index 8bb2107b8751..4c79c5447623 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -1742,6 +1742,7 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm, HPAGE_PMD_ORDER); if (!new_page) goto out_fail; + prep_transhuge_page(new_page); isolated = numamigrate_isolate_page(pgdat, page); if (!isolated) { diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 1e1f5898172b..02815f91c3c3 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -430,7 +430,7 @@ out: * This usage means that zero-order pages may not be compound. 
*/ -static void free_compound_page(struct page *page) +void free_compound_page(struct page *page) { __free_pages_ok(page, compound_order(page)); } diff --git a/mm/rmap.c b/mm/rmap.c index 956305a8f5cc..1d138aada15c 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -1204,8 +1204,10 @@ static void page_remove_anon_compound_rmap(struct page *page) nr = HPAGE_PMD_NR; } - if (nr) + if (nr) { __mod_zone_page_state(page_zone(page), NR_ANON_PAGES, -nr); + deferred_split_huge_page(page); + } } /** @@ -1240,6 +1242,9 @@ void page_remove_rmap(struct page *page, bool compound) if (unlikely(PageMlocked(page))) clear_page_mlock(page); + if (PageTransCompound(page)) + deferred_split_huge_page(compound_head(page)); + /* * It would be tidy to reset the PageAnon mapping here, * but that might overwrite a racing page_add_anon_rmap -- 2.1.4
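
A note for readers less familiar with the shrinker interface the patch
relies on: ->count_objects returns an estimate of how many objects could be
freed, and ->scan_objects tries to free up to sc->nr_to_scan of them,
returning the number actually freed. Below is a minimal sketch of that
count/scan contract in isolation; the example_* names and the
queued_items()/drain_queue() helpers are hypothetical, not part of this
patch or the kernel tree:

#include <linux/shrinker.h>

/* Hypothetical deferred-work queue drained only under memory pressure. */
static unsigned long queued_items(void);		/* assumed helper */
static unsigned long drain_queue(unsigned long nr);	/* assumed helper */

static unsigned long example_count(struct shrinker *shrink,
				   struct shrink_control *sc)
{
	/* Estimate of freeable objects; returning 0 skips the scan. */
	return queued_items();
}

static unsigned long example_scan(struct shrinker *shrink,
				  struct shrink_control *sc)
{
	/* Free up to sc->nr_to_scan objects; report how many were freed. */
	return drain_queue(sc->nr_to_scan);
}

static struct shrinker example_shrinker = {
	.count_objects = example_count,
	.scan_objects = example_scan,
	.seeks = DEFAULT_SEEKS,
};

/* Registered once at init time: register_shrinker(&example_shrinker); */

Since splitting one queued THP frees anywhere from 1 to HPAGE_PMD_NR - 1
small pages, deferred_split_count() and deferred_split_scan() in the patch
both report in the same HPAGE_PMD_NR / 2 ballpark units, which keeps the
reclaim core's count/scan accounting consistent.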