From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: Hugh Dickins <hughd@google.com>, Andrea Arcangeli <aarcange@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>
Cc: Dave Hansen <dave.hansen@intel.com>, Vlastimil Babka <vbabka@suse.cz>,
	Christoph Lameter <cl@gentwo.org>, Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>,
	Jerome Marchand <jmarchan@redhat.com>, Yang Shi <yang.shi@linaro.org>,
	Sasha Levin <sasha.levin@oracle.com>, Andres Lagar-Cavilla <andreslc@google.com>,
	Ning Qu <quning@gmail.com>, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Subject: [PATCHv7 05/29] rmap: support file thp
Date: Sat, 16 Apr 2016 03:23:36 +0300
Message-ID: <1460766240-84565-6-git-send-email-kirill.shutemov@linux.intel.com>
In-Reply-To: <1460766240-84565-1-git-send-email-kirill.shutemov@linux.intel.com>

Naive approach: on mapping/unmapping the page as compound we update
->_mapcount on each 4k page. That's not efficient, but it's not obvious
how we can optimize this. We can look into optimization later.

The PG_double_map optimization doesn't work for file pages since the
lifecycle of file pages differs from that of anon pages: a file page can
be mapped again at any time.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 include/linux/rmap.h |  2 +-
 mm/huge_memory.c     | 10 +++++++---
 mm/memory.c          |  4 ++--
 mm/migrate.c         |  2 +-
 mm/rmap.c            | 48 +++++++++++++++++++++++++++++++++++-------------
 mm/util.c            |  6 ++++++
 6 files changed, 52 insertions(+), 20 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 49eb4f8ebac9..5704f101b52e 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -165,7 +165,7 @@ void do_page_add_anon_rmap(struct page *, struct vm_area_struct *,
 		unsigned long, int);
 void page_add_new_anon_rmap(struct page *, struct vm_area_struct *,
 		unsigned long, bool);
-void page_add_file_rmap(struct page *);
+void page_add_file_rmap(struct page *, bool);
 void page_remove_rmap(struct page *, bool);
 
 void hugepage_add_anon_rmap(struct page *, struct vm_area_struct *,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a4014b484737..aab10c81de12 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3262,18 +3262,22 @@ static void __split_huge_page(struct page *page, struct list_head *list)
 
 int total_mapcount(struct page *page)
 {
-	int i, ret;
+	int i, compound, ret;
 
 	VM_BUG_ON_PAGE(PageTail(page), page);
 
 	if (likely(!PageCompound(page)))
 		return atomic_read(&page->_mapcount) + 1;
 
-	ret = compound_mapcount(page);
+	compound = compound_mapcount(page);
 	if (PageHuge(page))
-		return ret;
+		return compound;
+	ret = compound;
 	for (i = 0; i < HPAGE_PMD_NR; i++)
 		ret += atomic_read(&page[i]._mapcount) + 1;
+	/* File pages has compound_mapcount included in _mapcount */
+	if (!PageAnon(page))
+		return ret - compound * HPAGE_PMD_NR;
 	if (PageDoubleMap(page))
 		ret -= HPAGE_PMD_NR;
 	return ret;
diff --git a/mm/memory.c b/mm/memory.c
index c31c52507956..49c55446576a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1438,7 +1438,7 @@ static int insert_page(struct vm_area_struct *vma, unsigned long addr,
 	/* Ok, finally just insert the thing.. */
 	get_page(page);
 	inc_mm_counter_fast(mm, mm_counter_file(page));
-	page_add_file_rmap(page);
+	page_add_file_rmap(page, false);
 	set_pte_at(mm, addr, pte, mk_pte(page, prot));
 
 	retval = 0;
@@ -2901,7 +2901,7 @@ int alloc_set_pte(struct fault_env *fe, struct mem_cgroup *memcg,
 		lru_cache_add_active_or_unevictable(page, vma);
 	} else {
 		inc_mm_counter_fast(vma->vm_mm, mm_counter_file(page));
-		page_add_file_rmap(page);
+		page_add_file_rmap(page, false);
 	}
 	set_pte_at(vma->vm_mm, fe->address, fe->pte, entry);
diff --git a/mm/migrate.c b/mm/migrate.c
index 3eafd17fb398..b8f8363df8da 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -170,7 +170,7 @@ static int remove_migration_pte(struct page *new, struct vm_area_struct *vma,
 	} else if (PageAnon(new))
 		page_add_anon_rmap(new, vma, addr, false);
 	else
-		page_add_file_rmap(new);
+		page_add_file_rmap(new, false);
 
 	if (vma->vm_flags & VM_LOCKED && !PageTransCompound(new))
 		mlock_vma_page(new);
diff --git a/mm/rmap.c b/mm/rmap.c
index 7f8652720a25..76d8c9269ac6 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1271,18 +1271,34 @@ void page_add_new_anon_rmap(struct page *page,
 *
 * The caller needs to hold the pte lock.
 */
-void page_add_file_rmap(struct page *page)
+void page_add_file_rmap(struct page *page, bool compound)
 {
+	int i, nr = 1;
+
+	VM_BUG_ON_PAGE(compound && !PageTransHuge(page), page);
 	lock_page_memcg(page);
-	if (atomic_inc_and_test(&page->_mapcount)) {
-		__inc_zone_page_state(page, NR_FILE_MAPPED);
-		mem_cgroup_inc_page_stat(page, MEM_CGROUP_STAT_FILE_MAPPED);
+	if (compound && PageTransHuge(page)) {
+		for (i = 0, nr = 0; i < HPAGE_PMD_NR; i++) {
+			if (atomic_inc_and_test(&page[i]._mapcount))
+				nr++;
+		}
+		if (!atomic_inc_and_test(compound_mapcount_ptr(page)))
+			goto out;
+	} else {
+		if (!atomic_inc_and_test(&page->_mapcount))
+			goto out;
 	}
+	__mod_zone_page_state(page_zone(page), NR_FILE_MAPPED, nr);
+	mem_cgroup_inc_page_stat(page, MEM_CGROUP_STAT_FILE_MAPPED);
+out:
 	unlock_page_memcg(page);
 }
 
-static void page_remove_file_rmap(struct page *page)
+static void page_remove_file_rmap(struct page *page, bool compound)
 {
+	int i, nr = 1;
+
+	VM_BUG_ON_PAGE(compound && !PageTransHuge(page), page);
 	lock_page_memcg(page);
 
 	/* Hugepages are not counted in NR_FILE_MAPPED for now. */
@@ -1293,15 +1309,24 @@ static void page_remove_file_rmap(struct page *page)
 	}
 
 	/* page still mapped by someone else? */
-	if (!atomic_add_negative(-1, &page->_mapcount))
-		goto out;
+	if (compound && PageTransHuge(page)) {
+		for (i = 0, nr = 0; i < HPAGE_PMD_NR; i++) {
+			if (atomic_add_negative(-1, &page[i]._mapcount))
+				nr++;
+		}
+		if (!atomic_add_negative(-1, compound_mapcount_ptr(page)))
+			goto out;
+	} else {
+		if (!atomic_add_negative(-1, &page->_mapcount))
+			goto out;
+	}
 
 	/*
	 * We use the irq-unsafe __{inc|mod}_zone_page_stat because
	 * these counters are not modified in interrupt context, and
	 * pte lock(a spinlock) is held, which implies preemption disabled.
	 */
-	__dec_zone_page_state(page, NR_FILE_MAPPED);
+	__mod_zone_page_state(page_zone(page), NR_FILE_MAPPED, -nr);
 	mem_cgroup_dec_page_stat(page, MEM_CGROUP_STAT_FILE_MAPPED);
 
 	if (unlikely(PageMlocked(page)))
@@ -1357,11 +1382,8 @@ static void page_remove_anon_compound_rmap(struct page *page)
 */
 void page_remove_rmap(struct page *page, bool compound)
 {
-	if (!PageAnon(page)) {
-		VM_BUG_ON_PAGE(compound && !PageHuge(page), page);
-		page_remove_file_rmap(page);
-		return;
-	}
+	if (!PageAnon(page))
+		return page_remove_file_rmap(page, compound);
 
 	if (compound)
 		return page_remove_anon_compound_rmap(page);
diff --git a/mm/util.c b/mm/util.c
index 6cc81e7b8705..b7ac1d708cb0 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -386,6 +386,12 @@ int __page_mapcount(struct page *page)
 	int ret;
 
 	ret = atomic_read(&page->_mapcount) + 1;
+	/*
	 * For file THP page->_mapcount contains total number of mapping
	 * of the page: no need to look into compound_mapcount.
	 */
+	if (!PageAnon(page) && !PageHuge(page))
+		return ret;
 	page = compound_head(page);
 	ret += atomic_read(compound_mapcount_ptr(page)) + 1;
 	if (PageDoubleMap(page))
-- 
2.8.0.rc3