From: Naoya Horiguchi
To: Zi Yan
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    kirill.shutemov@linux.intel.com, akpm@linux-foundation.org,
    minchan@kernel.org, vbabka@suse.cz, mgorman@techsingularity.net,
    khandual@linux.vnet.ibm.com
Subject: Re: [PATCH v3 08/14] mm: thp: enable thp migration in generic path
Date: Thu, 9 Feb 2017 23:04:43 +0000
Message-ID: <20170209230443.GA21865@hori1.linux.bs1.fc.nec.co.jp>
References: <20170205161252.85004-1-zi.yan@sent.com>
 <20170205161252.85004-9-zi.yan@sent.com>
 <20170209091528.GB15649@hori1.linux.bs1.fc.nec.co.jp>
 <7AE21E4F-EEEB-4C24-8158-473770119436@cs.rutgers.edu>
In-Reply-To: <7AE21E4F-EEEB-4C24-8158-473770119436@cs.rutgers.edu>

On Thu, Feb 09, 2017 at 09:17:01AM -0600, Zi Yan wrote:
> On 9 Feb 2017, at 3:15, Naoya Horiguchi wrote:
>
> > On Sun, Feb 05, 2017 at 11:12:46AM -0500, Zi Yan wrote:
> >> From: Naoya Horiguchi
> >>
> >> This patch adds thp migration's core code, including conversions
> >> between a PMD entry and a swap entry, setting PMD migration entry,
> >> removing PMD migration entry, and waiting on PMD migration entries.
> >>
> >> This patch makes it possible to support thp migration.
> >> If you fail to allocate a destination page as a thp, you just split
> >> the source thp as we do now, and then enter the normal page migration.
> >> If you succeed in allocating a destination thp, you enter thp migration.
> >> Subsequent patches actually enable thp migration for each caller of
> >> page migration by allowing its get_new_page() callback to
> >> allocate thps.
> >>
> >> ChangeLog v1 -> v2:
> >> - support pte-mapped thp, doubly-mapped thp
> >>
> >> Signed-off-by: Naoya Horiguchi
> >>
> >> ChangeLog v2 -> v3:
> >> - use page_vma_mapped_walk()
> >>
> >> Signed-off-by: Zi Yan
> >> ---
> >>  arch/x86/include/asm/pgtable_64.h |   2 +
> >>  include/linux/swapops.h           |  70 +++++++++++++++++-
> >>  mm/huge_memory.c                  | 151 ++++++++++++++++++++++++++++++++++----
> >>  mm/migrate.c                      |  29 +++++++-
> >>  mm/page_vma_mapped.c              |  13 +++-
> >>  mm/pgtable-generic.c              |   3 +-
> >>  mm/rmap.c                         |  14 +++-
> >>  7 files changed, 259 insertions(+), 23 deletions(-)
> >>
> > ...
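(Side note for other reviewers: the PMD <-> swap entry conversion mentioned in
the description is conceptually the PMD counterpart of the existing
pte_to_swp_entry()/swp_entry_to_pte() helpers. A minimal sketch is below;
pmd_to_swp_entry() is used later in this diff, while swp_entry_to_pmd() and the
arch-level __pmd_to_swp_entry()/__swp_entry_to_pmd() macros are only assumed
here and may differ in detail from what the patch actually adds to
include/linux/swapops.h and arch/x86/include/asm/pgtable_64.h.)

/*
 * Sketch only: a PMD migration entry is a non-present PMD encoding a
 * swp_entry_t, mirroring the existing PTE swap-entry helpers.  The
 * __pmd_to_swp_entry()/__swp_entry_to_pmd() macros are assumed to be
 * provided by the architecture.
 */
static inline swp_entry_t pmd_to_swp_entry(pmd_t pmd)
{
        swp_entry_t arch_entry;

        arch_entry = __pmd_to_swp_entry(pmd);
        return swp_entry(__swp_type(arch_entry), __swp_offset(arch_entry));
}

static inline pmd_t swp_entry_to_pmd(swp_entry_t entry)
{
        swp_entry_t arch_entry;

        arch_entry = __swp_entry(swp_type(entry), swp_offset(entry));
        return __swp_entry_to_pmd(arch_entry);
}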
> >> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> >> index 6893c47428b6..fd54bbdc16cf 100644
> >> --- a/mm/huge_memory.c
> >> +++ b/mm/huge_memory.c
> >> @@ -1613,20 +1613,51 @@ int __zap_huge_pmd_locked(struct mmu_gather *tlb, struct vm_area_struct *vma,
> >>                  atomic_long_dec(&tlb->mm->nr_ptes);
> >>                  tlb_remove_page_size(tlb, pmd_page(orig_pmd), HPAGE_PMD_SIZE);
> >>          } else {
> >> -                struct page *page = pmd_page(orig_pmd);
> >> -                page_remove_rmap(page, true);
> >> -                VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
> >> -                VM_BUG_ON_PAGE(!PageHead(page), page);
> >> -                if (PageAnon(page)) {
> >> -                        pgtable_t pgtable;
> >> -                        pgtable = pgtable_trans_huge_withdraw(tlb->mm, pmd);
> >> -                        pte_free(tlb->mm, pgtable);
> >> -                        atomic_long_dec(&tlb->mm->nr_ptes);
> >> -                        add_mm_counter(tlb->mm, MM_ANONPAGES, -HPAGE_PMD_NR);
> >> +                struct page *page;
> >> +                int migration = 0;
> >> +
> >> +                if (!is_pmd_migration_entry(orig_pmd)) {
> >> +                        page = pmd_page(orig_pmd);
> >> +                        VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
> >> +                        VM_BUG_ON_PAGE(!PageHead(page), page);
> >> +                        page_remove_rmap(page, true);
> >
> >> +                        if (PageAnon(page)) {
> >> +                                pgtable_t pgtable;
> >> +
> >> +                                pgtable = pgtable_trans_huge_withdraw(tlb->mm,
> >> +                                                                      pmd);
> >> +                                pte_free(tlb->mm, pgtable);
> >> +                                atomic_long_dec(&tlb->mm->nr_ptes);
> >> +                                add_mm_counter(tlb->mm, MM_ANONPAGES,
> >> +                                               -HPAGE_PMD_NR);
> >> +                        } else {
> >> +                                if (arch_needs_pgtable_deposit())
> >> +                                        zap_deposited_table(tlb->mm, pmd);
> >> +                                add_mm_counter(tlb->mm, MM_FILEPAGES,
> >> +                                               -HPAGE_PMD_NR);
> >> +                        }
> >
> > This block is exactly the same as the one in the else block below,
> > so you can factor it out into a common helper function.
>
> Of course, I will do that.
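Thank you. Just to illustrate what I had in mind, the duplicated accounting
could move into a small helper along the lines of the sketch below (untested,
and the name zap_huge_pmd_rss() is only a placeholder):

static void zap_huge_pmd_rss(struct mmu_gather *tlb, pmd_t *pmd,
                             struct page *page)
{
        if (PageAnon(page)) {
                pgtable_t pgtable;

                /*
                 * Anonymous thp: return the deposited page table and
                 * adjust the anon rss counter for the whole huge page.
                 */
                pgtable = pgtable_trans_huge_withdraw(tlb->mm, pmd);
                pte_free(tlb->mm, pgtable);
                atomic_long_dec(&tlb->mm->nr_ptes);
                add_mm_counter(tlb->mm, MM_ANONPAGES, -HPAGE_PMD_NR);
        } else {
                /*
                 * File thp: only architectures that deposit a page table
                 * for file mappings have anything to withdraw here.
                 */
                if (arch_needs_pgtable_deposit())
                        zap_deposited_table(tlb->mm, pmd);
                add_mm_counter(tlb->mm, MM_FILEPAGES, -HPAGE_PMD_NR);
        }
}

Then both arms of the is_pmd_migration_entry() check can simply call it.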
>
> >
> >>          } else {
> >> -                if (arch_needs_pgtable_deposit())
> >> -                        zap_deposited_table(tlb->mm, pmd);
> >> -                add_mm_counter(tlb->mm, MM_FILEPAGES, -HPAGE_PMD_NR);
> >> +                swp_entry_t entry;
> >> +
> >> +                entry = pmd_to_swp_entry(orig_pmd);
> >> +                page = pfn_to_page(swp_offset(entry));
> >
> >> +                if (PageAnon(page)) {
> >> +                        pgtable_t pgtable;
> >> +
> >> +                        pgtable = pgtable_trans_huge_withdraw(tlb->mm,
> >> +                                                              pmd);
> >> +                        pte_free(tlb->mm, pgtable);
> >> +                        atomic_long_dec(&tlb->mm->nr_ptes);
> >> +                        add_mm_counter(tlb->mm, MM_ANONPAGES,
> >> +                                       -HPAGE_PMD_NR);
> >> +                } else {
> >> +                        if (arch_needs_pgtable_deposit())
> >> +                                zap_deposited_table(tlb->mm, pmd);
> >> +                        add_mm_counter(tlb->mm, MM_FILEPAGES,
> >> +                                       -HPAGE_PMD_NR);
> >> +                }
> >
> >> +                free_swap_and_cache(entry); /* warning in failure? */
> >> +                migration = 1;
> >>          }
> >>          tlb_remove_page_size(tlb, page, HPAGE_PMD_SIZE);
> >>  }
> >> @@ -2634,3 +2665,97 @@ static int __init split_huge_pages_debugfs(void)
> >>  }
> >>  late_initcall(split_huge_pages_debugfs);
> >>  #endif
> >> +
> >> +#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
> >> +void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
> >> +                struct page *page)
> >> +{
> >> +        struct vm_area_struct *vma = pvmw->vma;
> >> +        struct mm_struct *mm = vma->vm_mm;
> >> +        unsigned long address = pvmw->address;
> >> +        pmd_t pmdval;
> >> +        swp_entry_t entry;
> >> +
> >> +        if (pvmw->pmd && !pvmw->pte) {
> >> +                pmd_t pmdswp;
> >> +
> >> +                mmu_notifier_invalidate_range_start(mm, address,
> >> +                                address + HPAGE_PMD_SIZE);
> >
> > Don't you have to put mmu_notifier_invalidate_range_* outside this if block?
>
> I think I need to add mmu_notifier_invalidate_page() in the else block,
> because Kirill's page_vma_mapped_walk() iterates over each PMD or PTE.
> In set_pmd_migration_entry(), if the page is PMD-mapped, the function is
> called once with the PMD, so mmu_notifier_invalidate_range_* can be used.
> On the other hand, if the page is PTE-mapped, the function is called 1~512
> times depending on how many PTEs are present, so
> mmu_notifier_invalidate_range_* is not suitable;
> mmu_notifier_invalidate_page() on the corresponding subpage should work.
>

Ah right, mmu_notifier_invalidate_page() is better for the PTE-mapped thp case.

Thanks,
Naoya
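P.S. In case it is useful when you rework the patch, below is roughly the
shape I would expect the notifier calls in set_pmd_migration_entry() to take:
the range notifier stays local to the PMD-mapped branch, and the PTE-mapped
branch notifies only the subpage it is unmapping. This is an untested sketch,
not the actual code:

        if (pvmw->pmd && !pvmw->pte) {
                /* PMD-mapped thp: one range call covers the whole huge page. */
                mmu_notifier_invalidate_range_start(mm, address,
                                                    address + HPAGE_PMD_SIZE);
                /* ... replace the PMD with a PMD migration entry ... */
                mmu_notifier_invalidate_range_end(mm, address,
                                                  address + HPAGE_PMD_SIZE);
        } else {
                /* PTE-mapped thp: notify only the subpage mapped here. */
                /* ... replace the PTE with a migration entry ... */
                mmu_notifier_invalidate_page(mm, address);
        }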