From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jérôme Glisse <jglisse@redhat.com>
To: akpm@linux-foundation.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: John Hubbard, Jérôme Glisse <jglisse@redhat.com>
Subject: [HMM v15 11/16] mm/hmm/migrate: support un-addressable ZONE_DEVICE page in migration
Date: Fri, 6 Jan 2017 11:46:38 -0500
Message-Id: <1483721203-1678-12-git-send-email-jglisse@redhat.com>
In-Reply-To: <1483721203-1678-1-git-send-email-jglisse@redhat.com>
References: <1483721203-1678-1-git-send-email-jglisse@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

Allow unmapping and restoring of the special swap entries used for
un-addressable ZONE_DEVICE memory. When the target page is un-addressable
device memory, a regular present pte cannot be used, so
remove_migration_pte() installs a device swap entry instead, and
try_to_unmap_one() learns to recognize such entries and replace them
with migration entries.

Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
---
 mm/migrate.c | 11 ++++++++++-
 mm/rmap.c    | 47 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 57 insertions(+), 1 deletion(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 0ed24b1..5de87d5 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -40,6 +40,7 @@
 #include
 #include
 #include
+#include
 
 #include
 
@@ -248,7 +249,15 @@ static int remove_migration_pte(struct page *new, struct vm_area_struct *vma,
 		pte = arch_make_huge_pte(pte, vma, new, 0);
 	}
 #endif
-	flush_dcache_page(new);
+
+	if (unlikely(is_zone_device_page(new)) && !is_addressable_page(new)) {
+		entry = make_device_entry(new, pte_write(pte));
+		pte = swp_entry_to_pte(entry);
+		if (pte_swp_soft_dirty(*ptep))
+			pte = pte_mksoft_dirty(pte);
+	} else
+		flush_dcache_page(new);
+
 	set_pte_at(mm, addr, ptep, pte);
 
 	if (PageHuge(new)) {
diff --git a/mm/rmap.c b/mm/rmap.c
index 91619fd..c7b0b54 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -61,6 +61,7 @@
 #include
 #include
 #include
+#include
 
 #include
 
@@ -1454,6 +1455,52 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 		goto out;
 	}
 
+	if ((flags & TTU_MIGRATION) && is_zone_device_page(page)) {
+		swp_entry_t entry;
+		pte_t swp_pte;
+		pmd_t *pmdp;
+
+		if (!dev_page_allow_migrate(page))
+			goto out;
+
+		pmdp = mm_find_pmd(mm, address);
+		if (!pmdp)
+			goto out;
+
+		pte = pte_offset_map_lock(mm, pmdp, address, &ptl);
+		if (!pte)
+			goto out;
+
+		pteval = ptep_get_and_clear(mm, address, pte);
+		if (pte_present(pteval) || pte_none(pteval)) {
+			set_pte_at(mm, address, pte, pteval);
+			goto out_unmap;
+		}
+
+		entry = pte_to_swp_entry(pteval);
+		if (!is_device_entry(entry)) {
+			set_pte_at(mm, address, pte, pteval);
+			goto out_unmap;
+		}
+
+		if (device_entry_to_page(entry) != page) {
+			set_pte_at(mm, address, pte, pteval);
+			goto out_unmap;
+		}
+
+		/*
+		 * Store the pfn of the page in a special migration
+		 * pte. do_swap_page() will wait until the migration
+		 * pte is removed and then restart fault handling.
+		 */
+		entry = make_migration_entry(page, 0);
+		swp_pte = swp_entry_to_pte(entry);
+		if (pte_soft_dirty(*pte))
+			swp_pte = pte_swp_mksoft_dirty(swp_pte);
+		set_pte_at(mm, address, pte, swp_pte);
+		goto discard;
+	}
+
 	pte = page_check_address(page, mm, address, &ptl,
 				 PageTransCompound(page));
 	if (!pte)
-- 
2.4.3
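
Note for readers unfamiliar with how a non-present pte can still name a page:
the patch relies on the ordinary swap-entry trick, where a small type field
plus an offset (here the page's pfn and a write bit) are packed into the pte
bits. The snippet below is a minimal, self-contained userspace model of that
idea, given only as an illustration; the names (make_device_entry_model() and
friends) and the bit layout are invented for this sketch and do not match the
kernel's swp_entry_t or any architecture's pte encoding.

/*
 * Illustrative userspace model of a "device entry": a non-present,
 * pte-sized value that still records which page (pfn) it refers to
 * and whether the mapping was writable.  The layout is made up for
 * this sketch; it is NOT the kernel's encoding.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define ENTRY_TYPE_DEVICE	0x1ull		/* pretend swap "type" bit */
#define ENTRY_WRITE_BIT		(1ull << 1)	/* pretend write-permission bit */
#define ENTRY_PFN_SHIFT		2

static uint64_t make_device_entry_model(uint64_t pfn, bool write)
{
	return (pfn << ENTRY_PFN_SHIFT) |
	       (write ? ENTRY_WRITE_BIT : 0) |
	       ENTRY_TYPE_DEVICE;
}

static bool is_device_entry_model(uint64_t entry)
{
	return entry & ENTRY_TYPE_DEVICE;
}

static uint64_t device_entry_pfn_model(uint64_t entry)
{
	return entry >> ENTRY_PFN_SHIFT;
}

int main(void)
{
	uint64_t entry = make_device_entry_model(0x1234, true);

	/* try_to_unmap_one()-style check: is this non-present value ours? */
	assert(is_device_entry_model(entry));
	assert(device_entry_pfn_model(entry) == 0x1234);
	printf("device entry 0x%llx -> pfn 0x%llx, writable=%d\n",
	       (unsigned long long)entry,
	       (unsigned long long)device_entry_pfn_model(entry),
	       !!(entry & ENTRY_WRITE_BIT));
	return 0;
}

In the patch itself, remove_migration_pte() builds the real thing with
make_device_entry()/swp_entry_to_pte() instead of installing a present pte
when the new page is un-addressable device memory, and try_to_unmap_one()
recognizes it via is_device_entry()/device_entry_to_page() before replacing
it with a migration entry.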