From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: <linux-kernel-owner@vger.kernel.org>
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1751932AbcCHTu5 (ORCPT); Tue, 8 Mar 2016 14:50:57 -0500
Received: from mx1.redhat.com ([209.132.183.28]:49708 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751547AbcCHTr5 (ORCPT); Tue, 8 Mar 2016 14:47:57 -0500
From: Jérôme Glisse <jglisse@redhat.com>
To: akpm@linux-foundation.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org
Cc: Linus Torvalds, joro@8bytes.org, Mel Gorman, "H. Peter Anvin",
	Peter Zijlstra, Andrea Arcangeli, Johannes Weiner, Larry Woodman,
	Rik van Riel, Dave Airlie, Brendan Conoboy, Joe Donohue,
	Christophe Harle, Duncan Poole, Sherry Cheung, Subhash Gutti,
	John Hubbard, Mark Hairgrove, Lucien Dunning, Cameron Buschardt,
	Arvind Gopalakrishnan, Haggai Eran, Shachar Raindel, Liran Liss,
	Roland Dreier, Ben Sander, Greg Stoner, John Bridgman,
	Michael Mantor, Paul Blinzer, Leonid Shamis, Laurent Morichetti,
	Alexander Deucher, Jérôme Glisse
Subject: [PATCH v12 28/29] HMM: CPU page fault on migrated memory.
Date: Tue, 8 Mar 2016 15:43:21 -0500
Message-Id: <1457469802-11850-29-git-send-email-jglisse@redhat.com>
In-Reply-To: <1457469802-11850-1-git-send-email-jglisse@redhat.com>
References: <1457469802-11850-1-git-send-email-jglisse@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

When the CPU tries to access memory that has been migrated to device
memory, we have to copy it back to system memory. This patch implements
the CPU page fault handler for the special HMM pte swap entry.

Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
---
 mm/hmm.c | 54 +++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 53 insertions(+), 1 deletion(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index 4dcd98f..38943a7 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -472,7 +472,59 @@ int hmm_handle_cpu_fault(struct mm_struct *mm,
 			pmd_t *pmdp, unsigned long addr,
 			unsigned flags, pte_t orig_pte)
 {
-	return VM_FAULT_SIGBUS;
+	unsigned long start, end;
+	struct hmm_event event;
+	swp_entry_t entry;
+	struct hmm *hmm;
+	dma_addr_t dst;
+	pte_t new_pte;
+	int ret;
+
+	/* First check for poisonous entry. */
+	entry = pte_to_swp_entry(orig_pte);
+	if (is_hmm_entry_poisonous(entry))
+		return VM_FAULT_SIGBUS;
+
+	hmm = hmm_ref(mm->hmm);
+	if (!hmm) {
+		pte_t poison = swp_entry_to_pte(make_hmm_entry_poisonous());
+		spinlock_t *ptl;
+		pte_t *ptep;
+
+		/* Check if cpu pte is already updated. */
+		ptep = pte_offset_map_lock(mm, pmdp, addr, &ptl);
+		if (!pte_same(*ptep, orig_pte)) {
+			pte_unmap_unlock(ptep, ptl);
+			return 0;
+		}
+		set_pte_at(mm, addr, ptep, poison);
+		pte_unmap_unlock(ptep, ptl);
+		return VM_FAULT_SIGBUS;
+	}
+
+	/*
+	 * TODO: we likely want to migrate more than one page at a time; we
+	 * need to call into the device driver to get a good hint on the
+	 * range to copy back to system memory.
+	 *
+	 * For now just live with the one page at a time solution.
+	 */
+	start = addr & PAGE_MASK;
+	end = start + PAGE_SIZE;
+	hmm_event_init(&event, hmm, start, end, HMM_COPY_FROM_DEVICE);
+
+	ret = hmm_migrate_back(hmm, &event, mm, vma, &new_pte,
+			       &dst, start, end);
+	hmm_unref(hmm);
+	switch (ret) {
+	case 0:
+		return VM_FAULT_MAJOR;
+	case -ENOMEM:
+		return VM_FAULT_OOM;
+	case -EINVAL:
+	default:
+		return VM_FAULT_SIGBUS;
+	}
 }
 EXPORT_SYMBOL(hmm_handle_cpu_fault);
 
-- 
2.4.3
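
A note for readers without the rest of the series in front of them: the
poison-entry helpers used above, make_hmm_entry_poisonous() and
is_hmm_entry_poisonous(), are introduced by an earlier patch and are not
shown here. The sketch below is a hypothetical reconstruction of their
shape only, assuming a swap type is reserved for HMM poison entries in
the same spirit as the SWP_MIGRATION_* and SWP_HWPOISON types;
SWP_HMM_POISON is a made-up name and the real series may encode this
differently.

#include <linux/swap.h>
#include <linux/swapops.h>

/*
 * Assumption: one type carved out of the swap-type space for HMM
 * poison entries, marking ptes whose device-memory copy is gone.
 */
#define SWP_HMM_POISON	MAX_SWAPFILES

static inline swp_entry_t make_hmm_entry_poisonous(void)
{
	/* The offset carries no information for a poison entry. */
	return swp_entry(SWP_HMM_POISON, 0);
}

static inline bool is_hmm_entry_poisonous(swp_entry_t entry)
{
	return swp_type(entry) == SWP_HMM_POISON;
}

With an encoding like this, the handler's first check is a cheap type
comparison: a poisoned pte always faults to SIGBUS, since the device
copy of the page can no longer be migrated back.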
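
Also not visible in this patch is the call site. Earlier in the series
the generic fault path learns to recognize HMM swap entries and hand
them to hmm_handle_cpu_fault() instead of the normal swap-in path. The
sketch below is hypothetical: is_hmm_entry() is assumed from elsewhere
in the series, do_regular_swap_fault() is a stand-in for the real
swap-in path (do_swap_page() in mm/memory.c takes different arguments
in a real tree), and the vma parameter that the function body above
references is assumed to be part of hmm_handle_cpu_fault()'s full
prototype, which the hunk truncates.

/* Hypothetical dispatch sketch, not the series' actual mm/memory.c hook. */
static int handle_swap_pte_fault(struct mm_struct *mm,
				 struct vm_area_struct *vma,
				 pmd_t *pmdp, unsigned long addr,
				 unsigned flags, pte_t orig_pte)
{
	swp_entry_t entry = pte_to_swp_entry(orig_pte);

	/*
	 * An HMM entry means the page lives in device memory: migrate
	 * it back to system memory rather than swapping it in.
	 */
	if (is_hmm_entry(entry))
		return hmm_handle_cpu_fault(mm, vma, pmdp, addr,
					    flags, orig_pte);

	/* Ordinary swap entry: take the regular swap-in path. */
	return do_regular_swap_fault(mm, vma, pmdp, addr, flags, orig_pte);
}

On success the handler returns VM_FAULT_MAJOR, so the fault is counted
as a major fault: the data had to be copied back from device memory,
which is comparable in cost to a swap-in from disk.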