From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1751944AbcCHTvr (ORCPT); Tue, 8 Mar 2016 14:51:47 -0500
Received: from mx1.redhat.com ([209.132.183.28]:37502 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751476AbcCHTrl (ORCPT); Tue, 8 Mar 2016 14:47:41 -0500
From: =?UTF-8?q?J=C3=A9r=C3=B4me=20Glisse?=
To: akpm@linux-foundation.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Linus Torvalds, joro@8bytes.org, Mel Gorman, "H. Peter Anvin",
	Peter Zijlstra, Andrea Arcangeli, Johannes Weiner, Larry Woodman,
	Rik van Riel, Dave Airlie, Brendan Conoboy, Joe Donohue,
	Christophe Harle, Duncan Poole, Sherry Cheung, Subhash Gutti,
	John Hubbard, Mark Hairgrove, Lucien Dunning, Cameron Buschardt,
	Arvind Gopalakrishnan, Haggai Eran, Shachar Raindel, Liran Liss,
	Roland Dreier, Ben Sander, Greg Stoner, John Bridgman,
	Michael Mantor, Paul Blinzer, Leonid Shamis, Laurent Morichetti,
	Alexander Deucher, Jerome Glisse, Jatin Kumar
Subject: [PATCH v12 23/29] HMM: new callback for copying memory from and to device memory v2.
Date: Tue, 8 Mar 2016 15:43:16 -0500
Message-Id: <1457469802-11850-24-git-send-email-jglisse@redhat.com>
In-Reply-To: <1457469802-11850-1-git-send-email-jglisse@redhat.com>
References: <1457469802-11850-1-git-send-email-jglisse@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

From: Jerome Glisse

This patch only adds the new callbacks a device driver must implement to
copy memory from and to device memory.

Changed since v1:
  - Pass down the vma to the copy function.
Signed-off-by: Jérôme Glisse
Signed-off-by: Sherry Cheung
Signed-off-by: Subhash Gutti
Signed-off-by: Mark Hairgrove
Signed-off-by: John Hubbard
Signed-off-by: Jatin Kumar
---
 include/linux/hmm.h | 105 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/hmm.c            |   2 +
 2 files changed, 107 insertions(+)

diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 7c66513..9fbfc07 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -65,6 +65,8 @@ enum hmm_etype {
 	HMM_DEVICE_RFAULT,
 	HMM_DEVICE_WFAULT,
 	HMM_WRITE_PROTECT,
+	HMM_COPY_FROM_DEVICE,
+	HMM_COPY_TO_DEVICE,
 };

 /* struct hmm_event - memory event information.
@@ -170,6 +172,109 @@ struct hmm_device_ops {
 	 */
 	int (*update)(struct hmm_mirror *mirror,
 		      struct hmm_event *event);
+
+	/* copy_from_device() - copy from device memory to system memory.
+	 *
+	 * @mirror: The mirror that links the process address space with the device.
+	 * @event: The event that triggered the copy.
+	 * @dst: Array containing the hmm_pte of the destination memory.
+	 * @start: Start address of the range (sub-range of event) to copy.
+	 * @end: End address of the range (sub-range of event) to copy.
+	 * Returns: 0 on success, error code otherwise {-ENOMEM, -EIO}.
+	 *
+	 * Called when migrating memory from device memory to system memory.
+	 * The dst array contains the valid DMA addresses, for the device, of
+	 * the pages to copy to (or the pfn of the page if
+	 * hmm_device.device == NULL).
+	 *
+	 * If event.etype == HMM_FORK then the device driver only needs to
+	 * schedule a copy to the system pages given in the dst hmm_pte array.
+	 * Do not update the device page table, and do not pause/stop the
+	 * device threads that are using this address space. Just copy memory.
+	 *
+	 * If event.etype == HMM_COPY_FROM_DEVICE then the device driver must
+	 * first write protect the range, then schedule the copy, then update
+	 * its page table to use the new system memory given in the dst array.
+	 * Some devices can perform all this in an atomic fashion from the
+	 * device point of view.
+	 * The device driver must also free the device memory once the copy is
+	 * done.
+	 *
+	 * The device driver must not fail lightly: any failure results in the
+	 * device process being killed and the CPU page table entries being
+	 * set to HWPOISON entries.
+	 *
+	 * Note that the device driver must clear the valid bit of any dst
+	 * entry it failed to copy.
+	 *
+	 * On failure the mirror will be killed by HMM, which will do an
+	 * HMM_MUNMAP invalidation of all the memory; when this happens the
+	 * device driver can free the device memory.
+	 *
+	 * Note also that there can be holes in the range being copied, i.e.
+	 * some entries of the dst array will not have the valid bit set; the
+	 * device driver must simply ignore non-valid entries.
+	 *
+	 * Finally, the device driver must set the dirty bit for each page
+	 * that was modified since it was copied into the device memory. This
+	 * must be conservative, i.e. if the device can not determine that
+	 * with certainty then it must set the dirty bit unconditionally.
+	 *
+	 * Return 0 on success, error value otherwise:
+	 * -ENOMEM Not enough memory for performing the operation.
+	 * -EIO    Some input/output error with the device.
+	 *
+	 * Any other return value triggers a warning and is transformed to
+	 * -EIO.
+	 */
+	int (*copy_from_device)(struct hmm_mirror *mirror,
+				const struct hmm_event *event,
+				dma_addr_t *dst,
+				unsigned long start,
+				unsigned long end);
+
+	/* copy_to_device() - copy to device memory from system memory.
+	 *
+	 * @mirror: The mirror that links the process address space with the device.
+	 * @event: The event that triggered the copy.
+	 * @vma: The vma corresponding to the range.
+	 * @dst: Array containing the hmm_pte of the destination memory.
+	 * @start: Start address of the range (sub-range of event) to copy.
+	 * @end: End address of the range (sub-range of event) to copy.
+	 * Returns: 0 on success, error code otherwise {-ENOMEM, -EIO}.
+	 *
+	 * Called when migrating memory from system memory to device memory.
+	 * The dst array is empty; all of its entries are equal to zero.
Device + * driver must allocate the device memory and populate each entry using + * hmm_pte_from_device_pfn() only the valid device bit and hardware + * specific bit will be preserve (write and dirty will be taken from + * the original entry inside the mirror page table). It is advice to + * set the device pfn to match the physical address of device memory + * being use. The event.etype will be equals to HMM_COPY_TO_DEVICE. + * + * Device driver that can atomically copy a page and update its page + * table entry to point to the device memory can do that. Partial + * failure is allowed, entry that have not been migrated must have + * the HMM_PTE_VALID_DEV bit clear inside the dst array. HMM will + * update the CPU page table of failed entry to point back to the + * system page. + * + * Note that device driver is responsible for allocating and freeing + * the device memory and properly updating to dst array entry with + * the allocated device memory. + * + * Return 0 on success, error value otherwise : + * -ENOMEM Not enough memory for performing the operation. + * -EIO Some input/output error with the device. + * + * All other return value trigger warning and are transformed to -EIO. + * Errors means that the migration is aborted. So in case of partial + * failure if device do not want to fully abort it must return 0. + * Device driver can update device page table only if it knows it will + * not return failure. 
+	 */
+	int (*copy_to_device)(struct hmm_mirror *mirror,
+			      const struct hmm_event *event,
+			      struct vm_area_struct *vma,
+			      dma_addr_t *dst,
+			      unsigned long start,
+			      unsigned long end);
 };

diff --git a/mm/hmm.c b/mm/hmm.c
index 9455443..d26abe4 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -78,6 +78,8 @@ static inline int hmm_event_init(struct hmm_event *event,
 	switch (etype) {
 	case HMM_DEVICE_RFAULT:
 	case HMM_DEVICE_WFAULT:
+	case HMM_COPY_TO_DEVICE:
+	case HMM_COPY_FROM_DEVICE:
 		break;
 	case HMM_FORK:
 	case HMM_WRITE_PROTECT:
--
2.4.3
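To make the dst-array contract described in the comments concrete, here is a minimal user-space sketch of the two callbacks' semantics. Everything named SIM_* is hypothetical: the bit layout, pfn shift, index-based start/end, and the byte-sized page stores merely stand in for the real hmm_pte encoding, address ranges, and device memory. Only the protocol itself mirrors the patch: skip holes (entries without the valid bit), clear the valid bit on a failed entry, set the dirty bit conservatively on copy-back, and leave unmigrated entries invalid on partial failure.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical pte layout; the real flags live in include/linux/hmm.h. */
#define SIM_PTE_VALID     (1ull << 0)  /* entry points at a real page       */
#define SIM_PTE_DIRTY     (1ull << 1)  /* page was modified while on device */
#define SIM_PTE_PFN_SHIFT 12

/* Simulated memory: one byte of payload per page frame. */
#define SIM_NPAGES 8
static unsigned char device_pages[SIM_NPAGES];
static unsigned char system_pages[SIM_NPAGES];

/* copy_from_device contract: for each valid dst entry, copy the device
 * page back to system memory, set the dirty bit conservatively, clear
 * the valid bit on failure, and silently skip holes. */
static int sim_copy_from_device(uint64_t *dst,
                                unsigned long start, unsigned long end)
{
	for (unsigned long i = start; i < end; i++) {
		unsigned long pfn;

		if (!(dst[i] & SIM_PTE_VALID))
			continue;              /* hole: ignore the entry */
		pfn = (unsigned long)(dst[i] >> SIM_PTE_PFN_SHIFT);
		if (pfn >= SIM_NPAGES) {
			dst[i] &= ~SIM_PTE_VALID; /* failed entry: clear valid */
			continue;
		}
		system_pages[pfn] = device_pages[pfn];
		dst[i] |= SIM_PTE_DIRTY; /* can't tell, so be conservative */
	}
	return 0;
}

/* copy_to_device contract: dst starts all-zero; "allocate" a device pfn,
 * copy the system page in, and populate the entry. Entries left at zero
 * (here: when device memory runs out) stay invalid, which models an
 * allowed partial failure with a 0 return. */
static int sim_copy_to_device(uint64_t *dst, unsigned long n)
{
	static unsigned long next_pfn;

	for (unsigned long i = 0; i < n; i++) {
		unsigned long pfn;

		if (next_pfn >= SIM_NPAGES)
			return 0;  /* partial failure: rest stay invalid */
		pfn = next_pfn++;
		device_pages[pfn] = system_pages[pfn];
		dst[i] = ((uint64_t)pfn << SIM_PTE_PFN_SHIFT) | SIM_PTE_VALID;
	}
	return 0;
}
```

A round trip then looks like: migrate a few pages to the device, let the device modify one, punch a hole in the dst array, and copy back; the modified data lands in system memory, migrated entries carry the dirty bit, and the hole is untouched.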