From: Ralph Campbell <rcampbell@nvidia.com>
To: "Christoph Hellwig" <hch@lst.de>,
	"Jason Gunthorpe" <jgg@ziepe.ca>,
	"Dan Williams" <dan.j.williams@intel.com>,
	"Bharata B Rao" <bharata@linux.ibm.com>,
	"Christian König" <christian.koenig@amd.com>,
	"Ben Skeggs" <bskeggs@redhat.com>
Cc: Jerome Glisse <jglisse@redhat.com>, <kvm-ppc@vger.kernel.org>,
	<amd-gfx@lists.freedesktop.org>,
	<dri-devel@lists.freedesktop.org>,
	<nouveau@lists.freedesktop.org>, <linux-mm@kvack.org>
Subject: Re: [PATCH 2/4] mm: handle multiple owners of device private pages in migrate_vma
Date: Mon, 16 Mar 2020 14:43:13 -0700	[thread overview]
Message-ID: <6a110540-f954-9f0f-e785-7365135d2934@nvidia.com> (raw)
In-Reply-To: <20200316193216.920734-3-hch@lst.de>


On 3/16/20 12:32 PM, Christoph Hellwig wrote:
> Add a new src_owner field to struct migrate_vma.  If the field is set,
> only device private pages with page->pgmap->owner equal to that field
> are migrated.  If the field is not set only "normal" pages are migrated.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Fixes: df6ad69838fc ("mm/device-public-memory: device memory cache coherent with CPU")

When migrating to device private memory, setting src_owner lets the caller
skip source pages that have already been migrated to a different device,
similar to pages swapped out to an actual swap device. But it also prevents
normal pages from being migrated to device private memory. It could still
be useful for the driver to know that a page is owned by a different device
if it has a device-to-device way of migrating data.
nouveau_dmem_migrate_vma() isn't setting args.src_owner, so only normal
pages will be migrated, which I guess is OK for now since nouveau doesn't
currently handle direct GPU to GPU migration.
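
For reference, a minimal sketch of the call pattern being described here,
i.e. a hypothetical driver pulling normal pages into its device private
memory (the variable names are illustrative, not taken from the patch):

	/* Migrate a range of normal pages into device private memory. */
	struct migrate_vma args = {
		.vma	= vma,
		.start	= start,
		.end	= end,
		.src	= src_pfns,
		.dst	= dst_pfns,
		/*
		 * src_owner is left NULL: only normal pages are
		 * collected; device private pages, whether ours or
		 * another device's, are skipped.
		 */
	};

	if (migrate_vma_setup(&args))
		return -EFAULT;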

When migrating device private memory to system memory due to a CPU fault,
the source page should be the device's own device private struct page, so
if it isn't, it makes sense not to migrate whatever normal page is there.
nouveau_dmem_migrate_to_ram() sets src_owner, so this case looks OK.
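
And the corresponding sketch for the CPU fault direction (again
illustrative; nouveau passes drm->dev and the kvmppc code passes
&kvmppc_uvmem_pgmap as the owner):

	/* Migrate one of our device private pages back to system memory. */
	struct migrate_vma args = {
		.vma		= vmf->vma,
		.start		= vmf->address,
		.end		= vmf->address + PAGE_SIZE,
		.src		= &src,
		.dst		= &dst,
		/*
		 * src_owner matches the owner stored in our dev_pagemap,
		 * so only this device's private pages are collected;
		 * normal pages and other devices' pages are skipped.
		 */
		.src_owner	= my_owner,
	};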

Just had to think this through.
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>

> ---
>   arch/powerpc/kvm/book3s_hv_uvmem.c     | 1 +
>   drivers/gpu/drm/nouveau/nouveau_dmem.c | 1 +
>   include/linux/migrate.h                | 8 ++++++++
>   mm/migrate.c                           | 9 ++++++---
>   4 files changed, 16 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
> index 67fefb03b9b7..f44f6b27950f 100644
> --- a/arch/powerpc/kvm/book3s_hv_uvmem.c
> +++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
> @@ -563,6 +563,7 @@ kvmppc_svm_page_out(struct vm_area_struct *vma, unsigned long start,
>   	mig.end = end;
>   	mig.src = &src_pfn;
>   	mig.dst = &dst_pfn;
> +	mig.src_owner = &kvmppc_uvmem_pgmap;
>   
>   	mutex_lock(&kvm->arch.uvmem_lock);
>   	/* The requested page is already paged-out, nothing to do */
> diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
> index a4682272586e..0e36345d395c 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
> @@ -176,6 +176,7 @@ static vm_fault_t nouveau_dmem_migrate_to_ram(struct vm_fault *vmf)
>   		.end		= vmf->address + PAGE_SIZE,
>   		.src		= &src,
>   		.dst		= &dst,
> +		.src_owner	= drm->dev,
>   	};
>   
>   	/*
> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> index 72120061b7d4..3e546cbf03dd 100644
> --- a/include/linux/migrate.h
> +++ b/include/linux/migrate.h
> @@ -196,6 +196,14 @@ struct migrate_vma {
>   	unsigned long		npages;
>   	unsigned long		start;
>   	unsigned long		end;
> +
> +	/*
> +	 * Set to the owner value also stored in page->pgmap->owner for
> +	 * migrating out of device private memory.  If set only device
> +	 * private pages with this owner are migrated.  If not set
> +	 * device private pages are not migrated at all.
> +	 */
> +	void			*src_owner;
>   };
>   
>   int migrate_vma_setup(struct migrate_vma *args);
> diff --git a/mm/migrate.c b/mm/migrate.c
> index b1092876e537..7605d2c23433 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -2241,7 +2241,7 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
>   	arch_enter_lazy_mmu_mode();
>   
>   	for (; addr < end; addr += PAGE_SIZE, ptep++) {
> -		unsigned long mpfn, pfn;
> +		unsigned long mpfn = 0, pfn;
>   		struct page *page;
>   		swp_entry_t entry;
>   		pte_t pte;
> @@ -2255,8 +2255,6 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
>   		}
>   
>   		if (!pte_present(pte)) {
> -			mpfn = 0;
> -
>   			/*
>   			 * Only care about unaddressable device page special
>   			 * page table entry. Other special swap entries are not
> @@ -2267,11 +2265,16 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
>   				goto next;
>   
>   			page = device_private_entry_to_page(entry);
> +			if (page->pgmap->owner != migrate->src_owner)
> +				goto next;
> +
>   			mpfn = migrate_pfn(page_to_pfn(page)) |
>   					MIGRATE_PFN_MIGRATE;
>   			if (is_write_device_private_entry(entry))
>   				mpfn |= MIGRATE_PFN_WRITE;
>   		} else {
> +			if (migrate->src_owner)
> +				goto next;
>   			pfn = pte_pfn(pte);
>   			if (is_zero_pfn(pfn)) {
>   				mpfn = MIGRATE_PFN_MIGRATE;
> 
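
(For completeness: the owner value itself comes from patch 1/4 of this
series, which adds an owner field to struct dev_pagemap. A sketch of how a
driver ties the two together, assuming that interface; my_owner is an
illustrative name:)

	/* At device memory registration time. */
	pgmap->type = MEMORY_DEVICE_PRIVATE;
	pgmap->owner = my_owner;	/* e.g. drm->dev or the pgmap itself */
	addr = devm_memremap_pages(dev, pgmap);

	/*
	 * Later, migrate_vma_setup() with .src_owner == my_owner collects
	 * only the device private pages created by this registration.
	 */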

