linux-mm.kvack.org archive mirror
* [PATCH 0/2] mm/migrate: optimize migrate_vma_setup() for holes
@ 2020-07-09 16:57 Ralph Campbell
  2020-07-09 16:57 ` [PATCH 1/2] " Ralph Campbell
  2020-07-09 16:57 ` [PATCH 2/2] mm/migrate: add migrate-shared test for migrate_vma_*() Ralph Campbell
  0 siblings, 2 replies; 5+ messages in thread
From: Ralph Campbell @ 2020-07-09 16:57 UTC (permalink / raw)
  To: linux-mm, kvm-ppc, linux-kselftest, linux-kernel
  Cc: Jerome Glisse, John Hubbard, Christoph Hellwig, Jason Gunthorpe,
	Bharata B Rao, Shuah Khan, Andrew Morton, Ralph Campbell

A simple optimization for migrate_vma_*() when the source VMA is not
anonymous, plus a new test case to exercise it.
This is based on linux-mm and is for Andrew Morton's tree.

Ralph Campbell (2):
  mm/migrate: optimize migrate_vma_setup() for holes
  mm/migrate: add migrate-shared test for migrate_vma_*()

 mm/migrate.c                           |  6 ++++-
 tools/testing/selftests/vm/hmm-tests.c | 35 ++++++++++++++++++++++++++
 2 files changed, 40 insertions(+), 1 deletion(-)

-- 
2.20.1




* [PATCH 1/2] mm/migrate: optimize migrate_vma_setup() for holes
  2020-07-09 16:57 [PATCH 0/2] mm/migrate: optimize migrate_vma_setup() for holes Ralph Campbell
@ 2020-07-09 16:57 ` Ralph Campbell
  2020-07-10  6:35   ` Bharata B Rao
  2020-07-09 16:57 ` [PATCH 2/2] mm/migrate: add migrate-shared test for migrate_vma_*() Ralph Campbell
  1 sibling, 1 reply; 5+ messages in thread
From: Ralph Campbell @ 2020-07-09 16:57 UTC (permalink / raw)
  To: linux-mm, kvm-ppc, linux-kselftest, linux-kernel
  Cc: Jerome Glisse, John Hubbard, Christoph Hellwig, Jason Gunthorpe,
	Bharata B Rao, Shuah Khan, Andrew Morton, Ralph Campbell

When migrating system memory to device private memory, if the source
address range is a valid VMA range and there is no memory or a zero page,
the source PFN array is marked as valid but with no PFN. This lets the
device driver allocate private memory and clear it, then insert the new
device private struct page into the CPU's page tables when
migrate_vma_pages() is called. migrate_vma_pages() only inserts the
new page if the VMA is an anonymous range. There is no point in telling
the device driver to allocate device private memory and then not migrate
the page. Instead, mark the source PFN array entries as not migrating to
avoid this overhead.

Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
---
 mm/migrate.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index b0125c082549..8aa434691577 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2204,9 +2204,13 @@ static int migrate_vma_collect_hole(unsigned long start,
 {
 	struct migrate_vma *migrate = walk->private;
 	unsigned long addr;
+	unsigned long flags;
+
+	/* Only allow populating anonymous memory. */
+	flags = vma_is_anonymous(walk->vma) ? MIGRATE_PFN_MIGRATE : 0;
 
 	for (addr = start; addr < end; addr += PAGE_SIZE) {
-		migrate->src[migrate->npages] = MIGRATE_PFN_MIGRATE;
+		migrate->src[migrate->npages] = flags;
 		migrate->dst[migrate->npages] = 0;
 		migrate->npages++;
 		migrate->cpages++;
-- 
2.20.1
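
As a driver-side illustration of the overhead this avoids (a minimal sketch
with a hypothetical alloc_device_page() helper; not taken from any real
driver): after migrate_vma_setup() returns, a device driver typically walks
the source array and allocates device private memory only for entries
flagged MIGRATE_PFN_MIGRATE, so clearing the flag up front for non-anonymous
VMAs keeps the driver from allocating pages that migrate_vma_pages() would
refuse to insert anyway.

#include <linux/mm.h>
#include <linux/migrate.h>

/* Hypothetical: allocate a device private page from the driver's pool. */
static struct page *alloc_device_page(void);

static int example_drv_migrate(struct migrate_vma *args)
{
	unsigned long i;
	int ret;

	ret = migrate_vma_setup(args);
	if (ret)
		return ret;

	for (i = 0; i < args->npages; i++) {
		struct page *dpage;

		/* Entries not flagged for migration are skipped, so no
		 * device memory is allocated for them.
		 */
		if (!(args->src[i] & MIGRATE_PFN_MIGRATE))
			continue;

		dpage = alloc_device_page();
		if (!dpage)
			continue;

		/* A real driver would also zero-fill or copy dpage here. */
		lock_page(dpage);
		args->dst[i] = migrate_pfn(page_to_pfn(dpage)) |
			       MIGRATE_PFN_LOCKED;
	}

	migrate_vma_pages(args);
	migrate_vma_finalize(args);
	return 0;
}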




* [PATCH 2/2] mm/migrate: add migrate-shared test for migrate_vma_*()
  2020-07-09 16:57 [PATCH 0/2] mm/migrate: optimize migrate_vma_setup() for holes Ralph Campbell
  2020-07-09 16:57 ` [PATCH 1/2] " Ralph Campbell
@ 2020-07-09 16:57 ` Ralph Campbell
  1 sibling, 0 replies; 5+ messages in thread
From: Ralph Campbell @ 2020-07-09 16:57 UTC (permalink / raw)
  To: linux-mm, kvm-ppc, linux-kselftest, linux-kernel
  Cc: Jerome Glisse, John Hubbard, Christoph Hellwig, Jason Gunthorpe,
	Bharata B Rao, Shuah Khan, Andrew Morton, Ralph Campbell

Add a migrate_vma_*() self test for mmap(MAP_SHARED) to verify that
!vma_is_anonymous() ranges won't be migrated.

Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
---
 tools/testing/selftests/vm/hmm-tests.c | 35 ++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/tools/testing/selftests/vm/hmm-tests.c b/tools/testing/selftests/vm/hmm-tests.c
index 79db22604019..e83d3ab37697 100644
--- a/tools/testing/selftests/vm/hmm-tests.c
+++ b/tools/testing/selftests/vm/hmm-tests.c
@@ -931,6 +931,41 @@ TEST_F(hmm, migrate_fault)
 	hmm_buffer_free(buffer);
 }
 
+/*
+ * Migrate anonymous shared memory to device private memory.
+ */
+TEST_F(hmm, migrate_shared)
+{
+	struct hmm_buffer *buffer;
+	unsigned long npages;
+	unsigned long size;
+	int ret;
+
+	npages = ALIGN(HMM_BUFFER_SIZE, self->page_size) >> self->page_shift;
+	ASSERT_NE(npages, 0);
+	size = npages << self->page_shift;
+
+	buffer = malloc(sizeof(*buffer));
+	ASSERT_NE(buffer, NULL);
+
+	buffer->fd = -1;
+	buffer->size = size;
+	buffer->mirror = malloc(size);
+	ASSERT_NE(buffer->mirror, NULL);
+
+	buffer->ptr = mmap(NULL, size,
+			   PROT_READ | PROT_WRITE,
+			   MAP_SHARED | MAP_ANONYMOUS,
+			   buffer->fd, 0);
+	ASSERT_NE(buffer->ptr, MAP_FAILED);
+
+	/* Migrate memory to device. */
+	ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_MIGRATE, buffer, npages);
+	ASSERT_EQ(ret, -ENOENT);
+
+	hmm_buffer_free(buffer);
+}
+
 /*
  * Try to migrate various memory types to device private memory.
  */
-- 
2.20.1




* Re: [PATCH 1/2] mm/migrate: optimize migrate_vma_setup() for holes
  2020-07-09 16:57 ` [PATCH 1/2] " Ralph Campbell
@ 2020-07-10  6:35   ` Bharata B Rao
  2020-07-10 16:19     ` Ralph Campbell
  0 siblings, 1 reply; 5+ messages in thread
From: Bharata B Rao @ 2020-07-10  6:35 UTC (permalink / raw)
  To: Ralph Campbell
  Cc: linux-mm, kvm-ppc, linux-kselftest, linux-kernel, Jerome Glisse,
	John Hubbard, Christoph Hellwig, Jason Gunthorpe, Shuah Khan,
	Andrew Morton

On Thu, Jul 09, 2020 at 09:57:10AM -0700, Ralph Campbell wrote:
> When migrating system memory to device private memory, if the source
> address range is a valid VMA range and there is no memory or a zero page,
> the source PFN array is marked as valid but with no PFN. This lets the
> device driver allocate private memory and clear it, then insert the new
> device private struct page into the CPU's page tables when
> migrate_vma_pages() is called. migrate_vma_pages() only inserts the
> new page if the VMA is an anonymous range. There is no point in telling
> the device driver to allocate device private memory and then not migrate
> the page. Instead, mark the source PFN array entries as not migrating to
> avoid this overhead.
> 
> Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
> ---
>  mm/migrate.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/migrate.c b/mm/migrate.c
> index b0125c082549..8aa434691577 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -2204,9 +2204,13 @@ static int migrate_vma_collect_hole(unsigned long start,
>  {
>  	struct migrate_vma *migrate = walk->private;
>  	unsigned long addr;
> +	unsigned long flags;
> +
> +	/* Only allow populating anonymous memory. */
> +	flags = vma_is_anonymous(walk->vma) ? MIGRATE_PFN_MIGRATE : 0;
>  
>  	for (addr = start; addr < end; addr += PAGE_SIZE) {
> -		migrate->src[migrate->npages] = MIGRATE_PFN_MIGRATE;
> +		migrate->src[migrate->npages] = flags;

I see a few other such cases where we directly populate MIGRATE_PFN_MIGRATE
w/o a pfn in migrate_vma_collect_pmd() and wonder why the vma_is_anonymous()
check can't help there as well?

1. pte_none() check in migrate_vma_collect_pmd()
2. is_zero_pfn() check in migrate_vma_collect_pmd()

Regards,
Bharata.
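
For reference, both of those sites mark a source entry as migratable with no
PFN, the same encoding migrate_vma_collect_hole() produces; in rough outline
(a paraphrase of the collect loop, not a verbatim excerpt) they are:

	pte = *ptep;

	if (pte_none(pte)) {
		/* Case 1: empty pte, ask the driver for a fresh page. */
		mpfn = MIGRATE_PFN_MIGRATE;
		migrate->cpages++;
		goto next;
	}

	/* ... handling for present ptes ... */

	if (is_zero_pfn(pte_pfn(pte))) {
		/* Case 2: zero page, also reported with no source PFN. */
		mpfn = MIGRATE_PFN_MIGRATE;
		migrate->cpages++;
		goto next;
	}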



* Re: [PATCH 1/2] mm/migrate: optimize migrate_vma_setup() for holes
  2020-07-10  6:35   ` Bharata B Rao
@ 2020-07-10 16:19     ` Ralph Campbell
  0 siblings, 0 replies; 5+ messages in thread
From: Ralph Campbell @ 2020-07-10 16:19 UTC (permalink / raw)
  To: bharata
  Cc: linux-mm, kvm-ppc, linux-kselftest, linux-kernel, Jerome Glisse,
	John Hubbard, Christoph Hellwig, Jason Gunthorpe, Shuah Khan,
	Andrew Morton


On 7/9/20 11:35 PM, Bharata B Rao wrote:
> On Thu, Jul 09, 2020 at 09:57:10AM -0700, Ralph Campbell wrote:
>> When migrating system memory to device private memory, if the source
>> address range is a valid VMA range and there is no memory or a zero page,
>> the source PFN array is marked as valid but with no PFN. This lets the
>> device driver allocate private memory and clear it, then insert the new
>> device private struct page into the CPU's page tables when
>> migrate_vma_pages() is called. migrate_vma_pages() only inserts the
>> new page if the VMA is an anonymous range. There is no point in telling
>> the device driver to allocate device private memory and then not migrate
>> the page. Instead, mark the source PFN array entries as not migrating to
>> avoid this overhead.
>>
>> Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
>> ---
>>   mm/migrate.c | 6 +++++-
>>   1 file changed, 5 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index b0125c082549..8aa434691577 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -2204,9 +2204,13 @@ static int migrate_vma_collect_hole(unsigned long start,
>>   {
>>   	struct migrate_vma *migrate = walk->private;
>>   	unsigned long addr;
>> +	unsigned long flags;
>> +
>> +	/* Only allow populating anonymous memory. */
>> +	flags = vma_is_anonymous(walk->vma) ? MIGRATE_PFN_MIGRATE : 0;
>>   
>>   	for (addr = start; addr < end; addr += PAGE_SIZE) {
>> -		migrate->src[migrate->npages] = MIGRATE_PFN_MIGRATE;
>> +		migrate->src[migrate->npages] = flags;
> 
> I see a few other such cases where we directly populate MIGRATE_PFN_MIGRATE
> w/o a pfn in migrate_vma_collect_pmd() and wonder why the vma_is_anonymous()
> check can't help there as well?
> 
> 1. pte_none() check in migrate_vma_collect_pmd()
> 2. is_zero_pfn() check in migrate_vma_collect_pmd()
> 
> Regards,
> Bharata.

For case 1, this seems like a useful addition.
For case 2, the zero page is only inserted if the VMA is marked read-only and
anonymous so I don't think the check is needed.
I'll post a v2 with the change.

Thanks for the suggestions!
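
A rough sketch of what gating case 1 on the VMA type could look like inside
migrate_vma_collect_pmd() (illustrative only; the actual v2 may differ):

	if (pte_none(pte)) {
		/* Only ask the driver to populate anonymous memory. */
		if (vma_is_anonymous(walk->vma)) {
			mpfn = MIGRATE_PFN_MIGRATE;
			migrate->cpages++;
		}
		goto next;
	}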



