* [PATCH 1/5] mm/migrate.c: remove unnecessary VM_BUG_ON_PAGE on putback_movable_page()
2021-03-20 9:36 [PATCH 0/5] Cleanup and fixup for mm/migrate.c Miaohe Lin
@ 2021-03-20 9:36 ` Miaohe Lin
2021-03-20 14:11 ` Rafael Aquini
2021-03-20 9:36 ` [PATCH 2/5] mm/migrate.c: remove unnecessary rc != MIGRATEPAGE_SUCCESS check in 'else' case Miaohe Lin
` (3 subsequent siblings)
4 siblings, 1 reply; 15+ messages in thread
From: Miaohe Lin @ 2021-03-20 9:36 UTC (permalink / raw)
To: akpm; +Cc: jglisse, shy828301, linux-kernel, linux-mm, linmiaohe
The !PageLocked() check is implicitly done in PageMovable(). Remove this
explicit one.
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
mm/migrate.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index 47df0df8f21a..e4ca5ef508ea 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -146,7 +146,6 @@ void putback_movable_page(struct page *page)
struct address_space *mapping;
VM_BUG_ON_PAGE(!PageLocked(page), page);
- VM_BUG_ON_PAGE(!PageMovable(page), page);
VM_BUG_ON_PAGE(!PageIsolated(page), page);
mapping = page_mapping(page);
--
2.19.1
^ permalink raw reply related [flat|nested] 15+ messages in thread
* Re: [PATCH 1/5] mm/migrate.c: remove unnecessary VM_BUG_ON_PAGE on putback_movable_page()
2021-03-20 9:36 ` [PATCH 1/5] mm/migrate.c: remove unnecessary VM_BUG_ON_PAGE on putback_movable_page() Miaohe Lin
@ 2021-03-20 14:11 ` Rafael Aquini
2021-03-22 1:52 ` Miaohe Lin
0 siblings, 1 reply; 15+ messages in thread
From: Rafael Aquini @ 2021-03-20 14:11 UTC (permalink / raw)
To: Miaohe Lin; +Cc: akpm, jglisse, shy828301, linux-kernel, linux-mm
On Sat, Mar 20, 2021 at 05:36:57AM -0400, Miaohe Lin wrote:
> The !PageLocked() check is implicitly done in PageMovable(). Remove this
> explicit one.
>
I'd just keep the explicit assertion in place, regardless.
But if you're going to stick with this patch, please fix it because it's
removing the wrong assertion.
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
> mm/migrate.c | 1 -
> 1 file changed, 1 deletion(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 47df0df8f21a..e4ca5ef508ea 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -146,7 +146,6 @@ void putback_movable_page(struct page *page)
> struct address_space *mapping;
>
> VM_BUG_ON_PAGE(!PageLocked(page), page);
> - VM_BUG_ON_PAGE(!PageMovable(page), page);
> VM_BUG_ON_PAGE(!PageIsolated(page), page);
>
> mapping = page_mapping(page);
> --
> 2.19.1
>
>
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH 1/5] mm/migrate.c: remove unnecessary VM_BUG_ON_PAGE on putback_movable_page()
2021-03-20 14:11 ` Rafael Aquini
@ 2021-03-22 1:52 ` Miaohe Lin
0 siblings, 0 replies; 15+ messages in thread
From: Miaohe Lin @ 2021-03-22 1:52 UTC (permalink / raw)
To: Rafael Aquini; +Cc: akpm, jglisse, shy828301, linux-kernel, linux-mm
Hi:
On 2021/3/20 22:11, Rafael Aquini wrote:
> On Sat, Mar 20, 2021 at 05:36:57AM -0400, Miaohe Lin wrote:
>> The !PageLocked() check is implicitly done in PageMovable(). Remove this
>> explicit one.
>>
>
> I'd just keep the explicit assertion in place, regardless.
> But if you're going to stick with this patch, please fix it because it's
> removing the wrong assertion.
Many thanks for your reply. I should have had a coffee when I made this patch! :(
Will fix it. Many thanks for the reminder!
>
>
>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>> ---
>> mm/migrate.c | 1 -
>> 1 file changed, 1 deletion(-)
>>
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index 47df0df8f21a..e4ca5ef508ea 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -146,7 +146,6 @@ void putback_movable_page(struct page *page)
>> struct address_space *mapping;
>>
>> VM_BUG_ON_PAGE(!PageLocked(page), page);
>> - VM_BUG_ON_PAGE(!PageMovable(page), page);
>> VM_BUG_ON_PAGE(!PageIsolated(page), page);
>>
>> mapping = page_mapping(page);
>> --
>> 2.19.1
>>
>>
>
> .
>
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH 2/5] mm/migrate.c: remove unnecessary rc != MIGRATEPAGE_SUCCESS check in 'else' case
2021-03-20 9:36 [PATCH 0/5] Cleanup and fixup for mm/migrate.c Miaohe Lin
2021-03-20 9:36 ` [PATCH 1/5] mm/migrate.c: remove unnecessary VM_BUG_ON_PAGE on putback_movable_page() Miaohe Lin
@ 2021-03-20 9:36 ` Miaohe Lin
2021-03-23 10:23 ` David Hildenbrand
2021-03-20 9:36 ` [PATCH 3/5] mm/migrate.c: fix potential indeterminate pte entry in migrate_vma_insert_page() Miaohe Lin
` (2 subsequent siblings)
4 siblings, 1 reply; 15+ messages in thread
From: Miaohe Lin @ 2021-03-20 9:36 UTC (permalink / raw)
To: akpm; +Cc: jglisse, shy828301, linux-kernel, linux-mm, linmiaohe
It's guaranteed that in the 'else' case of the rc == MIGRATEPAGE_SUCCESS
check, rc does not equal MIGRATEPAGE_SUCCESS. Remove this unnecessary
check.
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
mm/migrate.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index e4ca5ef508ea..20a3bf75270a 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1374,7 +1374,7 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
out:
if (rc == MIGRATEPAGE_SUCCESS)
putback_active_hugepage(hpage);
- else if (rc != -EAGAIN && rc != MIGRATEPAGE_SUCCESS)
+ else if (rc != -EAGAIN)
list_move_tail(&hpage->lru, ret);
/*
--
2.19.1
^ permalink raw reply related [flat|nested] 15+ messages in thread
* Re: [PATCH 2/5] mm/migrate.c: remove unnecessary rc != MIGRATEPAGE_SUCCESS check in 'else' case
2021-03-20 9:36 ` [PATCH 2/5] mm/migrate.c: remove unnecessary rc != MIGRATEPAGE_SUCCESS check in 'else' case Miaohe Lin
@ 2021-03-23 10:23 ` David Hildenbrand
0 siblings, 0 replies; 15+ messages in thread
From: David Hildenbrand @ 2021-03-23 10:23 UTC (permalink / raw)
To: Miaohe Lin, akpm; +Cc: jglisse, shy828301, linux-kernel, linux-mm
On 20.03.21 10:36, Miaohe Lin wrote:
> It's guaranteed that in the 'else' case of the rc == MIGRATEPAGE_SUCCESS
> check, rc does not equal MIGRATEPAGE_SUCCESS. Remove this unnecessary
> check.
>
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
> mm/migrate.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index e4ca5ef508ea..20a3bf75270a 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1374,7 +1374,7 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
> out:
> if (rc == MIGRATEPAGE_SUCCESS)
> putback_active_hugepage(hpage);
> - else if (rc != -EAGAIN && rc != MIGRATEPAGE_SUCCESS)
> + else if (rc != -EAGAIN)
> list_move_tail(&hpage->lru, ret);
>
> /*
>
Reviewed-by: David Hildenbrand <david@redhat.com>
--
Thanks,
David / dhildenb
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH 3/5] mm/migrate.c: fix potential indeterminate pte entry in migrate_vma_insert_page()
2021-03-20 9:36 [PATCH 0/5] Cleanup and fixup for mm/migrate.c Miaohe Lin
2021-03-20 9:36 ` [PATCH 1/5] mm/migrate.c: remove unnecessary VM_BUG_ON_PAGE on putback_movable_page() Miaohe Lin
2021-03-20 9:36 ` [PATCH 2/5] mm/migrate.c: remove unnecessary rc != MIGRATEPAGE_SUCCESS check in 'else' case Miaohe Lin
@ 2021-03-20 9:36 ` Miaohe Lin
2021-03-23 10:26 ` David Hildenbrand
2021-03-20 9:37 ` [PATCH 4/5] mm/migrate.c: use helper migrate_vma_collect_skip() in migrate_vma_collect_hole() Miaohe Lin
2021-03-20 9:37 ` [PATCH 5/5] mm/migrate.c: fix potential deadlock in NUMA balancing shared exec THP case Miaohe Lin
4 siblings, 1 reply; 15+ messages in thread
From: Miaohe Lin @ 2021-03-20 9:36 UTC (permalink / raw)
To: akpm; +Cc: jglisse, shy828301, linux-kernel, linux-mm, linmiaohe
If the zone device page does not belong to un-addressable device memory,
the variable entry will be uninitialized and lead to indeterminate pte
entry ultimately. Fix this unexpectant case and warn about it.
Fixes: df6ad69838fc ("mm/device-public-memory: device memory cache coherent with CPU")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
mm/migrate.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/mm/migrate.c b/mm/migrate.c
index 20a3bf75270a..271081b014cb 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2972,6 +2972,13 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,
swp_entry = make_device_private_entry(page, vma->vm_flags & VM_WRITE);
entry = swp_entry_to_pte(swp_entry);
+ } else {
+ /*
+ * For now we only support migrating to un-addressable
+ * device memory.
+ */
+ WARN_ON(1);
+ goto abort;
}
} else {
entry = mk_pte(page, vma->vm_page_prot);
--
2.19.1
^ permalink raw reply related [flat|nested] 15+ messages in thread
* Re: [PATCH 3/5] mm/migrate.c: fix potential indeterminate pte entry in migrate_vma_insert_page()
2021-03-20 9:36 ` [PATCH 3/5] mm/migrate.c: fix potential indeterminate pte entry in migrate_vma_insert_page() Miaohe Lin
@ 2021-03-23 10:26 ` David Hildenbrand
2021-03-23 11:07 ` Alistair Popple
2021-03-23 11:24 ` Miaohe Lin
0 siblings, 2 replies; 15+ messages in thread
From: David Hildenbrand @ 2021-03-23 10:26 UTC (permalink / raw)
To: Miaohe Lin, akpm; +Cc: jglisse, shy828301, linux-kernel, linux-mm
On 20.03.21 10:36, Miaohe Lin wrote:
> If the zone device page does not belong to un-addressable device memory,
> the variable entry will be uninitialized and lead to indeterminate pte
> entry ultimately. Fix this unexpectant case and warn about it.
s/unexpectant/unexpected/
>
> Fixes: df6ad69838fc ("mm/device-public-memory: device memory cache coherent with CPU")
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
> mm/migrate.c | 7 +++++++
> 1 file changed, 7 insertions(+)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 20a3bf75270a..271081b014cb 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -2972,6 +2972,13 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,
>
> swp_entry = make_device_private_entry(page, vma->vm_flags & VM_WRITE);
> entry = swp_entry_to_pte(swp_entry);
> + } else {
> + /*
> + * For now we only support migrating to un-addressable
> + * device memory.
> + */
> + WARN_ON(1);
> + goto abort;
Fix it by crashing the kernel with panic_on_warn? :)
If this case can actually happen, then no WARN_ON() - rather a
pr_warn_once(). If this case cannot happen, why do we even care (it's
not a fix then)?
--
Thanks,
David / dhildenb
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH 3/5] mm/migrate.c: fix potential indeterminate pte entry in migrate_vma_insert_page()
2021-03-23 10:26 ` David Hildenbrand
@ 2021-03-23 11:07 ` Alistair Popple
2021-03-23 11:28 ` Miaohe Lin
2021-03-23 11:24 ` Miaohe Lin
1 sibling, 1 reply; 15+ messages in thread
From: Alistair Popple @ 2021-03-23 11:07 UTC (permalink / raw)
To: linux-kernel
Cc: David Hildenbrand, Miaohe Lin, akpm, jglisse, shy828301, linux-mm
On Tuesday, 23 March 2021 9:26:43 PM AEDT David Hildenbrand wrote:
> On 20.03.21 10:36, Miaohe Lin wrote:
> > If the zone device page does not belong to un-addressable device memory,
> > the variable entry will be uninitialized and lead to indeterminate pte
> > entry ultimately. Fix this unexpectant case and warn about it.
>
> s/unexpectant/unexpected/
>
> >
> > Fixes: df6ad69838fc ("mm/device-public-memory: device memory cache coherent with CPU")
> > Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> > ---
> > mm/migrate.c | 7 +++++++
> > 1 file changed, 7 insertions(+)
> >
> > diff --git a/mm/migrate.c b/mm/migrate.c
> > index 20a3bf75270a..271081b014cb 100644
> > --- a/mm/migrate.c
> > +++ b/mm/migrate.c
> > @@ -2972,6 +2972,13 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,
> >
> > swp_entry = make_device_private_entry(page, vma->vm_flags & VM_WRITE);
> > entry = swp_entry_to_pte(swp_entry);
> > + } else {
> > + /*
> > + * For now we only support migrating to un-addressable
> > + * device memory.
> > + */
> > + WARN_ON(1);
> > + goto abort;
>
> Fix it by crashing the kernel with panic_on_warn? :)
>
> If this case can actually happen, then no WARN_ON() - rather a
> pr_warn_once(). If this case cannot happen, why do we even care (it's
> not a fix then)?
There is also already a check for this case in migrate_vma_pages(). The
problem is it happens after the call to migrate_vma_insert_page(). I wonder if
instead it would be better just to move the existing check to before that
call?
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH 3/5] mm/migrate.c: fix potential indeterminate pte entry in migrate_vma_insert_page()
2021-03-23 11:07 ` Alistair Popple
@ 2021-03-23 11:28 ` Miaohe Lin
2021-03-23 12:15 ` Miaohe Lin
0 siblings, 1 reply; 15+ messages in thread
From: Miaohe Lin @ 2021-03-23 11:28 UTC (permalink / raw)
To: Alistair Popple
Cc: David Hildenbrand, akpm, jglisse, shy828301, linux-mm, linux-kernel
On 2021/3/23 19:07, Alistair Popple wrote:
> On Tuesday, 23 March 2021 9:26:43 PM AEDT David Hildenbrand wrote:
>> On 20.03.21 10:36, Miaohe Lin wrote:
>>> If the zone device page does not belong to un-addressable device memory,
>>> the variable entry will be uninitialized and lead to indeterminate pte
>>> entry ultimately. Fix this unexpectant case and warn about it.
>>
>> s/unexpectant/unexpected/
>>
>>>
>>> Fixes: df6ad69838fc ("mm/device-public-memory: device memory cache coherent with CPU")
>>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>>> ---
>>> mm/migrate.c | 7 +++++++
>>> 1 file changed, 7 insertions(+)
>>>
>>> diff --git a/mm/migrate.c b/mm/migrate.c
>>> index 20a3bf75270a..271081b014cb 100644
>>> --- a/mm/migrate.c
>>> +++ b/mm/migrate.c
>>> @@ -2972,6 +2972,13 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,
>>>
>>> swp_entry = make_device_private_entry(page, vma->vm_flags & VM_WRITE);
>>> entry = swp_entry_to_pte(swp_entry);
>>> + } else {
>>> + /*
>>> + * For now we only support migrating to un-addressable
>>> + * device memory.
>>> + */
>>> + WARN_ON(1);
>>> + goto abort;
>>
>> Fix it by crashing the kernel with panic_on_warn? :)
>>
>> If this case can actually happen, then no WARN_ON() - rather a
>> pr_warn_once(). If this case cannot happen, why do we even care (it's
>> not a fix then)?
>
> There is also already a check for this case in migrate_vma_pages(). The
> problem is it happens after the call to migrate_vma_insert_page(). I wonder if
> instead it would be better just to move the existing check to before that
> call?
>
Yes, sounds good! Many thanks for your advice! :)
> >
> .
>
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH 3/5] mm/migrate.c: fix potential indeterminate pte entry in migrate_vma_insert_page()
2021-03-23 11:28 ` Miaohe Lin
@ 2021-03-23 12:15 ` Miaohe Lin
0 siblings, 0 replies; 15+ messages in thread
From: Miaohe Lin @ 2021-03-23 12:15 UTC (permalink / raw)
To: Alistair Popple
Cc: David Hildenbrand, akpm, jglisse, shy828301, linux-mm, linux-kernel
On 2021/3/23 19:28, Miaohe Lin wrote:
> On 2021/3/23 19:07, Alistair Popple wrote:
>> On Tuesday, 23 March 2021 9:26:43 PM AEDT David Hildenbrand wrote:
>>> On 20.03.21 10:36, Miaohe Lin wrote:
>>>> If the zone device page does not belong to un-addressable device memory,
>>>> the variable entry will be uninitialized and lead to indeterminate pte
>>>> entry ultimately. Fix this unexpectant case and warn about it.
>>>
>>> s/unexpectant/unexpected/
>>>
>>>>
>>>> Fixes: df6ad69838fc ("mm/device-public-memory: device memory cache coherent with CPU")
>>>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>>>> ---
>>>> mm/migrate.c | 7 +++++++
>>>> 1 file changed, 7 insertions(+)
>>>>
>>>> diff --git a/mm/migrate.c b/mm/migrate.c
>>>> index 20a3bf75270a..271081b014cb 100644
>>>> --- a/mm/migrate.c
>>>> +++ b/mm/migrate.c
>>>> @@ -2972,6 +2972,13 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,
>>>>
>>>> swp_entry = make_device_private_entry(page, vma->vm_flags & VM_WRITE);
>>>> entry = swp_entry_to_pte(swp_entry);
>>>> + } else {
>>>> + /*
>>>> + * For now we only support migrating to un-addressable
>>>> + * device memory.
>>>> + */
>>>> + WARN_ON(1);
>>>> + goto abort;
>>>
>>> Fix it by crashing the kernel with panic_on_warn? :)
>>>
>>> If this case can actually happen, then no WARN_ON() - rather a
>>> pr_warn_once(). If this case cannot happen, why do we even care (it's
>>> not a fix then)?
>>
>> There is also already a check for this case in migrate_vma_pages(). The
>> problem is it happens after the call to migrate_vma_insert_page(). I wonder if
>> instead it would be better just to move the existing check to before that
>> call?
>>
>
While looking at this more closely, moving the existing check to before migrate_vma_insert_page() could
potentially change the current mmu_notifier semantics against the anonymous zero page. :(
So checking for a zone device page inside migrate_vma_insert_page() would be more acceptable.
Many thanks.
> Yes, sounds good! Many thanks for your advice! :)
>
>>>
>> .
>>
>
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH 3/5] mm/migrate.c: fix potential indeterminate pte entry in migrate_vma_insert_page()
2021-03-23 10:26 ` David Hildenbrand
2021-03-23 11:07 ` Alistair Popple
@ 2021-03-23 11:24 ` Miaohe Lin
1 sibling, 0 replies; 15+ messages in thread
From: Miaohe Lin @ 2021-03-23 11:24 UTC (permalink / raw)
To: David Hildenbrand, akpm; +Cc: jglisse, shy828301, linux-kernel, linux-mm
Hi:
On 2021/3/23 18:26, David Hildenbrand wrote:
> On 20.03.21 10:36, Miaohe Lin wrote:
>> If the zone device page does not belong to un-addressable device memory,
>> the variable entry will be uninitialized and lead to indeterminate pte
>> entry ultimately. Fix this unexpectant case and warn about it.
>
> s/unexpectant/unexpected/
>
>>
>> Fixes: df6ad69838fc ("mm/device-public-memory: device memory cache coherent with CPU")
>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>> ---
>> mm/migrate.c | 7 +++++++
>> 1 file changed, 7 insertions(+)
>>
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index 20a3bf75270a..271081b014cb 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -2972,6 +2972,13 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,
>> swp_entry = make_device_private_entry(page, vma->vm_flags & VM_WRITE);
>> entry = swp_entry_to_pte(swp_entry);
>> + } else {
>> + /*
>> + * For now we only support migrating to un-addressable
>> + * device memory.
>> + */
>> + WARN_ON(1);
>> + goto abort;
>
> Fix it by crashing the kernel with panic_on_warn? :)
>
Sorry, my bad. :(
> If this case can actually happen, then no WARN_ON() - rather a pr_warn_once(). If this case cannot happen, why do we even care (it's not a fix then)?
Yep, this case can actually happen. Many thanks for suggesting the pr_warn_once() alternative.
>
>
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH 4/5] mm/migrate.c: use helper migrate_vma_collect_skip() in migrate_vma_collect_hole()
2021-03-20 9:36 [PATCH 0/5] Cleanup and fixup for mm/migrate.c Miaohe Lin
` (2 preceding siblings ...)
2021-03-20 9:36 ` [PATCH 3/5] mm/migrate.c: fix potential indeterminate pte entry in migrate_vma_insert_page() Miaohe Lin
@ 2021-03-20 9:37 ` Miaohe Lin
2021-03-23 10:29 ` David Hildenbrand
2021-03-20 9:37 ` [PATCH 5/5] mm/migrate.c: fix potential deadlock in NUMA balancing shared exec THP case Miaohe Lin
4 siblings, 1 reply; 15+ messages in thread
From: Miaohe Lin @ 2021-03-20 9:37 UTC (permalink / raw)
To: akpm; +Cc: jglisse, shy828301, linux-kernel, linux-mm, linmiaohe
It's preferable to use the helper function migrate_vma_collect_skip() to
skip the unexpected case, and it also helps remove some duplicated code.
Move migrate_vma_collect_skip() above migrate_vma_collect_hole() to avoid
a compiler warning.
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
mm/migrate.c | 28 +++++++++++-----------------
1 file changed, 11 insertions(+), 17 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index 271081b014cb..3e169b72d7b2 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2315,44 +2315,38 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
#endif /* CONFIG_NUMA */
#ifdef CONFIG_DEVICE_PRIVATE
-static int migrate_vma_collect_hole(unsigned long start,
+static int migrate_vma_collect_skip(unsigned long start,
unsigned long end,
- __always_unused int depth,
struct mm_walk *walk)
{
struct migrate_vma *migrate = walk->private;
unsigned long addr;
- /* Only allow populating anonymous memory. */
- if (!vma_is_anonymous(walk->vma)) {
- for (addr = start; addr < end; addr += PAGE_SIZE) {
- migrate->src[migrate->npages] = 0;
- migrate->dst[migrate->npages] = 0;
- migrate->npages++;
- }
- return 0;
- }
-
for (addr = start; addr < end; addr += PAGE_SIZE) {
- migrate->src[migrate->npages] = MIGRATE_PFN_MIGRATE;
migrate->dst[migrate->npages] = 0;
- migrate->npages++;
- migrate->cpages++;
+ migrate->src[migrate->npages++] = 0;
}
return 0;
}
-static int migrate_vma_collect_skip(unsigned long start,
+static int migrate_vma_collect_hole(unsigned long start,
unsigned long end,
+ __always_unused int depth,
struct mm_walk *walk)
{
struct migrate_vma *migrate = walk->private;
unsigned long addr;
+ /* Only allow populating anonymous memory. */
+ if (!vma_is_anonymous(walk->vma))
+ return migrate_vma_collect_skip(start, end, walk);
+
for (addr = start; addr < end; addr += PAGE_SIZE) {
+ migrate->src[migrate->npages] = MIGRATE_PFN_MIGRATE;
migrate->dst[migrate->npages] = 0;
- migrate->src[migrate->npages++] = 0;
+ migrate->npages++;
+ migrate->cpages++;
}
return 0;
--
2.19.1
^ permalink raw reply related [flat|nested] 15+ messages in thread
* Re: [PATCH 4/5] mm/migrate.c: use helper migrate_vma_collect_skip() in migrate_vma_collect_hole()
2021-03-20 9:37 ` [PATCH 4/5] mm/migrate.c: use helper migrate_vma_collect_skip() in migrate_vma_collect_hole() Miaohe Lin
@ 2021-03-23 10:29 ` David Hildenbrand
0 siblings, 0 replies; 15+ messages in thread
From: David Hildenbrand @ 2021-03-23 10:29 UTC (permalink / raw)
To: Miaohe Lin, akpm; +Cc: jglisse, shy828301, linux-kernel, linux-mm
On 20.03.21 10:37, Miaohe Lin wrote:
> It's preferable to use the helper function migrate_vma_collect_skip() to
> skip the unexpected case, and it also helps remove some duplicated code.
> Move migrate_vma_collect_skip() above migrate_vma_collect_hole() to avoid
> a compiler warning.
>
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
> mm/migrate.c | 28 +++++++++++-----------------
> 1 file changed, 11 insertions(+), 17 deletions(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 271081b014cb..3e169b72d7b2 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -2315,44 +2315,38 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
> #endif /* CONFIG_NUMA */
>
> #ifdef CONFIG_DEVICE_PRIVATE
> -static int migrate_vma_collect_hole(unsigned long start,
> +static int migrate_vma_collect_skip(unsigned long start,
> unsigned long end,
> - __always_unused int depth,
> struct mm_walk *walk)
> {
> struct migrate_vma *migrate = walk->private;
> unsigned long addr;
>
> - /* Only allow populating anonymous memory. */
> - if (!vma_is_anonymous(walk->vma)) {
> - for (addr = start; addr < end; addr += PAGE_SIZE) {
> - migrate->src[migrate->npages] = 0;
> - migrate->dst[migrate->npages] = 0;
> - migrate->npages++;
> - }
> - return 0;
> - }
> -
> for (addr = start; addr < end; addr += PAGE_SIZE) {
> - migrate->src[migrate->npages] = MIGRATE_PFN_MIGRATE;
> migrate->dst[migrate->npages] = 0;
> - migrate->npages++;
> - migrate->cpages++;
> + migrate->src[migrate->npages++] = 0;
> }
>
> return 0;
> }
>
> -static int migrate_vma_collect_skip(unsigned long start,
> +static int migrate_vma_collect_hole(unsigned long start,
> unsigned long end,
> + __always_unused int depth,
> struct mm_walk *walk)
> {
> struct migrate_vma *migrate = walk->private;
> unsigned long addr;
>
> + /* Only allow populating anonymous memory. */
> + if (!vma_is_anonymous(walk->vma))
> + return migrate_vma_collect_skip(start, end, walk);
> +
> for (addr = start; addr < end; addr += PAGE_SIZE) {
> + migrate->src[migrate->npages] = MIGRATE_PFN_MIGRATE;
> migrate->dst[migrate->npages] = 0;
> - migrate->src[migrate->npages++] = 0;
> + migrate->npages++;
> + migrate->cpages++;
> }
>
> return 0;
>
Reviewed-by: David Hildenbrand <david@redhat.com>
--
Thanks,
David / dhildenb
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH 5/5] mm/migrate.c: fix potential deadlock in NUMA balancing shared exec THP case
2021-03-20 9:36 [PATCH 0/5] Cleanup and fixup for mm/migrate.c Miaohe Lin
` (3 preceding siblings ...)
2021-03-20 9:37 ` [PATCH 4/5] mm/migrate.c: use helper migrate_vma_collect_skip() in migrate_vma_collect_hole() Miaohe Lin
@ 2021-03-20 9:37 ` Miaohe Lin
4 siblings, 0 replies; 15+ messages in thread
From: Miaohe Lin @ 2021-03-20 9:37 UTC (permalink / raw)
To: akpm; +Cc: jglisse, shy828301, linux-kernel, linux-mm, linmiaohe
Since commit c77c5cbafe54 ("mm: migrate: skip shared exec THP for NUMA
balancing"), NUMA balancing skips shared exec transhuge pages. But this
enhancement is not suitable for the transhuge page case, because it's
required that page_mapcount() be 1 here since no migration pte dance is
done. On the other hand, a shared exec transhuge page leaves
migrate_misplaced_page() with the pte entry untouched and the page
locked. Thus the NUMA pagefault is triggered again and a deadlock
occurs when we start waiting for the page lock held by ourselves.
Fixes: c77c5cbafe54 ("mm: migrate: skip shared exec THP for NUMA balancing")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
mm/migrate.c | 4 ----
1 file changed, 4 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index 3e169b72d7b2..67a941c52b6d 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2192,9 +2192,6 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
int page_lru = page_is_file_lru(page);
unsigned long start = address & HPAGE_PMD_MASK;
- if (is_shared_exec_page(vma, page))
- goto out;
-
new_page = alloc_pages_node(node,
(GFP_TRANSHUGE_LIGHT | __GFP_THISNODE),
HPAGE_PMD_ORDER);
@@ -2306,7 +2303,6 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
out_unlock:
unlock_page(page);
-out:
put_page(page);
return 0;
}
--
2.19.1
^ permalink raw reply related [flat|nested] 15+ messages in thread