* [PATCH] mm/mempolicy: clean up the code logic in queue_pages_pte_range
@ 2022-04-19 12:22 Miaohe Lin
2022-04-21 0:44 ` Yang Shi
` (2 more replies)
0 siblings, 3 replies; 4+ messages in thread
From: Miaohe Lin @ 2022-04-19 12:22 UTC (permalink / raw)
To: akpm; +Cc: linux-mm, linux-kernel, linmiaohe
Since commit e5947d23edd8 ("mm: mempolicy: don't have to split pmd for
huge zero page"), THP is never split in queue_pages_pmd(), so 2 is
never returned. Remove the now-unnecessary ret != 2 check and clean up
the related comment. This also slightly improves readability.
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
mm/mempolicy.c | 12 +++---------
1 file changed, 3 insertions(+), 9 deletions(-)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 75a8b247f631..3934476fb708 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -441,12 +441,11 @@ static inline bool queue_pages_required(struct page *page,
}
/*
- * queue_pages_pmd() has four possible return values:
+ * queue_pages_pmd() has three possible return values:
* 0 - pages are placed on the right node or queued successfully, or
* special page is met, i.e. huge zero page.
* 1 - there is unmovable page, and MPOL_MF_MOVE* & MPOL_MF_STRICT were
* specified.
- * 2 - THP was split.
* -EIO - is migration entry or only MPOL_MF_STRICT was specified and an
* existing page was already on a node that does not follow the
* policy.
@@ -508,18 +507,13 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
struct page *page;
struct queue_pages *qp = walk->private;
unsigned long flags = qp->flags;
- int ret;
bool has_unmovable = false;
pte_t *pte, *mapped_pte;
spinlock_t *ptl;
ptl = pmd_trans_huge_lock(pmd, vma);
- if (ptl) {
- ret = queue_pages_pmd(pmd, ptl, addr, end, walk);
- if (ret != 2)
- return ret;
- }
- /* THP was split, fall through to pte walk */
+ if (ptl)
+ return queue_pages_pmd(pmd, ptl, addr, end, walk);
if (pmd_trans_unstable(pmd))
return 0;
--
2.23.0
* Re: [PATCH] mm/mempolicy: clean up the code logic in queue_pages_pte_range
From: Yang Shi @ 2022-04-21 0:44 UTC (permalink / raw)
To: Miaohe Lin; +Cc: Andrew Morton, Linux MM, Linux Kernel Mailing List
On Tue, Apr 19, 2022 at 5:22 AM Miaohe Lin <linmiaohe@huawei.com> wrote:
>
> Since commit e5947d23edd8 ("mm: mempolicy: don't have to split pmd for
> huge zero page"), THP is never split in queue_pages_pmd(), so 2 is
> never returned. Remove the now-unnecessary ret != 2 check and clean up
> the related comment. This also slightly improves readability.
Nice catch. Yeah, it was missed when I worked on that commit.
Reviewed-by: Yang Shi <shy828301@gmail.com>
>
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
> mm/mempolicy.c | 12 +++---------
> 1 file changed, 3 insertions(+), 9 deletions(-)
>
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 75a8b247f631..3934476fb708 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -441,12 +441,11 @@ static inline bool queue_pages_required(struct page *page,
> }
>
> /*
> - * queue_pages_pmd() has four possible return values:
> + * queue_pages_pmd() has three possible return values:
> * 0 - pages are placed on the right node or queued successfully, or
> * special page is met, i.e. huge zero page.
> * 1 - there is unmovable page, and MPOL_MF_MOVE* & MPOL_MF_STRICT were
> * specified.
> - * 2 - THP was split.
> * -EIO - is migration entry or only MPOL_MF_STRICT was specified and an
> * existing page was already on a node that does not follow the
> * policy.
> @@ -508,18 +507,13 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
> struct page *page;
> struct queue_pages *qp = walk->private;
> unsigned long flags = qp->flags;
> - int ret;
> bool has_unmovable = false;
> pte_t *pte, *mapped_pte;
> spinlock_t *ptl;
>
> ptl = pmd_trans_huge_lock(pmd, vma);
> - if (ptl) {
> - ret = queue_pages_pmd(pmd, ptl, addr, end, walk);
> - if (ret != 2)
> - return ret;
> - }
> - /* THP was split, fall through to pte walk */
> + if (ptl)
> + return queue_pages_pmd(pmd, ptl, addr, end, walk);
>
> if (pmd_trans_unstable(pmd))
> return 0;
> --
> 2.23.0
>
>
* Re: [PATCH] mm/mempolicy: clean up the code logic in queue_pages_pte_range
From: Michal Hocko @ 2022-04-22 11:48 UTC (permalink / raw)
To: Miaohe Lin; +Cc: akpm, linux-mm, linux-kernel
On Tue 19-04-22 20:22:34, Miaohe Lin wrote:
> Since commit e5947d23edd8 ("mm: mempolicy: don't have to split pmd for
> huge zero page"), THP is never split in queue_pages_pmd(), so 2 is
> never returned. Remove the now-unnecessary ret != 2 check and clean up
> the related comment. This also slightly improves readability.
>
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Thanks!
> ---
> mm/mempolicy.c | 12 +++---------
> 1 file changed, 3 insertions(+), 9 deletions(-)
>
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 75a8b247f631..3934476fb708 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -441,12 +441,11 @@ static inline bool queue_pages_required(struct page *page,
> }
>
> /*
> - * queue_pages_pmd() has four possible return values:
> + * queue_pages_pmd() has three possible return values:
> * 0 - pages are placed on the right node or queued successfully, or
> * special page is met, i.e. huge zero page.
> * 1 - there is unmovable page, and MPOL_MF_MOVE* & MPOL_MF_STRICT were
> * specified.
> - * 2 - THP was split.
> * -EIO - is migration entry or only MPOL_MF_STRICT was specified and an
> * existing page was already on a node that does not follow the
> * policy.
> @@ -508,18 +507,13 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
> struct page *page;
> struct queue_pages *qp = walk->private;
> unsigned long flags = qp->flags;
> - int ret;
> bool has_unmovable = false;
> pte_t *pte, *mapped_pte;
> spinlock_t *ptl;
>
> ptl = pmd_trans_huge_lock(pmd, vma);
> - if (ptl) {
> - ret = queue_pages_pmd(pmd, ptl, addr, end, walk);
> - if (ret != 2)
> - return ret;
> - }
> - /* THP was split, fall through to pte walk */
> + if (ptl)
> + return queue_pages_pmd(pmd, ptl, addr, end, walk);
>
> if (pmd_trans_unstable(pmd))
> return 0;
> --
> 2.23.0
--
Michal Hocko
SUSE Labs
* Re: [PATCH] mm/mempolicy: clean up the code logic in queue_pages_pte_range
From: David Rientjes @ 2022-04-24 19:23 UTC (permalink / raw)
To: Miaohe Lin; +Cc: akpm, linux-mm, linux-kernel
On Tue, 19 Apr 2022, Miaohe Lin wrote:
> Since commit e5947d23edd8 ("mm: mempolicy: don't have to split pmd for
> huge zero page"), THP is never split in queue_pages_pmd(), so 2 is
> never returned. Remove the now-unnecessary ret != 2 check and clean up
> the related comment. This also slightly improves readability.
>
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Acked-by: David Rientjes <rientjes@google.com>