* [PATCH] mm: replace is_zero_pfn with is_huge_zero_pmd for thp
From: Yu Zhao @ 2019-08-25 20:06 UTC
To: Andrew Morton
Cc: Matthew Wilcox, Ralph Campbell, Jérôme Glisse, Will Deacon,
    Peter Zijlstra, Aneesh Kumar K . V, Dave Airlie, Thomas Hellstrom,
    Souptick Joarder, linux-mm, linux-kernel, Yu Zhao

For hugely mapped thp, we use is_huge_zero_pmd() to check if it's
zero page or not.

We do fill ptes with my_zero_pfn() when we split zero thp pmd, but
this is not what we have in vm_normal_page_pmd().
pmd_trans_huge_lock() makes sure of it.

This is a trivial fix for /proc/pid/numa_maps, and AFAIK nobody
complains about it.

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 mm/memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memory.c b/mm/memory.c
index e2bb51b6242e..ea3c74855b23 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -654,7 +654,7 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
 
 	if (pmd_devmap(pmd))
 		return NULL;
-	if (is_zero_pfn(pfn))
+	if (is_huge_zero_pmd(pmd))
 		return NULL;
 	if (unlikely(pfn > highest_memmap_pfn))
 		return NULL;
-- 
2.23.0.187.g17f5b7556c-goog
* Re: [PATCH] mm: replace is_zero_pfn with is_huge_zero_pmd for thp
From: Matthew Wilcox @ 2019-08-26 13:18 UTC
To: Yu Zhao
Cc: Andrew Morton, Ralph Campbell, Jérôme Glisse, Will Deacon,
    Peter Zijlstra, Aneesh Kumar K . V, Dave Airlie, Thomas Hellstrom,
    Souptick Joarder, linux-mm, linux-kernel, Gerald Schaefer

Why did you not cc Gerald who wrote the patch?  You can't just
run get_maintainers.pl and call it good.

On Sun, Aug 25, 2019 at 02:06:21PM -0600, Yu Zhao wrote:
> For hugely mapped thp, we use is_huge_zero_pmd() to check if it's
> zero page or not.
>
> We do fill ptes with my_zero_pfn() when we split zero thp pmd, but
> this is not what we have in vm_normal_page_pmd().
> pmd_trans_huge_lock() makes sure of it.
>
> This is a trivial fix for /proc/pid/numa_maps, and AFAIK nobody
> complains about it.
>
> Signed-off-by: Yu Zhao <yuzhao@google.com>
> ---
>  mm/memory.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index e2bb51b6242e..ea3c74855b23 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -654,7 +654,7 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
>
>  	if (pmd_devmap(pmd))
>  		return NULL;
> -	if (is_zero_pfn(pfn))
> +	if (is_huge_zero_pmd(pmd))
>  		return NULL;
>  	if (unlikely(pfn > highest_memmap_pfn))
>  		return NULL;
> --
> 2.23.0.187.g17f5b7556c-goog
* Re: [PATCH] mm: replace is_zero_pfn with is_huge_zero_pmd for thp
From: Gerald Schaefer @ 2019-08-26 15:09 UTC
To: Matthew Wilcox
Cc: Yu Zhao, Andrew Morton, Ralph Campbell, Jérôme Glisse, Will Deacon,
    Peter Zijlstra, Aneesh Kumar K . V, Dave Airlie, Thomas Hellstrom,
    Souptick Joarder, linux-mm, linux-kernel

On Mon, 26 Aug 2019 06:18:58 -0700
Matthew Wilcox <willy@infradead.org> wrote:

> Why did you not cc Gerald who wrote the patch?  You can't just
> run get_maintainers.pl and call it good.
>
> On Sun, Aug 25, 2019 at 02:06:21PM -0600, Yu Zhao wrote:
> > For hugely mapped thp, we use is_huge_zero_pmd() to check if it's
> > zero page or not.
> >
> > We do fill ptes with my_zero_pfn() when we split zero thp pmd, but
> > this is not what we have in vm_normal_page_pmd().
> > pmd_trans_huge_lock() makes sure of it.
> >
> > This is a trivial fix for /proc/pid/numa_maps, and AFAIK nobody
> > complains about it.
> >
> > Signed-off-by: Yu Zhao <yuzhao@google.com>
> > ---
> >  mm/memory.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/mm/memory.c b/mm/memory.c
> > index e2bb51b6242e..ea3c74855b23 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -654,7 +654,7 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
> >
> >  	if (pmd_devmap(pmd))
> >  		return NULL;
> > -	if (is_zero_pfn(pfn))
> > +	if (is_huge_zero_pmd(pmd))
> >  		return NULL;
> >  	if (unlikely(pfn > highest_memmap_pfn))
> >  		return NULL;
> > --
> > 2.23.0.187.g17f5b7556c-goog

Looks good to me. The "_pmd" versions of can_gather_numa_stats() and
vm_normal_page() were introduced to avoid using pte_present/dirty() on
pmds, which is not affected by this patch.

In fact, for vm_normal_page_pmd() I basically copied most of the code
from vm_normal_page(), including the is_zero_pfn(pfn) check, which does
look wrong to me now. Using is_huge_zero_pmd() should be correct.

Maybe the description could also mention the symptom of this bug?
I would assume that it affects anon/dirty accounting in gather_pte_stats(),
for huge mappings, if zero page mappings are not correctly recognized.

Regards,
Gerald
* Re: [PATCH] mm: replace is_zero_pfn with is_huge_zero_pmd for thp
From: Yu Zhao @ 2019-09-04 20:54 UTC
To: Gerald Schaefer
Cc: Matthew Wilcox, Andrew Morton, Ralph Campbell, Jérôme Glisse,
    Will Deacon, Peter Zijlstra, Aneesh Kumar K . V, Dave Airlie,
    Thomas Hellstrom, Souptick Joarder, linux-mm, linux-kernel

On Mon, Aug 26, 2019 at 05:09:34PM +0200, Gerald Schaefer wrote:
> On Mon, 26 Aug 2019 06:18:58 -0700
> Matthew Wilcox <willy@infradead.org> wrote:
>
> > Why did you not cc Gerald who wrote the patch?  You can't just
> > run get_maintainers.pl and call it good.
> >
> > On Sun, Aug 25, 2019 at 02:06:21PM -0600, Yu Zhao wrote:
> > > For hugely mapped thp, we use is_huge_zero_pmd() to check if it's
> > > zero page or not.
> > >
> > > We do fill ptes with my_zero_pfn() when we split zero thp pmd, but
> > > this is not what we have in vm_normal_page_pmd().
> > > pmd_trans_huge_lock() makes sure of it.
> > >
> > > This is a trivial fix for /proc/pid/numa_maps, and AFAIK nobody
> > > complains about it.
> > >
> > > Signed-off-by: Yu Zhao <yuzhao@google.com>
> > > ---
> > >  mm/memory.c | 2 +-
> > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > >
> > > diff --git a/mm/memory.c b/mm/memory.c
> > > index e2bb51b6242e..ea3c74855b23 100644
> > > --- a/mm/memory.c
> > > +++ b/mm/memory.c
> > > @@ -654,7 +654,7 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
> > >
> > >  	if (pmd_devmap(pmd))
> > >  		return NULL;
> > > -	if (is_zero_pfn(pfn))
> > > +	if (is_huge_zero_pmd(pmd))
> > >  		return NULL;
> > >  	if (unlikely(pfn > highest_memmap_pfn))
> > >  		return NULL;
> > > --
> > > 2.23.0.187.g17f5b7556c-goog
>
> Looks good to me. The "_pmd" versions of can_gather_numa_stats() and
> vm_normal_page() were introduced to avoid using pte_present/dirty() on
> pmds, which is not affected by this patch.
>
> In fact, for vm_normal_page_pmd() I basically copied most of the code
> from vm_normal_page(), including the is_zero_pfn(pfn) check, which does
> look wrong to me now. Using is_huge_zero_pmd() should be correct.
>
> Maybe the description could also mention the symptom of this bug?
> I would assume that it affects anon/dirty accounting in gather_pte_stats(),
> for huge mappings, if zero page mappings are not correctly recognized.

Hi, sorry for not copying you on the original email. I came across
this while I was looking at the code. I'm not aware of any symptom.

Thank you.
* [PATCH v2] mm: replace is_zero_pfn with is_huge_zero_pmd for thp
From: Yu Zhao @ 2019-11-08 19:26 UTC
To: Andrew Morton
Cc: Matthew Wilcox, Ralph Campbell, Jérôme Glisse, Will Deacon,
    Peter Zijlstra, Aneesh Kumar K . V, Dave Airlie, Thomas Hellstrom,
    Souptick Joarder, Gerald Schaefer, linux-mm, linux-kernel, Yu Zhao

For hugely mapped thp, we use is_huge_zero_pmd() to check if it's
zero page or not.

We do fill ptes with my_zero_pfn() when we split zero thp pmd, but
this is not what we have in vm_normal_page_pmd() --
pmd_trans_huge_lock() makes sure of it.

This is a trivial fix for /proc/pid/numa_maps, and AFAIK nobody
complains about it.

Gerald Schaefer asked:
> Maybe the description could also mention the symptom of this bug?
> I would assume that it affects anon/dirty accounting in gather_pte_stats(),
> for huge mappings, if zero page mappings are not correctly recognized.

I came across this while I was looking at the code, so I'm not aware
of any symptom.

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 mm/memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memory.c b/mm/memory.c
index b1ca51a079f2..cf209f84ce4a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -654,7 +654,7 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
 
 	if (pmd_devmap(pmd))
 		return NULL;
-	if (is_zero_pfn(pfn))
+	if (is_huge_zero_pmd(pmd))
 		return NULL;
 	if (unlikely(pfn > highest_memmap_pfn))
 		return NULL;
-- 
2.24.0.rc1.363.gb1bccd3e3d-goog