* [PATCH 1/2] mm/mlock: not handle NULL vma specially
From: Wei Yang @ 2022-05-04 0:39 UTC (permalink / raw)
To: akpm; +Cc: linux-mm, Wei Yang
If we can't find a proper vma, find_vma() returns NULL and the loop
terminates as expected. It's not necessary to handle the NULL vma specially.
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
---
mm/mlock.c | 6 +-----
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/mm/mlock.c b/mm/mlock.c
index efd2dd2943de..0b7cf7d60922 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -504,11 +504,7 @@ static unsigned long count_mm_mlocked_page_nr(struct mm_struct *mm,
 	if (mm == NULL)
 		mm = current->mm;
 
-	vma = find_vma(mm, start);
-	if (vma == NULL)
-		return 0;
-
-	for (; vma ; vma = vma->vm_next) {
+	for (vma = find_vma(mm, start); vma ; vma = vma->vm_next) {
 		if (start >= vma->vm_end)
 			continue;
 		if (start + len <= vma->vm_start)
--
2.33.1
* [PATCH 2/2] mm/mlock: start is always smaller than vm_end
From: Wei Yang @ 2022-05-04 0:39 UTC (permalink / raw)
To: akpm; +Cc: linux-mm, Wei Yang
We never meet this situation in the loop, since:

* find_vma() returns a vma with vm_end > start.
* vm_end increases monotonically along the vma list.

So start >= vma->vm_end never holds, and the check can be removed.
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
---
mm/mlock.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/mm/mlock.c b/mm/mlock.c
index 0b7cf7d60922..eab3b0a5b569 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -505,8 +505,6 @@ static unsigned long count_mm_mlocked_page_nr(struct mm_struct *mm,
 		mm = current->mm;
 
 	for (vma = find_vma(mm, start); vma ; vma = vma->vm_next) {
-		if (start >= vma->vm_end)
-			continue;
 		if (start + len <= vma->vm_start)
 			break;
 		if (vma->vm_flags & VM_LOCKED) {
--
2.33.1
* Re: [PATCH 1/2] mm/mlock: not handle NULL vma specially
From: Andrew Morton @ 2022-05-07 21:03 UTC (permalink / raw)
To: Wei Yang; +Cc: linux-mm
On Wed, 4 May 2022 00:39:57 +0000 Wei Yang <richard.weiyang@gmail.com> wrote:
> If we can't find a proper vma, the loop would terminate as expected.
>
> It's not necessary to handle it specially.
>
> ...
>
> --- a/mm/mlock.c
> +++ b/mm/mlock.c
> @@ -504,11 +504,7 @@ static unsigned long count_mm_mlocked_page_nr(struct mm_struct *mm,
>  	if (mm == NULL)
>  		mm = current->mm;
>  
> -	vma = find_vma(mm, start);
> -	if (vma == NULL)
> -		return 0;
> -
> -	for (; vma ; vma = vma->vm_next) {
> +	for (vma = find_vma(mm, start); vma ; vma = vma->vm_next) {
>  		if (start >= vma->vm_end)
>  			continue;
>  		if (start + len <= vma->vm_start)
The mapletree patches mangle this code a lot.
Please take a look at linux-next or the mm-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm early to mid next
week, see if you see anything which should be addressed.
* Re: [PATCH 1/2] mm/mlock: not handle NULL vma specially
From: Wei Yang @ 2022-05-07 21:56 UTC (permalink / raw)
To: Andrew Morton; +Cc: Linux-MM
On Sun, May 8, 2022 at 5:03 AM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Wed, 4 May 2022 00:39:57 +0000 Wei Yang <richard.weiyang@gmail.com> wrote:
>
> > If we can't find a proper vma, the loop would terminate as expected.
> >
> > It's not necessary to handle it specially.
> >
> > ...
> >
> > --- a/mm/mlock.c
> > +++ b/mm/mlock.c
> > @@ -504,11 +504,7 @@ static unsigned long count_mm_mlocked_page_nr(struct mm_struct *mm,
> >  	if (mm == NULL)
> >  		mm = current->mm;
> >  
> > -	vma = find_vma(mm, start);
> > -	if (vma == NULL)
> > -		return 0;
> > -
> > -	for (; vma ; vma = vma->vm_next) {
> > +	for (vma = find_vma(mm, start); vma ; vma = vma->vm_next) {
> >  		if (start >= vma->vm_end)
> >  			continue;
> >  		if (start + len <= vma->vm_start)
>
> The mapletree patches mangle this code a lot.
>
> Please take a look at linux-next or the mm-unstable branch at
> git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm early to mid next
> week, see if you see anything which should be addressed.
>
I took a look at the mm-unstable branch with last commit
2b58b3f33ba2 mm/shmem: convert shmem_swapin_page() to shmem_swapin_folio()
Function count_mm_mlocked_page_nr() does not appear to have changed.
Do I need to rebase on top of it?
* Re: [PATCH 1/2] mm/mlock: not handle NULL vma specially
From: Andrew Morton @ 2022-05-07 22:44 UTC (permalink / raw)
To: Wei Yang; +Cc: Linux-MM
On Sun, 8 May 2022 05:56:15 +0800 Wei Yang <richard.weiyang@gmail.com> wrote:
> > > -	vma = find_vma(mm, start);
> > > -	if (vma == NULL)
> > > -		return 0;
> > > -
> > > -	for (; vma ; vma = vma->vm_next) {
> > > +	for (vma = find_vma(mm, start); vma ; vma = vma->vm_next) {
> > >  		if (start >= vma->vm_end)
> > >  			continue;
> > >  		if (start + len <= vma->vm_start)
> >
> > The mapletree patches mangle this code a lot.
> >
> > Please take a look at linux-next or the mm-unstable branch at
> > git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm early to mid next
> > week, see if you see anything which should be addressed.
> >
>
> I took a look at the mm-unstable branch with last commit
>
> 2b58b3f33ba2 mm/shmem: convert shmem_swapin_page() to shmem_swapin_folio()
>
It isn't early to mid next week yet ;)
> Function count_mm_mlocked_page_nr() looks not changed.
>
> Do I need to rebase on top of it?
static unsigned long count_mm_mlocked_page_nr(struct mm_struct *mm,
		unsigned long start, size_t len)
{
	struct vm_area_struct *vma;
	unsigned long count = 0;
	unsigned long end;
	VMA_ITERATOR(vmi, mm, start);

	if (mm == NULL)
		mm = current->mm;

	/* Don't overflow past ULONG_MAX */
	if (unlikely(ULONG_MAX - len < start))
		end = ULONG_MAX;
	else
		end = start + len;

	for_each_vma_range(vmi, vma, end) {
		if (vma->vm_flags & VM_LOCKED) {
			if (start > vma->vm_start)
				count -= (start - vma->vm_start);
			if (end < vma->vm_end) {
				count += end - vma->vm_start;
				break;
			}
			count += vma->vm_end - vma->vm_start;
		}
	}

	return count >> PAGE_SHIFT;
}