* Re: [RFC V3] mm: change mm_advise_free to clear page dirty
@ 2015-03-03 3:25 Minchan Kim
2015-03-03 3:59 ` Wang, Yalin
2015-03-05 15:35 ` Michal Hocko
0 siblings, 2 replies; 8+ messages in thread
From: Minchan Kim @ 2015-03-03 3:25 UTC (permalink / raw)
To: Wang, Yalin
Cc: 'Michal Hocko', 'Andrew Morton',
'linux-kernel@vger.kernel.org',
'linux-mm@kvack.org', 'Rik van Riel',
'Johannes Weiner', 'Mel Gorman',
'Shaohua Li',
Hugh Dickins, Cyrill Gorcunov
Could you separate this patch from this patchset thread?
It's tackling a different problem.
As well, I asked a question in the previous thread about why a shared
page is a problem now, but you sent a new patchset without answering.
That wastes reviewers' and maintainers' time and confuses them. Please
don't hurry to send code. Before that, resolve the reviewers' comments.
On Tue, Mar 03, 2015 at 10:06:40AM +0800, Wang, Yalin wrote:
> This patch add ClearPageDirty() to clear AnonPage dirty flag,
> if not clear page dirty for this anon page, the page will never be
> treated as freeable. We also make sure the shared AnonPage is not
> freeable, we implement it by dirty all copyed AnonPage pte,
> so that make sure the Anonpage will not become freeable, unless
> all process which shared this page call madvise_free syscall.
Please spend more time making the description clear. I really doubt
anyone could understand this description without inspecting the code. :(
Of course, I can't write descriptions like a native speaker either,
but I'm sure that I, at least, spend more time on the description
than on the code. :)
>
> Signed-off-by: Yalin Wang <yalin.wang@sonymobile.com>
> ---
> mm/madvise.c | 16 +++++++++-------
> mm/memory.c | 12 ++++++++++--
> 2 files changed, 19 insertions(+), 9 deletions(-)
>
> diff --git a/mm/madvise.c b/mm/madvise.c
> index 6d0fcb8..b61070d 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -297,23 +297,25 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
> continue;
>
> page = vm_normal_page(vma, addr, ptent);
> - if (!page)
> + if (!page || !trylock_page(page))
> continue;
>
> if (PageSwapCache(page)) {
> - if (!trylock_page(page))
> - continue;
> -
> if (!try_to_free_swap(page)) {
> unlock_page(page);
> continue;
> }
> -
> - ClearPageDirty(page);
> - unlock_page(page);
> }
>
> /*
> + * we clear page dirty flag for AnonPage, no matter if this
> + * page is in swapcahce or not, AnonPage not in swapcache also set
> + * dirty flag sometimes, this happened when a AnonPage is removed
> + * from swapcahce by try_to_free_swap()
> + */
> + ClearPageDirty(page);
> + unlock_page(page);
> + /*
Parent:
ptrP = malloc();
*ptrP = 'a';
fork(); -> the child's pte is dirty due to your patch
..
memory pressure -> so the page is swapped out
..
..
Child: var = *ptrP; assert(var == 'a') -> so swap-in happens and the child has pte_clean
Parent: var = *ptrP; assert(var == 'a') -> so swap-in happens and the parent has pte_clean
..
..
Parent:
madvise_free -> removes PageDirty
So, both parent and child have pte_clean and !PageDirty, which
makes the page a target for the VM to discard.
..
The VM discards the page under memory pressure.
..
Child: var = *ptrP; assert(var == 'a'); <---- oops.
And blindly clearing PageDirty causes duplicate swap-outs.
> * Some of architecture(ex, PPC) don't update TLB
> * with set_pte_at and tlb_remove_tlb_entry so for
> * the portability, remap the pte with old|clean
> diff --git a/mm/memory.c b/mm/memory.c
> index 8068893..3d949b3 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -874,10 +874,18 @@ copy_one_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> if (page) {
> get_page(page);
> page_dup_rmap(page);
> - if (PageAnon(page))
> + if (PageAnon(page)) {
> + /*
> + * we dirty the copyed pte for anon page,
> + * this is useful for madvise_free_pte_range(),
> + * this can prevent shared anon page freed by madvise_free
> + * syscall
> + */
> + pte = pte_mkdirty(pte);
This makes every MADV_FREE-hinted page void. IOW, if a process that
called MADV_FREE then calls fork, the VM cannot discard the pages unless
the child also frees them or calls madvise_free. So if the parent calls
madvise_free before fork, we can't free those pages.
IOW, you are ignoring the example below.
parent:
ptr1 = malloc(len);
-> allocator calls mmap(len);
memset(ptr1, 'a', len);
free(ptr1);
-> allocator calls madvise_free(ptr1, len);
fork();
..
..
-> VM discard hinted pages
child:
ptr2 = malloc(len)
-> allocator reuses the chunk allocated from parent.
so, child will see zero pages from ptr2 but he doesn't write
anything so garbage|zero page anything is okay to him.
As well, you are adding new instructions in fork which is very frequent syscall
so I'd like to find another way to avoid adding instructions in such hot path.
I will send different patch. Please review it.
So, my suggestion is below. It always makes the pte dirty, so let's Cc
Cyrill to take care of soft-dirty and Hugh, who is Mr. Swap.
From 30c6d5b35a3dc7e451041183ce5efd6a6c42bf88 Mon Sep 17 00:00:00 2001
From: Minchan Kim <minchan@kernel.org>
Date: Tue, 3 Mar 2015 10:06:59 +0900
Subject: [RFC] mm: make every pte dirty on do_swap_page
Basically, MADV_FREE relies on pte dirtiness to decide whether the
VM should discard a page or not. However, a swapped-in page doesn't
have pte_dirty. Instead, the VM checks PageDirty and PageSwapCache
for such a page, because a swapped-in page may live in the swap cache
or may have PageDirty set when it is removed from the swap cache, so
MADV_FREE sees that and doesn't discard the page.
The problem here is that any anonymous page gets PageDirty when it is
removed from the swap cache, so the VM cannot treat such pages as
freeable even after madvise_free. Look at the example below.
ptr = malloc();
memset(ptr);
..
heavy memory pressure -> all of the pages are swapped out
..
the memory pressure goes away
..
var = *ptr; -> the page is swapped in and removed from the swap cache,
so the pte is clean but SetPageDirty is called
madvise_free(ptr);
..
..
heavy memory pressure -> the VM cannot discard the page because of PageDirty.
PageDirty for an anonymous page aims to avoid duplicate swap-outs.
In other words, if a page has been swapped in but still lives in the
swap cache (i.e., !PageDirty), we can skip the swap-out when the page
is later selected as a victim by the VM, because the swap device still
holds the previously swapped-out contents of the page.
So, rather than relying on PG_dirty to make madvise_free work,
pte_dirty is more straightforward.
Inherently, a swapped-out page was pte_dirty, so this patch restores
the dirtiness when a swap-in fault happens, and madvise_free no longer
relies on PageDirty.
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
mm/madvise.c | 1 -
mm/memory.c | 9 +++++++--
mm/rmap.c | 2 +-
mm/vmscan.c | 3 +--
4 files changed, 9 insertions(+), 6 deletions(-)
diff --git a/mm/madvise.c b/mm/madvise.c
index 6d0fcb8..d64200e 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -309,7 +309,6 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
continue;
}
- ClearPageDirty(page);
unlock_page(page);
}
diff --git a/mm/memory.c b/mm/memory.c
index 8ae52c9..2f45e77 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2460,9 +2460,14 @@ static int do_swap_page(struct mm_struct *mm, struct vm_area_struct *vma,
inc_mm_counter_fast(mm, MM_ANONPAGES);
dec_mm_counter_fast(mm, MM_SWAPENTS);
- pte = mk_pte(page, vma->vm_page_prot);
+
+ /*
+ * Every page swapped-out was pte_dirty so we makes pte dirty again.
+ * MADV_FREE relys on it.
+ */
+ pte = mk_pte(pte_mkdirty(page), vma->vm_page_prot);
if ((flags & FAULT_FLAG_WRITE) && reuse_swap_page(page)) {
- pte = maybe_mkwrite(pte_mkdirty(pte), vma);
+ pte = maybe_mkwrite(pte, vma);
flags &= ~FAULT_FLAG_WRITE;
ret |= VM_FAULT_WRITE;
exclusive = 1;
diff --git a/mm/rmap.c b/mm/rmap.c
index 47b3ba8..34c1d66 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1268,7 +1268,7 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
if (flags & TTU_FREE) {
VM_BUG_ON_PAGE(PageSwapCache(page), page);
- if (!dirty && !PageDirty(page)) {
+ if (!dirty) {
/* It's a freeable page by MADV_FREE */
dec_mm_counter(mm, MM_ANONPAGES);
goto discard;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 671e47e..7f520c9 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -805,8 +805,7 @@ static enum page_references page_check_references(struct page *page,
return PAGEREF_KEEP;
}
- if (PageAnon(page) && !pte_dirty && !PageSwapCache(page) &&
- !PageDirty(page))
+ if (PageAnon(page) && !pte_dirty && !PageSwapCache(page))
*freeable = true;
/* Reclaim if clean, defer dirty pages to writeback */
--
1.9.3
--
Kind regards,
Minchan Kim
^ permalink raw reply related [flat|nested] 8+ messages in thread
* RE: [RFC V3] mm: change mm_advise_free to clear page dirty
2015-03-03 3:25 [RFC V3] mm: change mm_advise_free to clear page dirty Minchan Kim
@ 2015-03-03 3:59 ` Wang, Yalin
2015-03-03 4:14 ` Minchan Kim
2015-03-05 15:35 ` Michal Hocko
1 sibling, 1 reply; 8+ messages in thread
From: Wang, Yalin @ 2015-03-03 3:59 UTC (permalink / raw)
To: 'Minchan Kim'
Cc: 'Michal Hocko', 'Andrew Morton',
'linux-kernel@vger.kernel.org',
'linux-mm@kvack.org', 'Rik van Riel',
'Johannes Weiner', 'Mel Gorman',
'Shaohua Li',
Hugh Dickins, Cyrill Gorcunov
> -----Original Message-----
> From: Minchan Kim [mailto:minchan.kim@gmail.com] On Behalf Of Minchan Kim
> Sent: Tuesday, March 03, 2015 11:26 AM
> To: Wang, Yalin
> Cc: 'Michal Hocko'; 'Andrew Morton'; 'linux-kernel@vger.kernel.org';
> 'linux-mm@kvack.org'; 'Rik van Riel'; 'Johannes Weiner'; 'Mel Gorman';
> 'Shaohua Li'; Hugh Dickins; Cyrill Gorcunov
> Subject: Re: [RFC V3] mm: change mm_advise_free to clear page dirty
>
> Could you separte this patch in this patchset thread?
> It's tackling differnt problem.
>
> As well, I had a question to previous thread about why shared page
> has a problem now but you didn't answer and send a new patchset.
> It makes reviewers/maintainer time waste/confuse. Please, don't
> hurry to send a code. Before that, resolve reviewers's comments.
>
> On Tue, Mar 03, 2015 at 10:06:40AM +0800, Wang, Yalin wrote:
> > This patch add ClearPageDirty() to clear AnonPage dirty flag,
> > if not clear page dirty for this anon page, the page will never be
> > treated as freeable. We also make sure the shared AnonPage is not
> > freeable, we implement it by dirty all copyed AnonPage pte,
> > so that make sure the Anonpage will not become freeable, unless
> > all process which shared this page call madvise_free syscall.
>
> Please, spend more time to make description clear. I really doubt
> who understand this description without code inspection. :(
> Of course, I'm not a person to write description clear like native
> , either but just I'm sure I spend a more time to write description
> rather than coding, at least. :)
>
I see; I will send another mail for file-backed private mapped pages.
Sorry for my English.
I think your solution is OK:
your patch will make sure the anon page's pte is always dirty.
I've added some comments on your patch:
> ---
> mm/madvise.c | 1 -
> mm/memory.c | 9 +++++++--
> mm/rmap.c | 2 +-
> mm/vmscan.c | 3 +--
> 4 files changed, 9 insertions(+), 6 deletions(-)
>
> diff --git a/mm/madvise.c b/mm/madvise.c
> index 6d0fcb8..d64200e 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -309,7 +309,6 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned
> long addr,
> continue;
> }
>
> - ClearPageDirty(page);
> unlock_page(page);
> }
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 8ae52c9..2f45e77 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2460,9 +2460,14 @@ static int do_swap_page(struct mm_struct *mm, struct
> vm_area_struct *vma,
>
> inc_mm_counter_fast(mm, MM_ANONPAGES);
> dec_mm_counter_fast(mm, MM_SWAPENTS);
> - pte = mk_pte(page, vma->vm_page_prot);
> +
> + /*
> + * Every page swapped-out was pte_dirty so we makes pte dirty again.
> + * MADV_FREE relys on it.
> + */
> + pte = mk_pte(pte_mkdirty(page), vma->vm_page_prot);
pte_mkdirty() usage seems wrong here.
> if ((flags & FAULT_FLAG_WRITE) && reuse_swap_page(page)) {
> - pte = maybe_mkwrite(pte_mkdirty(pte), vma);
> + pte = maybe_mkwrite(pte, vma);
> flags &= ~FAULT_FLAG_WRITE;
> ret |= VM_FAULT_WRITE;
> exclusive = 1;
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 47b3ba8..34c1d66 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1268,7 +1268,7 @@ static int try_to_unmap_one(struct page *page, struct
> vm_area_struct *vma,
>
> if (flags & TTU_FREE) {
> VM_BUG_ON_PAGE(PageSwapCache(page), page);
> - if (!dirty && !PageDirty(page)) {
> + if (!dirty) {
> /* It's a freeable page by MADV_FREE */
> dec_mm_counter(mm, MM_ANONPAGES);
> goto discard;
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 671e47e..7f520c9 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -805,8 +805,7 @@ static enum page_references
> page_check_references(struct page *page,
> return PAGEREF_KEEP;
> }
>
> - if (PageAnon(page) && !pte_dirty && !PageSwapCache(page) &&
> - !PageDirty(page))
> + if (PageAnon(page) && !pte_dirty && !PageSwapCache(page))
> *freeable = true;
>
> /* Reclaim if clean, defer dirty pages to writeback */
> --
> 1.9.3
Could we remove the SetPageDirty(page) in try_to_free_swap() based on this patch?
Because your patch makes sure the pte is always dirty,
we don't need SetPageDirty();
the try_to_unmap() path will re-dirty the page during reclaim,
won't it?
Thanks
* Re: [RFC V3] mm: change mm_advise_free to clear page dirty
2015-03-03 3:59 ` Wang, Yalin
@ 2015-03-03 4:14 ` Minchan Kim
2015-03-03 6:46 ` Wang, Yalin
0 siblings, 1 reply; 8+ messages in thread
From: Minchan Kim @ 2015-03-03 4:14 UTC (permalink / raw)
To: Wang, Yalin
Cc: 'Michal Hocko', 'Andrew Morton',
'linux-kernel@vger.kernel.org',
'linux-mm@kvack.org', 'Rik van Riel',
'Johannes Weiner', 'Mel Gorman',
'Shaohua Li',
Hugh Dickins, Cyrill Gorcunov
On Tue, Mar 03, 2015 at 11:59:17AM +0800, Wang, Yalin wrote:
> > -----Original Message-----
> > From: Minchan Kim [mailto:minchan.kim@gmail.com] On Behalf Of Minchan Kim
> > Sent: Tuesday, March 03, 2015 11:26 AM
> > To: Wang, Yalin
> > Cc: 'Michal Hocko'; 'Andrew Morton'; 'linux-kernel@vger.kernel.org';
> > 'linux-mm@kvack.org'; 'Rik van Riel'; 'Johannes Weiner'; 'Mel Gorman';
> > 'Shaohua Li'; Hugh Dickins; Cyrill Gorcunov
> > Subject: Re: [RFC V3] mm: change mm_advise_free to clear page dirty
> >
> > Could you separte this patch in this patchset thread?
> > It's tackling differnt problem.
> >
> > As well, I had a question to previous thread about why shared page
> > has a problem now but you didn't answer and send a new patchset.
> > It makes reviewers/maintainer time waste/confuse. Please, don't
> > hurry to send a code. Before that, resolve reviewers's comments.
> >
> > On Tue, Mar 03, 2015 at 10:06:40AM +0800, Wang, Yalin wrote:
> > > This patch add ClearPageDirty() to clear AnonPage dirty flag,
> > > if not clear page dirty for this anon page, the page will never be
> > > treated as freeable. We also make sure the shared AnonPage is not
> > > freeable, we implement it by dirty all copyed AnonPage pte,
> > > so that make sure the Anonpage will not become freeable, unless
> > > all process which shared this page call madvise_free syscall.
> >
> > Please, spend more time to make description clear. I really doubt
> > who understand this description without code inspection. :(
> > Of course, I'm not a person to write description clear like native
> > , either but just I'm sure I spend a more time to write description
> > rather than coding, at least. :)
> >
> I see, I will send another mail for file private map pages.
> Sorry for my English expressions.
> I think your solution is ok,
> Your patch will make sure the anonpage pte will always be dirty.
> I add some comments for your patch:
>
> > ---
> > mm/madvise.c | 1 -
> > mm/memory.c | 9 +++++++--
> > mm/rmap.c | 2 +-
> > mm/vmscan.c | 3 +--
> > 4 files changed, 9 insertions(+), 6 deletions(-)
> >
> > diff --git a/mm/madvise.c b/mm/madvise.c
> > index 6d0fcb8..d64200e 100644
> > --- a/mm/madvise.c
> > +++ b/mm/madvise.c
> > @@ -309,7 +309,6 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned
> > long addr,
> > continue;
> > }
> >
> > - ClearPageDirty(page);
> > unlock_page(page);
> > }
> >
> > diff --git a/mm/memory.c b/mm/memory.c
> > index 8ae52c9..2f45e77 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -2460,9 +2460,14 @@ static int do_swap_page(struct mm_struct *mm, struct
> > vm_area_struct *vma,
> >
> > inc_mm_counter_fast(mm, MM_ANONPAGES);
> > dec_mm_counter_fast(mm, MM_SWAPENTS);
> > - pte = mk_pte(page, vma->vm_page_prot);
> > +
> > + /*
> > + * Every page swapped-out was pte_dirty so we makes pte dirty again.
> > + * MADV_FREE relys on it.
> > + */
> > + pte = mk_pte(pte_mkdirty(page), vma->vm_page_prot);
> pte_mkdirty() usage seems wrong here.
Argh, it reveals I didn't even test the build. My shame.
But the RFC tag might mitigate my shame. :)
I will fix it when I send a formal version.
Thanks for the review.
>
> > if ((flags & FAULT_FLAG_WRITE) && reuse_swap_page(page)) {
> > - pte = maybe_mkwrite(pte_mkdirty(pte), vma);
> > + pte = maybe_mkwrite(pte, vma);
> > flags &= ~FAULT_FLAG_WRITE;
> > ret |= VM_FAULT_WRITE;
> > exclusive = 1;
> > diff --git a/mm/rmap.c b/mm/rmap.c
> > index 47b3ba8..34c1d66 100644
> > --- a/mm/rmap.c
> > +++ b/mm/rmap.c
> > @@ -1268,7 +1268,7 @@ static int try_to_unmap_one(struct page *page, struct
> > vm_area_struct *vma,
> >
> > if (flags & TTU_FREE) {
> > VM_BUG_ON_PAGE(PageSwapCache(page), page);
> > - if (!dirty && !PageDirty(page)) {
> > + if (!dirty) {
> > /* It's a freeable page by MADV_FREE */
> > dec_mm_counter(mm, MM_ANONPAGES);
> > goto discard;
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 671e47e..7f520c9 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -805,8 +805,7 @@ static enum page_references
> > page_check_references(struct page *page,
> > return PAGEREF_KEEP;
> > }
> >
> > - if (PageAnon(page) && !pte_dirty && !PageSwapCache(page) &&
> > - !PageDirty(page))
> > + if (PageAnon(page) && !pte_dirty && !PageSwapCache(page))
> > *freeable = true;
> >
> > /* Reclaim if clean, defer dirty pages to writeback */
> > --
> > 1.9.3
> Could we remove SetPageDirty(page); in try_to_free_swap() function based on this patch?
> Because your patch will make sure the pte is always dirty,
> We don't need setpagedirty(),
> The try_to_unmap() path will re-dirty the page during reclaim path,
> Isn't it?
I don't know what side effects we would have if we removed SetPageDirty.
It might regress tmpfs, which can have such pages without a pte.
I don't want to take that risk in this patch.
If you want it, you could propose it separately once this patch lands.
--
Kind regards,
Minchan Kim
* RE: [RFC V3] mm: change mm_advise_free to clear page dirty
2015-03-03 4:14 ` Minchan Kim
@ 2015-03-03 6:46 ` Wang, Yalin
2015-03-03 13:40 ` Minchan Kim
0 siblings, 1 reply; 8+ messages in thread
From: Wang, Yalin @ 2015-03-03 6:46 UTC (permalink / raw)
To: 'Minchan Kim'
Cc: 'Michal Hocko', 'Andrew Morton',
'linux-kernel@vger.kernel.org',
'linux-mm@kvack.org', 'Rik van Riel',
'Johannes Weiner', 'Mel Gorman',
'Shaohua Li',
Hugh Dickins, Cyrill Gorcunov
> -----Original Message-----
> From: Minchan Kim [mailto:minchan.kim@gmail.com] On Behalf Of Minchan Kim
> Sent: Tuesday, March 03, 2015 12:15 PM
> To: Wang, Yalin
> Cc: 'Michal Hocko'; 'Andrew Morton'; 'linux-kernel@vger.kernel.org';
> 'linux-mm@kvack.org'; 'Rik van Riel'; 'Johannes Weiner'; 'Mel Gorman';
> 'Shaohua Li'; Hugh Dickins; Cyrill Gorcunov
> Subject: Re: [RFC V3] mm: change mm_advise_free to clear page dirty
>
> On Tue, Mar 03, 2015 at 11:59:17AM +0800, Wang, Yalin wrote:
> > > -----Original Message-----
> > > From: Minchan Kim [mailto:minchan.kim@gmail.com] On Behalf Of Minchan
> Kim
> > > Sent: Tuesday, March 03, 2015 11:26 AM
> > > To: Wang, Yalin
> > > Cc: 'Michal Hocko'; 'Andrew Morton'; 'linux-kernel@vger.kernel.org';
> > > 'linux-mm@kvack.org'; 'Rik van Riel'; 'Johannes Weiner'; 'Mel Gorman';
> > > 'Shaohua Li'; Hugh Dickins; Cyrill Gorcunov
> > > Subject: Re: [RFC V3] mm: change mm_advise_free to clear page dirty
> > >
> > > Could you separte this patch in this patchset thread?
> > > It's tackling differnt problem.
> > >
> > > As well, I had a question to previous thread about why shared page
> > > has a problem now but you didn't answer and send a new patchset.
> > > It makes reviewers/maintainer time waste/confuse. Please, don't
> > > hurry to send a code. Before that, resolve reviewers's comments.
> > >
> > > On Tue, Mar 03, 2015 at 10:06:40AM +0800, Wang, Yalin wrote:
> > > > This patch add ClearPageDirty() to clear AnonPage dirty flag,
> > > > if not clear page dirty for this anon page, the page will never be
> > > > treated as freeable. We also make sure the shared AnonPage is not
> > > > freeable, we implement it by dirty all copyed AnonPage pte,
> > > > so that make sure the Anonpage will not become freeable, unless
> > > > all process which shared this page call madvise_free syscall.
> > >
> > > Please, spend more time to make description clear. I really doubt
> > > who understand this description without code inspection. :(
> > > Of course, I'm not a person to write description clear like native
> > > , either but just I'm sure I spend a more time to write description
> > > rather than coding, at least. :)
> > >
> > I see, I will send another mail for file private map pages.
> > Sorry for my English expressions.
> > I think your solution is ok,
> > Your patch will make sure the anonpage pte will always be dirty.
> > I add some comments for your patch:
> >
> > > ---
> > > mm/madvise.c | 1 -
> > > mm/memory.c | 9 +++++++--
> > > mm/rmap.c | 2 +-
> > > mm/vmscan.c | 3 +--
> > > 4 files changed, 9 insertions(+), 6 deletions(-)
> > >
> > > diff --git a/mm/madvise.c b/mm/madvise.c
> > > index 6d0fcb8..d64200e 100644
> > > --- a/mm/madvise.c
> > > +++ b/mm/madvise.c
> > > @@ -309,7 +309,6 @@ static int madvise_free_pte_range(pmd_t *pmd,
> unsigned
> > > long addr,
> > > continue;
> > > }
> > >
> > > - ClearPageDirty(page);
> > > unlock_page(page);
> > > }
> > >
> > > diff --git a/mm/memory.c b/mm/memory.c
> > > index 8ae52c9..2f45e77 100644
> > > --- a/mm/memory.c
> > > +++ b/mm/memory.c
> > > @@ -2460,9 +2460,14 @@ static int do_swap_page(struct mm_struct *mm,
> struct
> > > vm_area_struct *vma,
> > >
> > > inc_mm_counter_fast(mm, MM_ANONPAGES);
> > > dec_mm_counter_fast(mm, MM_SWAPENTS);
> > > - pte = mk_pte(page, vma->vm_page_prot);
> > > +
> > > + /*
> > > + * Every page swapped-out was pte_dirty so we makes pte dirty again.
> > > + * MADV_FREE relys on it.
> > > + */
> > > + pte = mk_pte(pte_mkdirty(page), vma->vm_page_prot);
> > pte_mkdirty() usage seems wrong here.
>
> Argh, it reveals I didn't test even build. My shame.
> But RFC tag might mitigate my shame. :)
> I will fix it if I send a formal version.
> Thanks for the review.
>
> >
> > > if ((flags & FAULT_FLAG_WRITE) && reuse_swap_page(page)) {
> > > - pte = maybe_mkwrite(pte_mkdirty(pte), vma);
> > > + pte = maybe_mkwrite(pte, vma);
> > > flags &= ~FAULT_FLAG_WRITE;
> > > ret |= VM_FAULT_WRITE;
> > > exclusive = 1;
> > > diff --git a/mm/rmap.c b/mm/rmap.c
> > > index 47b3ba8..34c1d66 100644
> > > --- a/mm/rmap.c
> > > +++ b/mm/rmap.c
> > > @@ -1268,7 +1268,7 @@ static int try_to_unmap_one(struct page *page,
> struct
> > > vm_area_struct *vma,
> > >
> > > if (flags & TTU_FREE) {
> > > VM_BUG_ON_PAGE(PageSwapCache(page), page);
> > > - if (!dirty && !PageDirty(page)) {
> > > + if (!dirty) {
> > > /* It's a freeable page by MADV_FREE */
> > > dec_mm_counter(mm, MM_ANONPAGES);
> > > goto discard;
> > > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > > index 671e47e..7f520c9 100644
> > > --- a/mm/vmscan.c
> > > +++ b/mm/vmscan.c
> > > @@ -805,8 +805,7 @@ static enum page_references
> > > page_check_references(struct page *page,
> > > return PAGEREF_KEEP;
> > > }
> > >
> > > - if (PageAnon(page) && !pte_dirty && !PageSwapCache(page) &&
> > > - !PageDirty(page))
> > > + if (PageAnon(page) && !pte_dirty && !PageSwapCache(page))
> > > *freeable = true;
> > >
> > > /* Reclaim if clean, defer dirty pages to writeback */
> > > --
> > > 1.9.3
> > Could we remove SetPageDirty(page); in try_to_free_swap() function based
> on this patch?
> > Because your patch will make sure the pte is always dirty,
> > We don't need setpagedirty(),
> > The try_to_unmap() path will re-dirty the page during reclaim path,
> > Isn't it?
>
> I dont't know what side-effect we will have if we removes SetPageDirty.
> It might regress on tmpfs which would page without pte.
> I don't want to have such risk in this patch.
> If you want it, you could suggest it separately if this patch lands.
>
OK. Could you send out your change as a normal patch so that more of the related maintainers can review and comment on it?
Thanks
* Re: [RFC V3] mm: change mm_advise_free to clear page dirty
2015-03-03 6:46 ` Wang, Yalin
@ 2015-03-03 13:40 ` Minchan Kim
0 siblings, 0 replies; 8+ messages in thread
From: Minchan Kim @ 2015-03-03 13:40 UTC (permalink / raw)
To: Wang, Yalin
Cc: 'Michal Hocko', 'Andrew Morton',
'linux-kernel@vger.kernel.org',
'linux-mm@kvack.org', 'Rik van Riel',
'Johannes Weiner', 'Mel Gorman',
'Shaohua Li',
Hugh Dickins, Cyrill Gorcunov
On Tue, Mar 03, 2015 at 02:46:40PM +0800, Wang, Yalin wrote:
> > -----Original Message-----
> > From: Minchan Kim [mailto:minchan.kim@gmail.com] On Behalf Of Minchan Kim
> > Sent: Tuesday, March 03, 2015 12:15 PM
> > To: Wang, Yalin
> > Cc: 'Michal Hocko'; 'Andrew Morton'; 'linux-kernel@vger.kernel.org';
> > 'linux-mm@kvack.org'; 'Rik van Riel'; 'Johannes Weiner'; 'Mel Gorman';
> > 'Shaohua Li'; Hugh Dickins; Cyrill Gorcunov
> > Subject: Re: [RFC V3] mm: change mm_advise_free to clear page dirty
> >
> > On Tue, Mar 03, 2015 at 11:59:17AM +0800, Wang, Yalin wrote:
> > > > -----Original Message-----
> > > > From: Minchan Kim [mailto:minchan.kim@gmail.com] On Behalf Of Minchan
> > Kim
> > > > Sent: Tuesday, March 03, 2015 11:26 AM
> > > > To: Wang, Yalin
> > > > Cc: 'Michal Hocko'; 'Andrew Morton'; 'linux-kernel@vger.kernel.org';
> > > > 'linux-mm@kvack.org'; 'Rik van Riel'; 'Johannes Weiner'; 'Mel Gorman';
> > > > 'Shaohua Li'; Hugh Dickins; Cyrill Gorcunov
> > > > Subject: Re: [RFC V3] mm: change mm_advise_free to clear page dirty
> > > >
> > > > Could you separte this patch in this patchset thread?
> > > > It's tackling differnt problem.
> > > >
> > > > As well, I had a question to previous thread about why shared page
> > > > has a problem now but you didn't answer and send a new patchset.
> > > > It makes reviewers/maintainer time waste/confuse. Please, don't
> > > > hurry to send a code. Before that, resolve reviewers's comments.
> > > >
> > > > On Tue, Mar 03, 2015 at 10:06:40AM +0800, Wang, Yalin wrote:
> > > > > This patch add ClearPageDirty() to clear AnonPage dirty flag,
> > > > > if not clear page dirty for this anon page, the page will never be
> > > > > treated as freeable. We also make sure the shared AnonPage is not
> > > > > freeable, we implement it by dirty all copyed AnonPage pte,
> > > > > so that make sure the Anonpage will not become freeable, unless
> > > > > all process which shared this page call madvise_free syscall.
> > > >
> > > > Please, spend more time to make description clear. I really doubt
> > > > who understand this description without code inspection. :(
> > > > Of course, I'm not a person to write description clear like native
> > > > , either but just I'm sure I spend a more time to write description
> > > > rather than coding, at least. :)
> > > >
> > > I see, I will send another mail for file private map pages.
> > > Sorry for my English expressions.
> > > I think your solution is ok,
> > > Your patch will make sure the anonpage pte will always be dirty.
> > > I add some comments for your patch:
> > >
> > > > ---
> > > > mm/madvise.c | 1 -
> > > > mm/memory.c | 9 +++++++--
> > > > mm/rmap.c | 2 +-
> > > > mm/vmscan.c | 3 +--
> > > > 4 files changed, 9 insertions(+), 6 deletions(-)
> > > >
> > > > diff --git a/mm/madvise.c b/mm/madvise.c
> > > > index 6d0fcb8..d64200e 100644
> > > > --- a/mm/madvise.c
> > > > +++ b/mm/madvise.c
> > > > @@ -309,7 +309,6 @@ static int madvise_free_pte_range(pmd_t *pmd,
> > unsigned
> > > > long addr,
> > > > continue;
> > > > }
> > > >
> > > > - ClearPageDirty(page);
> > > > unlock_page(page);
> > > > }
> > > >
> > > > diff --git a/mm/memory.c b/mm/memory.c
> > > > index 8ae52c9..2f45e77 100644
> > > > --- a/mm/memory.c
> > > > +++ b/mm/memory.c
> > > > @@ -2460,9 +2460,14 @@ static int do_swap_page(struct mm_struct *mm,
> > struct
> > > > vm_area_struct *vma,
> > > >
> > > > inc_mm_counter_fast(mm, MM_ANONPAGES);
> > > > dec_mm_counter_fast(mm, MM_SWAPENTS);
> > > > - pte = mk_pte(page, vma->vm_page_prot);
> > > > +
> > > > + /*
> > > > + * Every page swapped-out was pte_dirty so we makes pte dirty again.
> > > > + * MADV_FREE relys on it.
> > > > + */
> > > > + pte = mk_pte(pte_mkdirty(page), vma->vm_page_prot);
> > > pte_mkdirty() usage seems wrong here.
> >
> > Argh, it reveals I didn't test even build. My shame.
> > But RFC tag might mitigate my shame. :)
> > I will fix it if I send a formal version.
> > Thanks for the review.
> >
> > >
> > > > if ((flags & FAULT_FLAG_WRITE) && reuse_swap_page(page)) {
> > > > - pte = maybe_mkwrite(pte_mkdirty(pte), vma);
> > > > + pte = maybe_mkwrite(pte, vma);
> > > > flags &= ~FAULT_FLAG_WRITE;
> > > > ret |= VM_FAULT_WRITE;
> > > > exclusive = 1;
> > > > diff --git a/mm/rmap.c b/mm/rmap.c
> > > > index 47b3ba8..34c1d66 100644
> > > > --- a/mm/rmap.c
> > > > +++ b/mm/rmap.c
> > > > @@ -1268,7 +1268,7 @@ static int try_to_unmap_one(struct page *page,
> > struct
> > > > vm_area_struct *vma,
> > > >
> > > > if (flags & TTU_FREE) {
> > > > VM_BUG_ON_PAGE(PageSwapCache(page), page);
> > > > - if (!dirty && !PageDirty(page)) {
> > > > + if (!dirty) {
> > > > /* It's a freeable page by MADV_FREE */
> > > > dec_mm_counter(mm, MM_ANONPAGES);
> > > > goto discard;
> > > > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > > > index 671e47e..7f520c9 100644
> > > > --- a/mm/vmscan.c
> > > > +++ b/mm/vmscan.c
> > > > @@ -805,8 +805,7 @@ static enum page_references
> > > > page_check_references(struct page *page,
> > > > return PAGEREF_KEEP;
> > > > }
> > > >
> > > > - if (PageAnon(page) && !pte_dirty && !PageSwapCache(page) &&
> > > > - !PageDirty(page))
> > > > + if (PageAnon(page) && !pte_dirty && !PageSwapCache(page))
> > > > *freeable = true;
> > > >
> > > > /* Reclaim if clean, defer dirty pages to writeback */
> > > > --
> > > > 1.9.3
> > > Could we remove SetPageDirty(page); in try_to_free_swap() function based
> > on this patch?
> > > Because your patch will make sure the pte is always dirty,
> > > We don't need setpagedirty(),
> > > The try_to_unmap() path will re-dirty the page during reclaim path,
> > > Isn't it?
> >
> > I dont't know what side-effect we will have if we removes SetPageDirty.
> > It might regress on tmpfs which would page without pte.
> > I don't want to have such risk in this patch.
> > If you want it, you could suggest it separately if this patch lands.
> >
> Ok, Could you send out your change as a normal patch for more related maintainers to review /comment it?
No problem, but let's wait a few days to see if we're lucky enough
that they grab a time slot to review it. :)
Thanks.
--
Kind regards,
Minchan Kim
* Re: [RFC V3] mm: change mm_advise_free to clear page dirty
2015-03-03 3:25 [RFC V3] mm: change mm_advise_free to clear page dirty Minchan Kim
2015-03-03 3:59 ` Wang, Yalin
@ 2015-03-05 15:35 ` Michal Hocko
2015-03-09 0:57 ` Minchan Kim
1 sibling, 1 reply; 8+ messages in thread
From: Michal Hocko @ 2015-03-05 15:35 UTC (permalink / raw)
To: Minchan Kim
Cc: Wang, Yalin, 'Andrew Morton',
'linux-kernel@vger.kernel.org',
'linux-mm@kvack.org', 'Rik van Riel',
'Johannes Weiner', 'Mel Gorman',
'Shaohua Li',
Hugh Dickins, Cyrill Gorcunov
On Tue 03-03-15 12:25:51, Minchan Kim wrote:
[...]
> From 30c6d5b35a3dc7e451041183ce5efd6a6c42bf88 Mon Sep 17 00:00:00 2001
> From: Minchan Kim <minchan@kernel.org>
> Date: Tue, 3 Mar 2015 10:06:59 +0900
> Subject: [RFC] mm: make every pte dirty on do_swap_page
Hi Minchan, could you resend this patch separately? I am afraid that
this one got so entangled with originally unrelated issues that
people might miss it.
Thanks!
--
Michal Hocko
SUSE Labs
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [RFC V3] mm: change mm_advise_free to clear page dirty
2015-03-05 15:35 ` Michal Hocko
@ 2015-03-09 0:57 ` Minchan Kim
0 siblings, 0 replies; 8+ messages in thread
From: Minchan Kim @ 2015-03-09 0:57 UTC (permalink / raw)
To: Michal Hocko
Cc: Wang, Yalin, 'Andrew Morton',
'linux-kernel@vger.kernel.org',
'linux-mm@kvack.org', 'Rik van Riel',
'Johannes Weiner', 'Mel Gorman',
'Shaohua Li',
Hugh Dickins, Cyrill Gorcunov
Hello Michal,
On Thu, Mar 05, 2015 at 04:35:05PM +0100, Michal Hocko wrote:
> On Tue 03-03-15 12:25:51, Minchan Kim wrote:
> [...]
> > From 30c6d5b35a3dc7e451041183ce5efd6a6c42bf88 Mon Sep 17 00:00:00 2001
> > From: Minchan Kim <minchan@kernel.org>
> > Date: Tue, 3 Mar 2015 10:06:59 +0900
> > Subject: [RFC] mm: make every pte dirty on do_swap_page
>
> Hi Minchan, could you resend this patch separately? I am afraid that
> this one got so entangled with originally unrelated issues that
> people might miss it.
>
> Thanks!
No problem. Thanks for the review.
I will resend it this week, but I'm afraid everybody will be at LSF/MM,
so they will be busy with hard work there. :)
> --
> Michal Hocko
> SUSE Labs
--
Kind regards,
Minchan Kim
^ permalink raw reply [flat|nested] 8+ messages in thread
* [PATCH RFC 1/4] mm: throttle MADV_FREE
@ 2015-02-24 8:18 Minchan Kim
2015-02-24 15:43 ` Michal Hocko
0 siblings, 1 reply; 8+ messages in thread
From: Minchan Kim @ 2015-02-24 8:18 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-kernel, linux-mm, Rik van Riel, Michal Hocko,
Johannes Weiner, Mel Gorman, Shaohua Li, Yalin.Wang, Minchan Kim
Recently, Shaohua reported that MADV_FREE is much slower than
MADV_DONTNEED in his MADV_FREE bomb test. The reason is that many
applications stalled in direct reclaim, since kswapd's reclaim speed
cannot keep up with the applications' allocation speed, which causes
lots of stalls and lock contention.
This patch throttles MADV_FREE so that it only takes effect when there
are enough free pages in the system not to trigger background/direct
reclaim. Otherwise, MADV_FREE falls back to MADV_DONTNEED, because
there is no point in delaying the freeing if we know the system is
under memory pressure.
When I tested the patch on my 3G RAM + 12 CPU + 8G swap machine,
test: 12 processes
loop = 5;
mmap(512M);
while (loop--) {
memset(512M);
madvise(MADV_FREE or MADV_DONTNEED);
}
1) dontneed: 6.78user 234.09system 0:48.89elapsed
2) madvfree: 6.03user 401.17system 1:30.67elapsed
3) madvfree + this patch: 5.68user 113.42system 0:36.52elapsed
It's a clear win.
Reported-by: Shaohua Li <shli@kernel.org>
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
mm/madvise.c | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/mm/madvise.c b/mm/madvise.c
index 6d0fcb8921c2..81bb26ecf064 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -523,8 +523,17 @@ madvise_vma(struct vm_area_struct *vma, struct vm_area_struct **prev,
* XXX: In this implementation, MADV_FREE works like
* MADV_DONTNEED on swapless system or full swap.
*/
- if (get_nr_swap_pages() > 0)
- return madvise_free(vma, prev, start, end);
+ if (get_nr_swap_pages() > 0) {
+ unsigned long threshold;
+ /*
> + * If we have trouble with memory pressure (i.e., we are
> + * under the high watermark), free pages instantly.
+ */
+ threshold = min_free_kbytes >> (PAGE_SHIFT - 10);
+ threshold = threshold + (threshold >> 1);
+ if (nr_free_pages() > threshold)
+ return madvise_free(vma, prev, start, end);
+ }
/* passthrough */
case MADV_DONTNEED:
return madvise_dontneed(vma, prev, start, end);
--
1.9.1
^ permalink raw reply related [flat|nested] 8+ messages in thread
* Re: [PATCH RFC 1/4] mm: throttle MADV_FREE
2015-02-24 8:18 [PATCH RFC 1/4] mm: throttle MADV_FREE Minchan Kim
@ 2015-02-24 15:43 ` Michal Hocko
2015-02-25 0:08 ` Minchan Kim
0 siblings, 1 reply; 8+ messages in thread
From: Michal Hocko @ 2015-02-24 15:43 UTC (permalink / raw)
To: Minchan Kim
Cc: Andrew Morton, linux-kernel, linux-mm, Rik van Riel,
Johannes Weiner, Mel Gorman, Shaohua Li, Yalin.Wang
On Tue 24-02-15 17:18:14, Minchan Kim wrote:
> Recently, Shaohua reported that MADV_FREE is much slower than
> MADV_DONTNEED in his MADV_FREE bomb test. The reason is that many
> applications stalled in direct reclaim, since kswapd's reclaim speed
> cannot keep up with the applications' allocation speed, which causes
> lots of stalls and lock contention.
I am not sure I understand this correctly. So the issue is that there is
huge number of MADV_FREE on the LRU and they are not close to the tail
of the list so the reclaim has to do a lot of work before it starts
dropping them?
> This patch throttles MADV_FREE so that it only takes effect when there
> are enough free pages in the system not to trigger background/direct
> reclaim. Otherwise, MADV_FREE falls back to MADV_DONTNEED, because
> there is no point in delaying the freeing if we know the system is
> under memory pressure.
Hmm, this is still conforming to the documentation because the kernel is
free to free pages at its convenience. I am not sure this is a good
idea, though. Why some MADV_FREE calls should be treated differently?
Wouldn't that lead to hard to predict behavior? E.g. LIFO reused blocks
would work without long stalls most of the time, except when there is
memory pressure.
Comparison to MADV_DONTNEED is not very fair IMHO because the scope of the
two calls is different.
> When I test the patch on my 3G machine + 12 CPU + 8G swap,
> test: 12 processes
>
> loop = 5;
> mmap(512M);
Who is eating the rest of the memory?
> while (loop--) {
> memset(512M);
> madvise(MADV_FREE or MADV_DONTNEED);
> }
>
> 1) dontneed: 6.78user 234.09system 0:48.89elapsed
> 2) madvfree: 6.03user 401.17system 1:30.67elapsed
> 3) madvfree + this patch: 5.68user 113.42system 0:36.52elapsed
>
> It's a clear win.
>
> Reported-by: Shaohua Li <shli@kernel.org>
> Signed-off-by: Minchan Kim <minchan@kernel.org>
I don't know. This looks like a hack with hard to predict consequences
which might trigger pathological corner cases.
> ---
> mm/madvise.c | 13 +++++++++++--
> 1 file changed, 11 insertions(+), 2 deletions(-)
>
> diff --git a/mm/madvise.c b/mm/madvise.c
> index 6d0fcb8921c2..81bb26ecf064 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -523,8 +523,17 @@ madvise_vma(struct vm_area_struct *vma, struct vm_area_struct **prev,
> * XXX: In this implementation, MADV_FREE works like
> * MADV_DONTNEED on swapless system or full swap.
> */
> - if (get_nr_swap_pages() > 0)
> - return madvise_free(vma, prev, start, end);
> + if (get_nr_swap_pages() > 0) {
> + unsigned long threshold;
> + /*
> > + * If we have trouble with memory pressure (i.e., we are
> > + * under the high watermark), free pages instantly.
> + */
> + threshold = min_free_kbytes >> (PAGE_SHIFT - 10);
> + threshold = threshold + (threshold >> 1);
Why threshold += threshold >> 1 ?
> + if (nr_free_pages() > threshold)
> + return madvise_free(vma, prev, start, end);
> + }
> /* passthrough */
> case MADV_DONTNEED:
> return madvise_dontneed(vma, prev, start, end);
> --
> 1.9.1
>
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org. For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
--
Michal Hocko
SUSE Labs
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [PATCH RFC 1/4] mm: throttle MADV_FREE
2015-02-24 15:43 ` Michal Hocko
@ 2015-02-25 0:08 ` Minchan Kim
2015-02-27 3:37 ` [RFC] mm: change mm_advise_free to clear page dirty Wang, Yalin
0 siblings, 1 reply; 8+ messages in thread
From: Minchan Kim @ 2015-02-25 0:08 UTC (permalink / raw)
To: Michal Hocko
Cc: Andrew Morton, linux-kernel, linux-mm, Rik van Riel,
Johannes Weiner, Mel Gorman, Shaohua Li, Yalin.Wang
Hi Michal,
On Tue, Feb 24, 2015 at 04:43:18PM +0100, Michal Hocko wrote:
> On Tue 24-02-15 17:18:14, Minchan Kim wrote:
> > Recently, Shaohua reported that MADV_FREE is much slower than
> > MADV_DONTNEED in his MADV_FREE bomb test. The reason is that many
> > applications stalled in direct reclaim, since kswapd's reclaim speed
> > cannot keep up with the applications' allocation speed, which causes
> > lots of stalls and lock contention.
>
> I am not sure I understand this correctly. So the issue is that there is
> huge number of MADV_FREE on the LRU and they are not close to the tail
> of the list so the reclaim has to do a lot of work before it starts
> dropping them?
No, Shaohua already tested deactivating the hinted pages to the head/tail
of the inactive anon LRU, and he said it didn't solve his problem.
I thought the main culprit was the scanning/rotating/throttling in
the direct reclaim path.
>
> > This patch throttles MADV_FREE so that it only takes effect when there
> > are enough free pages in the system not to trigger background/direct
> > reclaim. Otherwise, MADV_FREE falls back to MADV_DONTNEED, because
> > there is no point in delaying the freeing if we know the system is
> > under memory pressure.
>
> Hmm, this is still conforming to the documentation because the kernel is
> free to free pages at its convenience. I am not sure this is a good
> idea, though. Why some MADV_FREE calls should be treated differently?
It's a hint for the VM to free pages, so I think it's okay to free them
instantly sometimes if that can avoid something more important, like a
system stall. IOW, madvise is just a hint, not a strict rule.
> Wouldn't that lead to hard to predict behavior? E.g. LIFO reused blocks
> would work without long stalls most of the time, except when there is
> memory pressure.
True.
>
> Comparison to MADV_DONTNEED is not very fair IMHO because the scope of the
> two calls is different.
I agree it's not an apples-to-apples comparison.
Actually, MADV_FREE moves the cost from the hot path (i.e., the system
call path) to the slow path (i.e., the reclaim context), so it would be
slower under continuous memory pressure due to the large overhead of
freeing pages in the reclaim context. So it would be good if the kernel
detected that nicely and prevented the situation. This patch aims for that.
>
> > When I test the patch on my 3G machine + 12 CPU + 8G swap,
> > test: 12 processes
> >
> > loop = 5;
> > mmap(512M);
>
> Who is eating the rest of the memory?
As I wrote above, there are 12 processes running the test below.
IOW, 512M * 12 = 6G, but system RAM is just 3G.
>
> > while (loop--) {
> > memset(512M);
> > madvise(MADV_FREE or MADV_DONTNEED);
> > }
> >
> > 1) dontneed: 6.78user 234.09system 0:48.89elapsed
> > 2) madvfree: 6.03user 401.17system 1:30.67elapsed
> > 3) madvfree + this patch: 5.68user 113.42system 0:36.52elapsed
> >
> > It's a clear win.
> >
> > Reported-by: Shaohua Li <shli@kernel.org>
> > Signed-off-by: Minchan Kim <minchan@kernel.org>
>
> I don't know. This looks like a hack with hard to predict consequences
> which might trigger pathological corner cases.
Yep, it might be. That's why I tagged it RFC; I hope others can suggest
a better idea.
>
> > ---
> > mm/madvise.c | 13 +++++++++++--
> > 1 file changed, 11 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/madvise.c b/mm/madvise.c
> > index 6d0fcb8921c2..81bb26ecf064 100644
> > --- a/mm/madvise.c
> > +++ b/mm/madvise.c
> > @@ -523,8 +523,17 @@ madvise_vma(struct vm_area_struct *vma, struct vm_area_struct **prev,
> > * XXX: In this implementation, MADV_FREE works like
> > * MADV_DONTNEED on swapless system or full swap.
> > */
> > - if (get_nr_swap_pages() > 0)
> > - return madvise_free(vma, prev, start, end);
> > + if (get_nr_swap_pages() > 0) {
> > + unsigned long threshold;
> > + /*
> > > + * If we have trouble with memory pressure (i.e., we are
> > > + * under the high watermark), free pages instantly.
> > + */
> > + threshold = min_free_kbytes >> (PAGE_SHIFT - 10);
> > + threshold = threshold + (threshold >> 1);
>
> Why threshold += threshold >> 1 ?
I wanted to trigger this logic when the number of free pages is under the high watermark.
>
> > + if (nr_free_pages() > threshold)
> > + return madvise_free(vma, prev, start, end);
> > + }
> > /* passthrough */
> > case MADV_DONTNEED:
> > return madvise_dontneed(vma, prev, start, end);
> > --
> > 1.9.1
> >
>
> --
> Michal Hocko
> SUSE Labs
--
Kind regards,
Minchan Kim
^ permalink raw reply [flat|nested] 8+ messages in thread
* [RFC] mm: change mm_advise_free to clear page dirty
2015-02-25 0:08 ` Minchan Kim
@ 2015-02-27 3:37 ` Wang, Yalin
2015-02-27 21:02 ` Michal Hocko
0 siblings, 1 reply; 8+ messages in thread
From: Wang, Yalin @ 2015-02-27 3:37 UTC (permalink / raw)
To: 'Minchan Kim',
Michal Hocko, Andrew Morton, linux-kernel, linux-mm,
Rik van Riel, Johannes Weiner, Mel Gorman, Shaohua Li
This patch adds ClearPageDirty() to clear the dirty flag of anonymous
pages. The anon page's mapcount must be 1, so that the page is only used
by the current process and not shared with another process via fork().
If we do not clear the page dirty flag for such an anon page, the page
will never be treated as freeable.
Signed-off-by: Yalin Wang <yalin.wang@sonymobile.com>
---
mm/madvise.c | 15 +++++----------
1 file changed, 5 insertions(+), 10 deletions(-)
diff --git a/mm/madvise.c b/mm/madvise.c
index 6d0fcb8..257925a 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -297,22 +297,17 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
continue;
page = vm_normal_page(vma, addr, ptent);
- if (!page)
+ if (!page || !PageAnon(page) || !trylock_page(page))
continue;
if (PageSwapCache(page)) {
- if (!trylock_page(page))
+ if (!try_to_free_swap(page))
continue;
-
- if (!try_to_free_swap(page)) {
- unlock_page(page);
- continue;
- }
-
- ClearPageDirty(page);
- unlock_page(page);
}
+ if (page_mapcount(page) == 1)
+ ClearPageDirty(page);
+ unlock_page(page);
/*
* Some of architecture(ex, PPC) don't update TLB
* with set_pte_at and tlb_remove_tlb_entry so for
--
2.2.2
^ permalink raw reply related [flat|nested] 8+ messages in thread
* Re: [RFC] mm: change mm_advise_free to clear page dirty
2015-02-27 3:37 ` [RFC] mm: change mm_advise_free to clear page dirty Wang, Yalin
@ 2015-02-27 21:02 ` Michal Hocko
2015-02-28 2:11 ` Wang, Yalin
0 siblings, 1 reply; 8+ messages in thread
From: Michal Hocko @ 2015-02-27 21:02 UTC (permalink / raw)
To: Wang, Yalin
Cc: 'Minchan Kim',
Andrew Morton, linux-kernel, linux-mm, Rik van Riel,
Johannes Weiner, Mel Gorman, Shaohua Li
On Fri 27-02-15 11:37:18, Wang, Yalin wrote:
> This patch adds ClearPageDirty() to clear the dirty flag of anonymous
> pages. The anon page's mapcount must be 1, so that the page is only used
> by the current process and not shared with another process via fork().
> If we do not clear the page dirty flag for such an anon page, the page
> will never be treated as freeable.
Very well spotted! I haven't noticed that during the review.
> Signed-off-by: Yalin Wang <yalin.wang@sonymobile.com>
> ---
> mm/madvise.c | 15 +++++----------
> 1 file changed, 5 insertions(+), 10 deletions(-)
>
> diff --git a/mm/madvise.c b/mm/madvise.c
> index 6d0fcb8..257925a 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -297,22 +297,17 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
> continue;
>
> page = vm_normal_page(vma, addr, ptent);
> - if (!page)
> + if (!page || !PageAnon(page) || !trylock_page(page))
> continue;
PageAnon check seems to be redundant because we are not allowing
MADV_FREE on any !anon private mappings AFAIR.
>
> if (PageSwapCache(page)) {
> - if (!trylock_page(page))
> + if (!try_to_free_swap(page))
> continue;
You need to unlock the page here.
> -
> - if (!try_to_free_swap(page)) {
> - unlock_page(page);
> - continue;
> - }
> -
> - ClearPageDirty(page);
> - unlock_page(page);
> }
>
> + if (page_mapcount(page) == 1)
> + ClearPageDirty(page);
Please add a comment about why we need ClearPageDirty even for
!PageSwapCache. Anon pages are usually not marked dirty AFAIR. The
reason seems to be a racing try_to_free_swap() which sets the page that
way (although I do not remember why we are doing that in the first
place...)
> + unlock_page(page);
> /*
> * Some of architecture(ex, PPC) don't update TLB
> * with set_pte_at and tlb_remove_tlb_entry so for
> --
> 2.2.2
--
Michal Hocko
SUSE Labs
^ permalink raw reply [flat|nested] 8+ messages in thread
* RE: [RFC] mm: change mm_advise_free to clear page dirty
2015-02-27 21:02 ` Michal Hocko
@ 2015-02-28 2:11 ` Wang, Yalin
2015-02-28 6:01 ` [RFC V2] " Wang, Yalin
0 siblings, 1 reply; 8+ messages in thread
From: Wang, Yalin @ 2015-02-28 2:11 UTC (permalink / raw)
To: 'Michal Hocko'
Cc: 'Minchan Kim',
Andrew Morton, linux-kernel, linux-mm, Rik van Riel,
Johannes Weiner, Mel Gorman, Shaohua Li
> -----Original Message-----
> From: Michal Hocko [mailto:mstsxfx@gmail.com] On Behalf Of Michal Hocko
> Sent: Saturday, February 28, 2015 5:03 AM
> To: Wang, Yalin
> Cc: 'Minchan Kim'; Andrew Morton; linux-kernel@vger.kernel.org; linux-
> mm@kvack.org; Rik van Riel; Johannes Weiner; Mel Gorman; Shaohua Li
> Subject: Re: [RFC] mm: change mm_advise_free to clear page dirty
>
> On Fri 27-02-15 11:37:18, Wang, Yalin wrote:
> > This patch adds ClearPageDirty() to clear the dirty flag of anonymous
> > pages. The anon page's mapcount must be 1, so that the page is only used
> > by the current process and not shared with another process via fork().
> > If we do not clear the page dirty flag for such an anon page, the page
> > will never be treated as freeable.
>
> Very well spotted! I haven't noticed that during the review.
>
> > Signed-off-by: Yalin Wang <yalin.wang@sonymobile.com>
> > ---
> > mm/madvise.c | 15 +++++----------
> > 1 file changed, 5 insertions(+), 10 deletions(-)
> >
> > diff --git a/mm/madvise.c b/mm/madvise.c
> > index 6d0fcb8..257925a 100644
> > --- a/mm/madvise.c
> > +++ b/mm/madvise.c
> > @@ -297,22 +297,17 @@ static int madvise_free_pte_range(pmd_t *pmd,
> unsigned long addr,
> > continue;
> >
> > page = vm_normal_page(vma, addr, ptent);
> > - if (!page)
> > + if (!page || !PageAnon(page) || !trylock_page(page))
> > continue;
>
> PageAnon check seems to be redundant because we are not allowing
> MADV_FREE on any !anon private mappings AFAIR.
I only see this check:
/* MADV_FREE works for only anon vma at the moment */
if (vma->vm_file)
return -EINVAL;
but a private file mapping can also contain anon pages sometimes; do we
need to change it to something like this:
if (vma->vm_flags & VM_SHARED)
return -EINVAL;
> >
> > if (PageSwapCache(page)) {
> > - if (!trylock_page(page))
> > + if (!try_to_free_swap(page))
> > continue;
>
> You need to unlock the page here.
Good spot.
> > -
> > - if (!try_to_free_swap(page)) {
> > - unlock_page(page);
> > - continue;
> > - }
> > -
> > - ClearPageDirty(page);
> > - unlock_page(page);
> > }
> >
> > + if (page_mapcount(page) == 1)
> > + ClearPageDirty(page);
>
> Please add a comment about why we need ClearPageDirty even for
> !PageSwapCache. Anon pages are usually not marked dirty AFAIR. The
> reason seems to be a racing try_to_free_swap() which sets the page that
> way (although I do not remember why we are doing that in the first
> place...)
>
Using page_mapcount to judge whether a page's dirty flag can be cleared
does not seem like a very good solution, because we don't know how many
ptes share this page. I am thinking about whether there is a good
solution for the shared anon page case.
^ permalink raw reply [flat|nested] 8+ messages in thread
* [RFC V2] mm: change mm_advise_free to clear page dirty
2015-02-28 2:11 ` Wang, Yalin
@ 2015-02-28 6:01 ` Wang, Yalin
2015-03-02 12:38 ` Michal Hocko
0 siblings, 1 reply; 8+ messages in thread
From: Wang, Yalin @ 2015-02-28 6:01 UTC (permalink / raw)
To: 'Michal Hocko'
Cc: 'Minchan Kim', 'Andrew Morton',
'linux-kernel@vger.kernel.org',
'linux-mm@kvack.org', 'Rik van Riel',
'Johannes Weiner', 'Mel Gorman',
'Shaohua Li'
This patch adds ClearPageDirty() to clear the dirty flag of anonymous
pages; if we do not clear the page dirty flag for an anon page, the page
will never be treated as freeable. We also make sure a shared anon page
is not freeable; we implement this by dirtying the pte of every copied
anon page, so that the page cannot become freeable unless every process
sharing it calls the madvise_free syscall.
Another change is that we also handle file-mapped pages: we just clear
the pte young bit for file mappings. This is useful because it makes the
reclaim path move file pages into the inactive LRU list more
aggressively.
Signed-off-by: Yalin Wang <yalin.wang@sonymobile.com>
---
mm/madvise.c | 26 +++++++++++++++-----------
mm/memory.c | 12 ++++++++++--
2 files changed, 25 insertions(+), 13 deletions(-)
diff --git a/mm/madvise.c b/mm/madvise.c
index 6d0fcb8..712756b 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -299,30 +299,38 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
page = vm_normal_page(vma, addr, ptent);
if (!page)
continue;
+ if (!PageAnon(page))
+ goto set_pte;
+ if (!trylock_page(page))
+ continue;
if (PageSwapCache(page)) {
- if (!trylock_page(page))
- continue;
-
if (!try_to_free_swap(page)) {
unlock_page(page);
continue;
}
-
- ClearPageDirty(page);
- unlock_page(page);
}
/*
+ * We clear the page dirty flag for an anon page whether or not it
+ * is in the swap cache; an anon page not in the swap cache can also
+ * have the dirty flag set. This happens when an anon page is
+ * removed from the swap cache by try_to_free_swap().
+ */
+ ClearPageDirty(page);
+ unlock_page(page);
+ /*
* Some of architecture(ex, PPC) don't update TLB
* with set_pte_at and tlb_remove_tlb_entry so for
* the portability, remap the pte with old|clean
* after pte clearing.
*/
+set_pte:
ptent = ptep_get_and_clear_full(mm, addr, pte,
tlb->fullmm);
ptent = pte_mkold(ptent);
- ptent = pte_mkclean(ptent);
+ if (PageAnon(page))
+ ptent = pte_mkclean(ptent);
set_pte_at(mm, addr, pte, ptent);
tlb_remove_tlb_entry(tlb, pte, addr);
}
@@ -364,10 +372,6 @@ static int madvise_free_single_vma(struct vm_area_struct *vma,
if (vma->vm_flags & (VM_LOCKED|VM_HUGETLB|VM_PFNMAP))
return -EINVAL;
- /* MADV_FREE works for only anon vma at the moment */
- if (vma->vm_file)
- return -EINVAL;
-
start = max(vma->vm_start, start_addr);
if (start >= vma->vm_end)
return -EINVAL;
diff --git a/mm/memory.c b/mm/memory.c
index 8068893..3d949b3 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -874,10 +874,18 @@ copy_one_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
if (page) {
get_page(page);
page_dup_rmap(page);
- if (PageAnon(page))
+ if (PageAnon(page)) {
+ /*
+ * We dirty the copied pte for an anon page; this is
+ * useful for madvise_free_pte_range(), since it prevents
+ * a shared anon page from being freed by the madvise_free
+ * syscall.
+ */
+ pte = pte_mkdirty(pte);
rss[MM_ANONPAGES]++;
- else
+ } else {
rss[MM_FILEPAGES]++;
+ }
}
out_set_pte:
--
2.2.2
^ permalink raw reply related [flat|nested] 8+ messages in thread
* Re: [RFC V2] mm: change mm_advise_free to clear page dirty
2015-02-28 6:01 ` [RFC V2] " Wang, Yalin
@ 2015-03-02 12:38 ` Michal Hocko
2015-03-03 2:06 ` [RFC V3] " Wang, Yalin
0 siblings, 1 reply; 8+ messages in thread
From: Michal Hocko @ 2015-03-02 12:38 UTC (permalink / raw)
To: Wang, Yalin
Cc: 'Minchan Kim', 'Andrew Morton',
'linux-kernel@vger.kernel.org',
'linux-mm@kvack.org', 'Rik van Riel',
'Johannes Weiner', 'Mel Gorman',
'Shaohua Li'
On Sat 28-02-15 14:01:46, Wang, Yalin wrote:
> This patch add ClearPageDirty() to clear AnonPage dirty flag,
> if not clear page dirty for this anon page, the page will never be
> treated as freeable. we also make sure the shared AnonPage is not
> freeable, we implement it by dirty all copyed AnonPage pte,
> so that make sure the Anonpage will not become freeable, unless
> all process which shared this page call madvise_free syscall.
I am not able to parse this text.
> Another change is that we also handle file-mapped pages: we just clear
> the pte young bit for file mappings. This is useful because it makes the
> reclaim path move file pages into the inactive LRU list more
> aggressively.
This doesn't belong to this patch. If file private mappings should allow
MADV_FREE is a separate topic and should be discussed independently.
>
> Signed-off-by: Yalin Wang <yalin.wang@sonymobile.com>
> ---
> mm/madvise.c | 26 +++++++++++++++-----------
> mm/memory.c | 12 ++++++++++--
> 2 files changed, 25 insertions(+), 13 deletions(-)
>
> diff --git a/mm/madvise.c b/mm/madvise.c
> index 6d0fcb8..712756b 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -299,30 +299,38 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
> page = vm_normal_page(vma, addr, ptent);
> if (!page)
> continue;
> + if (!PageAnon(page))
> + goto set_pte;
> + if (!trylock_page(page))
> + continue;
>
> if (PageSwapCache(page)) {
> - if (!trylock_page(page))
> - continue;
> -
> if (!try_to_free_swap(page)) {
> unlock_page(page);
> continue;
> }
> -
> - ClearPageDirty(page);
> - unlock_page(page);
> }
>
> /*
> + * We clear the page dirty flag for an anon page whether or not it
> + * is in the swap cache; an anon page not in the swap cache can also
> + * have the dirty flag set. This happens when an anon page is
> + * removed from the swap cache by try_to_free_swap().
> + */
> + ClearPageDirty(page);
> + unlock_page(page);
> + /*
> * Some of architecture(ex, PPC) don't update TLB
> * with set_pte_at and tlb_remove_tlb_entry so for
> * the portability, remap the pte with old|clean
> * after pte clearing.
> */
> +set_pte:
> ptent = ptep_get_and_clear_full(mm, addr, pte,
> tlb->fullmm);
> ptent = pte_mkold(ptent);
> - ptent = pte_mkclean(ptent);
> + if (PageAnon(page))
> + ptent = pte_mkclean(ptent);
> set_pte_at(mm, addr, pte, ptent);
> tlb_remove_tlb_entry(tlb, pte, addr);
> }
> @@ -364,10 +372,6 @@ static int madvise_free_single_vma(struct vm_area_struct *vma,
> if (vma->vm_flags & (VM_LOCKED|VM_HUGETLB|VM_PFNMAP))
> return -EINVAL;
>
> - /* MADV_FREE works for only anon vma at the moment */
> - if (vma->vm_file)
> - return -EINVAL;
> -
> start = max(vma->vm_start, start_addr);
> if (start >= vma->vm_end)
> return -EINVAL;
> diff --git a/mm/memory.c b/mm/memory.c
> index 8068893..3d949b3 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -874,10 +874,18 @@ copy_one_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> if (page) {
> get_page(page);
> page_dup_rmap(page);
> - if (PageAnon(page))
> + if (PageAnon(page)) {
> + /*
> + * We dirty the copied pte for an anon page; this is
> + * useful for madvise_free_pte_range(), since it prevents
> + * a shared anon page from being freed by the madvise_free
> + * syscall.
> + */
> + pte = pte_mkdirty(pte);
> rss[MM_ANONPAGES]++;
> - else
> + } else {
> rss[MM_FILEPAGES]++;
> + }
> }
>
> out_set_pte:
> --
> 2.2.2
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/
--
Michal Hocko
SUSE Labs
^ permalink raw reply [flat|nested] 8+ messages in thread
* [RFC V3] mm: change mm_advise_free to clear page dirty
2015-03-02 12:38 ` Michal Hocko
@ 2015-03-03 2:06 ` Wang, Yalin
0 siblings, 0 replies; 8+ messages in thread
From: Wang, Yalin @ 2015-03-03 2:06 UTC (permalink / raw)
To: 'Michal Hocko', 'Minchan Kim',
'Andrew Morton', 'linux-kernel@vger.kernel.org',
'linux-mm@kvack.org', 'Rik van Riel',
'Johannes Weiner', 'Mel Gorman',
'Shaohua Li'
This patch adds ClearPageDirty() to clear the dirty flag of anonymous
pages; if we do not clear the page dirty flag for an anon page, the page
will never be treated as freeable. We also make sure a shared anon page
is not freeable; we implement this by dirtying the pte of every copied
anon page, so that the page cannot become freeable unless every process
sharing it calls the madvise_free syscall.
Signed-off-by: Yalin Wang <yalin.wang@sonymobile.com>
---
mm/madvise.c | 16 +++++++++-------
mm/memory.c | 12 ++++++++++--
2 files changed, 19 insertions(+), 9 deletions(-)
diff --git a/mm/madvise.c b/mm/madvise.c
index 6d0fcb8..b61070d 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -297,23 +297,25 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
continue;
page = vm_normal_page(vma, addr, ptent);
- if (!page)
+ if (!page || !trylock_page(page))
continue;
if (PageSwapCache(page)) {
- if (!trylock_page(page))
- continue;
-
if (!try_to_free_swap(page)) {
unlock_page(page);
continue;
}
-
- ClearPageDirty(page);
- unlock_page(page);
}
/*
+ * We clear the page dirty flag for an anon page whether or not it
+ * is in the swap cache; an anon page not in the swap cache can also
+ * have the dirty flag set. This happens when an anon page is
+ * removed from the swap cache by try_to_free_swap().
+ */
+ ClearPageDirty(page);
+ unlock_page(page);
+ /*
* Some of architecture(ex, PPC) don't update TLB
* with set_pte_at and tlb_remove_tlb_entry so for
* the portability, remap the pte with old|clean
diff --git a/mm/memory.c b/mm/memory.c
index 8068893..3d949b3 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -874,10 +874,18 @@ copy_one_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
if (page) {
get_page(page);
page_dup_rmap(page);
- if (PageAnon(page))
+ if (PageAnon(page)) {
+ /*
+ * We dirty the copied pte for an anon page; this is
+ * useful for madvise_free_pte_range(), since it prevents
+ * a shared anon page from being freed by the madvise_free
+ * syscall.
+ */
+ pte = pte_mkdirty(pte);
rss[MM_ANONPAGES]++;
- else
+ } else {
rss[MM_FILEPAGES]++;
+ }
}
out_set_pte:
--
2.2.2
^ permalink raw reply related [flat|nested] 8+ messages in thread
end of thread, other threads:[~2015-03-09 0:57 UTC | newest]
Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-03-03 3:25 [RFC V3] mm: change mm_advise_free to clear page dirty Minchan Kim
2015-03-03 3:59 ` Wang, Yalin
2015-03-03 4:14 ` Minchan Kim
2015-03-03 6:46 ` Wang, Yalin
2015-03-03 13:40 ` Minchan Kim
2015-03-05 15:35 ` Michal Hocko
2015-03-09 0:57 ` Minchan Kim
-- strict thread matches above, loose matches on Subject: below --
2015-02-24 8:18 [PATCH RFC 1/4] mm: throttle MADV_FREE Minchan Kim
2015-02-24 15:43 ` Michal Hocko
2015-02-25 0:08 ` Minchan Kim
2015-02-27 3:37 ` [RFC] mm: change mm_advise_free to clear page dirty Wang, Yalin
2015-02-27 21:02 ` Michal Hocko
2015-02-28 2:11 ` Wang, Yalin
2015-02-28 6:01 ` [RFC V2] " Wang, Yalin
2015-03-02 12:38 ` Michal Hocko
2015-03-03 2:06 ` [RFC V3] " Wang, Yalin