* [PATCH] mm: hugetlb: bail out unmapping after serving reference page
@ 2012-02-22 12:35 Hillf Danton
2012-02-22 21:06 ` Andrew Morton
From: Hillf Danton @ 2012-02-22 12:35 UTC (permalink / raw)
To: Linux-MM
Cc: LKML, Michal Hocko, KAMEZAWA Hiroyuki, Hugh Dickins,
Andrew Morton, Hillf Danton
When unmapping a given VM range, we can bail out once the supplied
reference page has been unmapped, which is a minor optimization.
Signed-off-by: Hillf Danton <dhillf@gmail.com>
---
--- a/mm/hugetlb.c Wed Feb 22 19:34:12 2012
+++ b/mm/hugetlb.c Wed Feb 22 19:50:26 2012
@@ -2280,6 +2280,9 @@ void __unmap_hugepage_range(struct vm_ar
if (pte_dirty(pte))
set_page_dirty(page);
list_add(&page->lru, &page_list);
+
+ if (page == ref_page)
+ break;
}
spin_unlock(&mm->page_table_lock);
flush_tlb_range(vma, start, end);
--
* Re: [PATCH] mm: hugetlb: bail out unmapping after serving reference page
2012-02-22 12:35 [PATCH] mm: hugetlb: bail out unmapping after serving reference page Hillf Danton
@ 2012-02-22 21:06 ` Andrew Morton
2012-02-23 13:05 ` Hillf Danton
From: Andrew Morton @ 2012-02-22 21:06 UTC (permalink / raw)
To: Hillf Danton
Cc: Linux-MM, LKML, Michal Hocko, KAMEZAWA Hiroyuki, Hugh Dickins
On Wed, 22 Feb 2012 20:35:34 +0800
Hillf Danton <dhillf@gmail.com> wrote:
> When unmapping a given VM range, we can bail out once the supplied
> reference page has been unmapped, which is a minor optimization.
>
> Signed-off-by: Hillf Danton <dhillf@gmail.com>
> ---
>
> --- a/mm/hugetlb.c Wed Feb 22 19:34:12 2012
> +++ b/mm/hugetlb.c Wed Feb 22 19:50:26 2012
> @@ -2280,6 +2280,9 @@ void __unmap_hugepage_range(struct vm_ar
> if (pte_dirty(pte))
> set_page_dirty(page);
> list_add(&page->lru, &page_list);
> +
> + if (page == ref_page)
> + break;
> }
> spin_unlock(&mm->page_table_lock);
> flush_tlb_range(vma, start, end);
Perhaps add a little comment to this explaining what's going on?
It would be sufficient to do
if (ref_page)
break;
This is more efficient, and doesn't make people worry about whether
this value of `page' is the same as the one which
pte_page(huge_ptep_get()) earlier returned.
Why do we evaluate `page' twice inside that loop anyway? And why do we
check for huge_pte_none() twice? It looks all messed up.
* Re: [PATCH] mm: hugetlb: bail out unmapping after serving reference page
2012-02-22 21:06 ` Andrew Morton
@ 2012-02-23 13:05 ` Hillf Danton
2012-02-23 20:12 ` Andrew Morton
From: Hillf Danton @ 2012-02-23 13:05 UTC (permalink / raw)
To: Andrew Morton
Cc: Linux-MM, LKML, Michal Hocko, KAMEZAWA Hiroyuki, Hugh Dickins
On Thu, Feb 23, 2012 at 5:06 AM, Andrew Morton
<akpm@linux-foundation.org> wrote:
>
> Perhaps add a little comment to this explaining what's going on?
>
>
> It would be sufficient to do
>
> if (ref_page)
> break;
>
> This is more efficient, and doesn't make people worry about whether
> this value of `page' is the same as the one which
> pte_page(huge_ptep_get()) earlier returned.
>
Hi Andrew
The patch is re-prepared:
===cut here===
From: Hillf Danton <dhillf@gmail.com>
Subject: [PATCH] mm: hugetlb: bail out unmapping after serving reference page
When unmapping a given VM range, we can bail out once the supplied
reference page has been unmapped, which is a minor optimization.
Signed-off-by: Hillf Danton <dhillf@gmail.com>
---
--- a/mm/hugetlb.c Wed Feb 22 19:34:12 2012
+++ b/mm/hugetlb.c Thu Feb 23 20:13:06 2012
@@ -2280,6 +2280,10 @@ void __unmap_hugepage_range(struct vm_ar
if (pte_dirty(pte))
set_page_dirty(page);
list_add(&page->lru, &page_list);
+
+ /* Bail out after unmapping reference page if supplied */
+ if (ref_page)
+ break;
}
spin_unlock(&mm->page_table_lock);
flush_tlb_range(vma, start, end);
--
> Why do we evaluate `page' twice inside that loop anyway? And why do we
> check for huge_pte_none() twice? It looks all messed up.
>
and a follow-up cleanup is also attached.
Thanks
Hillf
===cut here===
From: Hillf Danton <dhillf@gmail.com>
Subject: [PATCH] mm: hugetlb: cleanup duplicated code in unmapping vm range
When unmapping a given VM range, a couple of calls, such as pte_page()
and huge_pte_none(), are duplicated, so clean them up by hoisting them
to a single place.
Signed-off-by: Hillf Danton <dhillf@gmail.com>
---
--- a/mm/hugetlb.c Thu Feb 23 20:13:06 2012
+++ b/mm/hugetlb.c Thu Feb 23 20:30:16 2012
@@ -2245,16 +2245,23 @@ void __unmap_hugepage_range(struct vm_ar
if (huge_pmd_unshare(mm, &address, ptep))
continue;
+ pte = huge_ptep_get(ptep);
+ if (huge_pte_none(pte))
+ continue;
+
+ /*
+ * HWPoisoned hugepage is already unmapped and dropped reference
+ */
+ if (unlikely(is_hugetlb_entry_hwpoisoned(pte)))
+ continue;
+
+ page = pte_page(pte);
/*
* If a reference page is supplied, it is because a specific
* page is being unmapped, not a range. Ensure the page we
* are about to unmap is the actual page of interest.
*/
if (ref_page) {
- pte = huge_ptep_get(ptep);
- if (huge_pte_none(pte))
- continue;
- page = pte_page(pte);
if (page != ref_page)
continue;
@@ -2267,16 +2274,6 @@ void __unmap_hugepage_range(struct vm_ar
}
pte = huge_ptep_get_and_clear(mm, address, ptep);
- if (huge_pte_none(pte))
- continue;
-
- /*
- * HWPoisoned hugepage is already unmapped and dropped reference
- */
- if (unlikely(is_hugetlb_entry_hwpoisoned(pte)))
- continue;
-
- page = pte_page(pte);
if (pte_dirty(pte))
set_page_dirty(page);
list_add(&page->lru, &page_list);
--
* Re: [PATCH] mm: hugetlb: bail out unmapping after serving reference page
2012-02-23 13:05 ` Hillf Danton
@ 2012-02-23 20:12 ` Andrew Morton
2012-03-21 21:00 ` Andrew Morton
From: Andrew Morton @ 2012-02-23 20:12 UTC (permalink / raw)
To: Hillf Danton
Cc: Linux-MM, LKML, Michal Hocko, KAMEZAWA Hiroyuki, Hugh Dickins
On Thu, 23 Feb 2012 21:05:41 +0800
Hillf Danton <dhillf@gmail.com> wrote:
> and a follow-up cleanup is also attached.
Please, never put more than one patch in an email - it is rather a
pain to manually unpick everything.
> When unmapping a given VM range, a couple of calls, such as pte_page()
> and huge_pte_none(), are duplicated, so clean them up by hoisting them
> to a single place.
>
> Signed-off-by: Hillf Danton <dhillf@gmail.com>
> ---
>
> --- a/mm/hugetlb.c Thu Feb 23 20:13:06 2012
> +++ b/mm/hugetlb.c Thu Feb 23 20:30:16 2012
> @@ -2245,16 +2245,23 @@ void __unmap_hugepage_range(struct vm_ar
> if (huge_pmd_unshare(mm, &address, ptep))
> continue;
>
> + pte = huge_ptep_get(ptep);
> + if (huge_pte_none(pte))
> + continue;
> +
> + /*
> + * HWPoisoned hugepage is already unmapped and dropped reference
> + */
> + if (unlikely(is_hugetlb_entry_hwpoisoned(pte)))
> + continue;
> +
> + page = pte_page(pte);
> /*
> * If a reference page is supplied, it is because a specific
> * page is being unmapped, not a range. Ensure the page we
> * are about to unmap is the actual page of interest.
> */
> if (ref_page) {
> - pte = huge_ptep_get(ptep);
> - if (huge_pte_none(pte))
> - continue;
> - page = pte_page(pte);
> if (page != ref_page)
> continue;
>
> @@ -2267,16 +2274,6 @@ void __unmap_hugepage_range(struct vm_ar
> }
>
> pte = huge_ptep_get_and_clear(mm, address, ptep);
> - if (huge_pte_none(pte))
> - continue;
> -
> - /*
> - * HWPoisoned hugepage is already unmapped and dropped reference
> - */
> - if (unlikely(is_hugetlb_entry_hwpoisoned(pte)))
> - continue;
> -
> - page = pte_page(pte);
> if (pte_dirty(pte))
> set_page_dirty(page);
> list_add(&page->lru, &page_list);
This changes behaviour when ref_page refers to a hwpoisoned page.
* Re: [PATCH] mm: hugetlb: bail out unmapping after serving reference page
2012-02-23 20:12 ` Andrew Morton
@ 2012-03-21 21:00 ` Andrew Morton
2012-03-22 13:21 ` Hillf Danton
From: Andrew Morton @ 2012-03-21 21:00 UTC (permalink / raw)
To: Hillf Danton, Linux-MM, LKML, Michal Hocko, KAMEZAWA Hiroyuki,
Hugh Dickins
On Thu, 23 Feb 2012 12:12:38 -0800
Andrew Morton <akpm@linux-foundation.org> wrote:
> On Thu, 23 Feb 2012 21:05:41 +0800
> Hillf Danton <dhillf@gmail.com> wrote:
>
> > and a follow-up cleanup is also attached.
>
> Please, never put more than one patch in an email - it is rather a
> pain to manually unpick everything.
>
> > When unmapping a given VM range, a couple of calls, such as pte_page()
> > and huge_pte_none(), are duplicated, so clean them up by hoisting them
> > to a single place.
> >
> > Signed-off-by: Hillf Danton <dhillf@gmail.com>
> > ---
> >
> > --- a/mm/hugetlb.c Thu Feb 23 20:13:06 2012
> > +++ b/mm/hugetlb.c Thu Feb 23 20:30:16 2012
> > @@ -2245,16 +2245,23 @@ void __unmap_hugepage_range(struct vm_ar
> > if (huge_pmd_unshare(mm, &address, ptep))
> > continue;
> >
> > + pte = huge_ptep_get(ptep);
> > + if (huge_pte_none(pte))
> > + continue;
> > +
> > + /*
> > + * HWPoisoned hugepage is already unmapped and dropped reference
> > + */
> > + if (unlikely(is_hugetlb_entry_hwpoisoned(pte)))
> > + continue;
> > +
> > + page = pte_page(pte);
> > /*
> > * If a reference page is supplied, it is because a specific
> > * page is being unmapped, not a range. Ensure the page we
> > * are about to unmap is the actual page of interest.
> > */
> > if (ref_page) {
> > - pte = huge_ptep_get(ptep);
> > - if (huge_pte_none(pte))
> > - continue;
> > - page = pte_page(pte);
> > if (page != ref_page)
> > continue;
> >
> > @@ -2267,16 +2274,6 @@ void __unmap_hugepage_range(struct vm_ar
> > }
> >
> > pte = huge_ptep_get_and_clear(mm, address, ptep);
> > - if (huge_pte_none(pte))
> > - continue;
> > -
> > - /*
> > - * HWPoisoned hugepage is already unmapped and dropped reference
> > - */
> > - if (unlikely(is_hugetlb_entry_hwpoisoned(pte)))
> > - continue;
> > -
> > - page = pte_page(pte);
> > if (pte_dirty(pte))
> > set_page_dirty(page);
> > list_add(&page->lru, &page_list);
>
> This changes behaviour when ref_page refers to a hwpoisoned page.
Respond, please?
* Re: [PATCH] mm: hugetlb: bail out unmapping after serving reference page
2012-03-21 21:00 ` Andrew Morton
@ 2012-03-22 13:21 ` Hillf Danton
From: Hillf Danton @ 2012-03-22 13:21 UTC (permalink / raw)
To: Andrew Morton
Cc: Linux-MM, LKML, Michal Hocko, KAMEZAWA Hiroyuki, Hugh Dickins
On Thu, Mar 22, 2012 at 5:00 AM, Andrew Morton
<akpm@linux-foundation.org> wrote:
> On Thu, 23 Feb 2012 12:12:38 -0800
> Andrew Morton <akpm@linux-foundation.org> wrote:
>
>> On Thu, 23 Feb 2012 21:05:41 +0800
>> Hillf Danton <dhillf@gmail.com> wrote:
>>
>> > and a follow-up cleanup is also attached.
>>
>> Please, never put more than one patch in an email - it is rather a
>> pain to manually unpick everything.
>>
>> > When unmapping a given VM range, a couple of calls, such as pte_page()
>> > and huge_pte_none(), are duplicated, so clean them up by hoisting them
>> > to a single place.
>> >
>> > Signed-off-by: Hillf Danton <dhillf@gmail.com>
>> > ---
>> >
>> > --- a/mm/hugetlb.c Thu Feb 23 20:13:06 2012
>> > +++ b/mm/hugetlb.c Thu Feb 23 20:30:16 2012
>> > @@ -2245,16 +2245,23 @@ void __unmap_hugepage_range(struct vm_ar
>> > if (huge_pmd_unshare(mm, &address, ptep))
>> > continue;
>> >
>> > + pte = huge_ptep_get(ptep);
>> > + if (huge_pte_none(pte))
>> > + continue;
>> > +
>> > + /*
>> > + * HWPoisoned hugepage is already unmapped and dropped reference
>> > + */
>> > + if (unlikely(is_hugetlb_entry_hwpoisoned(pte)))
>> > + continue;
>> > +
>> > + page = pte_page(pte);
>> > /*
>> > * If a reference page is supplied, it is because a specific
>> > * page is being unmapped, not a range. Ensure the page we
>> > * are about to unmap is the actual page of interest.
>> > */
>> > if (ref_page) {
>> > - pte = huge_ptep_get(ptep);
>> > - if (huge_pte_none(pte))
>> > - continue;
>> > - page = pte_page(pte);
>> > if (page != ref_page)
>> > continue;
>> >
>> > @@ -2267,16 +2274,6 @@ void __unmap_hugepage_range(struct vm_ar
>> > }
>> >
>> > pte = huge_ptep_get_and_clear(mm, address, ptep);
>> > - if (huge_pte_none(pte))
>> > - continue;
>> > -
>> > - /*
>> > - * HWPoisoned hugepage is already unmapped and dropped reference
>> > - */
>> > - if (unlikely(is_hugetlb_entry_hwpoisoned(pte)))
>> > - continue;
>> > -
>> > - page = pte_page(pte);
>> > if (pte_dirty(pte))
>> > set_page_dirty(page);
>> > list_add(&page->lru, &page_list);
>>
>> This changes behaviour when ref_page refers to a hwpoisoned page.
>
> Respond, please?
First, sorry for the late reply, Andrew.
As the comment says, a HWPoisoned hugepage is already unmapped;
even if ref_page is the HWPoisoned page, it is not added onto
page_list and no page_remove_rmap() is issued for it, so we end
up with no behavior change.
Thanks
-hd