* [patch resend] mm: page_alloc: fix zone allocation fairness on UP
From: Johannes Weiner @ 2014-09-09 13:15 UTC
  To: Andrew Morton
  Cc: Mel Gorman, Vlastimil Babka, Leon Romanovsky, linux-mm, linux-kernel

The zone allocation batches can easily underflow due to higher-order
allocations or spills to remote nodes.  On SMP that's fine, because
underflows are expected from concurrency and dealt with by returning
0.  But on UP, zone_page_state will just return a wrapped unsigned
long, which gets past the <= 0 check, and the zone is then considered
eligible until its watermarks are hit.

3a025760fc15 ("mm: page_alloc: spill to remote nodes before waking
kswapd") already made the counter-resetting use atomic_long_read() to
accommodate underflows from remote spills, but it didn't go all the
way with it.  Make it clear that these batches are expected to go
negative regardless of concurrency, and use atomic_long_read()
everywhere.
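
For reference, zone_page_state() looks roughly like this (simplified
from include/linux/vmstat.h of this era):

static inline unsigned long zone_page_state(struct zone *zone,
					enum zone_stat_item item)
{
	long x = atomic_long_read(&zone->vm_stat[item]);
#ifdef CONFIG_SMP
	if (x < 0)
		x = 0;	/* concurrent underflows are clamped on SMP */
#endif
	return x;	/* on UP, a batch of -1 comes back as ULONG_MAX */
}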

Fixes: 81c0a2bb515f ("mm: page_alloc: fair zone allocator policy")
Reported-by: Vlastimil Babka <vbabka@suse.cz>
Reported-by: Leon Romanovsky <leon@leon.nu>
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Mel Gorman <mgorman@suse.de>
Cc: "3.12+" <stable@kernel.org>
---
 mm/page_alloc.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

Sorry I forgot to CC you, Leon.  Resend with updated Tags.

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 18cee0d4c8a2..eee961958021 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1612,7 +1612,7 @@ again:
 	}
 
 	__mod_zone_page_state(zone, NR_ALLOC_BATCH, -(1 << order));
-	if (zone_page_state(zone, NR_ALLOC_BATCH) == 0 &&
+	if (atomic_long_read(&zone->vm_stat[NR_ALLOC_BATCH]) <= 0 &&
 	    !zone_is_fair_depleted(zone))
 		zone_set_flag(zone, ZONE_FAIR_DEPLETED);
 
@@ -5701,9 +5701,8 @@ static void __setup_per_zone_wmarks(void)
 		zone->watermark[WMARK_HIGH] = min_wmark_pages(zone) + (tmp >> 1);
 
 		__mod_zone_page_state(zone, NR_ALLOC_BATCH,
-				      high_wmark_pages(zone) -
-				      low_wmark_pages(zone) -
-				      zone_page_state(zone, NR_ALLOC_BATCH));
+			high_wmark_pages(zone) - low_wmark_pages(zone) -
+			atomic_long_read(&zone->vm_stat[NR_ALLOC_BATCH]));
 
 		setup_zone_migrate_reserve(zone);
 		spin_unlock_irqrestore(&zone->lock, flags);
-- 
2.0.4


* Re: [patch resend] mm: page_alloc: fix zone allocation fairness on UP
From: Leon Romanovsky @ 2014-09-10  4:32 UTC
  To: Johannes Weiner
  Cc: Andrew Morton, Mel Gorman, Vlastimil Babka, Linux-MM, linux-kernel

Hi Johannes,


On Tue, Sep 9, 2014 at 4:15 PM, Johannes Weiner <hannes@cmpxchg.org> wrote:

> The zone allocation batches can easily underflow due to higher-order
> allocations or spills to remote nodes.  On SMP that's fine, because
> underflows are expected from concurrency and dealt with by returning
> 0.  But on UP, zone_page_state will just return a wrapped unsigned
> long, which gets past the <= 0 check, and the zone is then considered
> eligible until its watermarks are hit.
>
> 3a025760fc15 ("mm: page_alloc: spill to remote nodes before waking
> kswapd") already made the counter-resetting use atomic_long_read() to
> accommodate underflows from remote spills, but it didn't go all the
> way with it.  Make it clear that these batches are expected to go
> negative regardless of concurrency, and use atomic_long_read()
> everywhere.
>
> Fixes: 81c0a2bb515f ("mm: page_alloc: fair zone allocator policy")
> Reported-by: Vlastimil Babka <vbabka@suse.cz>
> Reported-by: Leon Romanovsky <leon@leon.nu>
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> Acked-by: Mel Gorman <mgorman@suse.de>
> Cc: "3.12+" <stable@kernel.org>
> ---
>  mm/page_alloc.c | 7 +++----
>  1 file changed, 3 insertions(+), 4 deletions(-)
>
> Sorry I forgot to CC you, Leon.  Resend with updated Tags.
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 18cee0d4c8a2..eee961958021 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1612,7 +1612,7 @@ again:
>         }
>
>         __mod_zone_page_state(zone, NR_ALLOC_BATCH, -(1 << order));
> -       if (zone_page_state(zone, NR_ALLOC_BATCH) == 0 &&
> +       if (atomic_long_read(&zone->vm_stat[NR_ALLOC_BATCH]) <= 0 &&
>             !zone_is_fair_depleted(zone))
>                 zone_set_flag(zone, ZONE_FAIR_DEPLETED);
>
> @@ -5701,9 +5701,8 @@ static void __setup_per_zone_wmarks(void)
>                 zone->watermark[WMARK_HIGH] = min_wmark_pages(zone) + (tmp >> 1);
>
>                 __mod_zone_page_state(zone, NR_ALLOC_BATCH,
> -                                     high_wmark_pages(zone) -
> -                                     low_wmark_pages(zone) -
> -                                     zone_page_state(zone, NR_ALLOC_BATCH));
> +                       high_wmark_pages(zone) - low_wmark_pages(zone) -
> +                       atomic_long_read(&zone->vm_stat[NR_ALLOC_BATCH]));
>
>                 setup_zone_migrate_reserve(zone);
>                 spin_unlock_irqrestore(&zone->lock, flags);
>

I think a better way would be to apply Mel's patch
https://lkml.org/lkml/2014/9/8/214, which fixes the zone_page_state
shadow-casting issue, and then convert every
atomic_long_read(&zone->vm_stat[NR_ALLOC_BATCH]) to
zone_page_state(zone, NR_ALLOC_BATCH). That would unify access to
vm_stat.



> --
> 2.0.4



-- 
Leon Romanovsky | Independent Linux Consultant
        www.leon.nu | leon@leon.nu


* Re: [patch resend] mm: page_alloc: fix zone allocation fairness on UP
From: Johannes Weiner @ 2014-09-11 12:36 UTC
  To: Leon Romanovsky
  Cc: Andrew Morton, Mel Gorman, Vlastimil Babka, Linux-MM, linux-kernel

On Wed, Sep 10, 2014 at 07:32:20AM +0300, Leon Romanovsky wrote:
> Hi Johannes,
> 
> 
> On Tue, Sep 9, 2014 at 4:15 PM, Johannes Weiner <hannes@cmpxchg.org> wrote:
> 
> > The zone allocation batches can easily underflow due to higher-order
> > allocations or spills to remote nodes.  On SMP that's fine, because
> > underflows are expected from concurrency and dealt with by returning
> > 0.  But on UP, zone_page_state will just return a wrapped unsigned
> > long, which gets past the <= 0 check, and the zone is then considered
> > eligible until its watermarks are hit.
> >
> > 3a025760fc15 ("mm: page_alloc: spill to remote nodes before waking
> > kswapd") already made the counter-resetting use atomic_long_read() to
> > accommodate underflows from remote spills, but it didn't go all the
> > way with it.  Make it clear that these batches are expected to go
> > negative regardless of concurrency, and use atomic_long_read()
> > everywhere.
> >
> > Fixes: 81c0a2bb515f ("mm: page_alloc: fair zone allocator policy")
> > Reported-by: Vlastimil Babka <vbabka@suse.cz>
> > Reported-by: Leon Romanovsky <leon@leon.nu>
> > Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> > Acked-by: Mel Gorman <mgorman@suse.de>
> > Cc: "3.12+" <stable@kernel.org>
> > ---
> >  mm/page_alloc.c | 7 +++----
> >  1 file changed, 3 insertions(+), 4 deletions(-)
> >
> > Sorry I forgot to CC you, Leon.  Resend with updated Tags.
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 18cee0d4c8a2..eee961958021 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -1612,7 +1612,7 @@ again:
> >         }
> >
> >         __mod_zone_page_state(zone, NR_ALLOC_BATCH, -(1 << order));
> > -       if (zone_page_state(zone, NR_ALLOC_BATCH) == 0 &&
> > +       if (atomic_long_read(&zone->vm_stat[NR_ALLOC_BATCH]) <= 0 &&
> >             !zone_is_fair_depleted(zone))
> >                 zone_set_flag(zone, ZONE_FAIR_DEPLETED);
> >
> > @@ -5701,9 +5701,8 @@ static void __setup_per_zone_wmarks(void)
> >                 zone->watermark[WMARK_HIGH] = min_wmark_pages(zone) + (tmp >> 1);
> >
> >                 __mod_zone_page_state(zone, NR_ALLOC_BATCH,
> > -                                     high_wmark_pages(zone) -
> > -                                     low_wmark_pages(zone) -
> > -                                     zone_page_state(zone, NR_ALLOC_BATCH));
> > +                       high_wmark_pages(zone) - low_wmark_pages(zone) -
> > +                       atomic_long_read(&zone->vm_stat[NR_ALLOC_BATCH]));
> >
> >                 setup_zone_migrate_reserve(zone);
> >                 spin_unlock_irqrestore(&zone->lock, flags);
> >
> 
> I think a better way would be to apply Mel's patch
> https://lkml.org/lkml/2014/9/8/214, which fixes the zone_page_state
> shadow-casting issue, and then convert every
> atomic_long_read(&zone->vm_stat[NR_ALLOC_BATCH]) to
> zone_page_state(zone, NR_ALLOC_BATCH). That would unify access to
> vm_stat.

It's not that simple.  The counter can go way negative, and we need
that negative number, not 0, to calculate the reset delta.  As I said
in response to Mel's patch, we could make the vmstat API signed, but
I'm not convinced that is reasonable when the vast majority of use
cases never see negative values.
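
To illustrate with made-up numbers: suppose the reset target
high_wmark - low_wmark is 4096 pages and the batch has underflowed
to -512.

	long target = 4096;	/* high_wmark - low_wmark, hypothetical */
	long batch  = -512;	/* underflowed NR_ALLOC_BATCH */

	/* If the read were clamped to 0, the reset would fall short: */
	long clamped = batch + (target - 0);	/* 3584, 512 pages short */

	/* The raw atomic_long_read() keeps the sign, and the reset
	 * lands the counter exactly on the target: */
	long raw = batch + (target - batch);	/* exactly 4096 */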


* Re: [patch resend] mm: page_alloc: fix zone allocation fairness on UP
From: Leon Romanovsky @ 2014-09-11 12:50 UTC
  To: Johannes Weiner
  Cc: Andrew Morton, Mel Gorman, Vlastimil Babka, Linux-MM, linux-kernel

On Thu, Sep 11, 2014 at 3:36 PM, Johannes Weiner <hannes@cmpxchg.org> wrote:
> On Wed, Sep 10, 2014 at 07:32:20AM +0300, Leon Romanovsky wrote:
>> Hi Johannes,
>>
>>
>> On Tue, Sep 9, 2014 at 4:15 PM, Johannes Weiner <hannes@cmpxchg.org> wrote:
>>
>> > The zone allocation batches can easily underflow due to higher-order
>> > allocations or spills to remote nodes.  On SMP that's fine, because
>> > underflows are expected from concurrency and dealt with by returning
>> > 0.  But on UP, zone_page_state will just return a wrapped unsigned
>> > long, which gets past the <= 0 check, and the zone is then considered
>> > eligible until its watermarks are hit.
>> >
>> > 3a025760fc15 ("mm: page_alloc: spill to remote nodes before waking
>> > kswapd") already made the counter-resetting use atomic_long_read() to
>> > accommodate underflows from remote spills, but it didn't go all the
>> > way with it.  Make it clear that these batches are expected to go
>> > negative regardless of concurrency, and use atomic_long_read()
>> > everywhere.
>> >
>> > Fixes: 81c0a2bb515f ("mm: page_alloc: fair zone allocator policy")
>> > Reported-by: Vlastimil Babka <vbabka@suse.cz>
>> > Reported-by: Leon Romanovsky <leon@leon.nu>
>> > Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
>> > Acked-by: Mel Gorman <mgorman@suse.de>
>> > Cc: "3.12+" <stable@kernel.org>
>> > ---
>> >  mm/page_alloc.c | 7 +++----
>> >  1 file changed, 3 insertions(+), 4 deletions(-)
>> >
>> > Sorry I forgot to CC you, Leon.  Resend with updated Tags.
>> >
>> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> > index 18cee0d4c8a2..eee961958021 100644
>> > --- a/mm/page_alloc.c
>> > +++ b/mm/page_alloc.c
>> > @@ -1612,7 +1612,7 @@ again:
>> >         }
>> >
>> >         __mod_zone_page_state(zone, NR_ALLOC_BATCH, -(1 << order));
>> > -       if (zone_page_state(zone, NR_ALLOC_BATCH) == 0 &&
>> > +       if (atomic_long_read(&zone->vm_stat[NR_ALLOC_BATCH]) <= 0 &&
>> >             !zone_is_fair_depleted(zone))
>> >                 zone_set_flag(zone, ZONE_FAIR_DEPLETED);
>> >
>> > @@ -5701,9 +5701,8 @@ static void __setup_per_zone_wmarks(void)
>> >                 zone->watermark[WMARK_HIGH] = min_wmark_pages(zone) + (tmp >> 1);
>> >
>> >                 __mod_zone_page_state(zone, NR_ALLOC_BATCH,
>> > -                                     high_wmark_pages(zone) -
>> > -                                     low_wmark_pages(zone) -
>> > -                                     zone_page_state(zone, NR_ALLOC_BATCH));
>> > +                       high_wmark_pages(zone) - low_wmark_pages(zone) -
>> > +                       atomic_long_read(&zone->vm_stat[NR_ALLOC_BATCH]));
>> >
>> >                 setup_zone_migrate_reserve(zone);
>> >                 spin_unlock_irqrestore(&zone->lock, flags);
>> >
>>
>> I think a better way would be to apply Mel's patch
>> https://lkml.org/lkml/2014/9/8/214, which fixes the zone_page_state
>> shadow-casting issue, and then convert every
>> atomic_long_read(&zone->vm_stat[NR_ALLOC_BATCH]) to
>> zone_page_state(zone, NR_ALLOC_BATCH). That would unify access to
>> vm_stat.
>
> It's not that simple.  The counter can go way negative, and we need
> that negative number, not 0, to calculate the reset delta.  As I said
> in response to Mel's patch, we could make the vmstat API signed, but
> I'm not convinced that is reasonable when the vast majority of use
> cases never see negative values.
You are right; I missed that NR_ALLOC_BATCH is used as part of the
calculation:
+                       high_wmark_pages(zone) - low_wmark_pages(zone) -
+                       atomic_long_read(&zone->vm_stat[NR_ALLOC_BATCH]));
Sorry.


-- 
Leon Romanovsky | Independent Linux Consultant
        www.leon.nu | leon@leon.nu


* Re: [patch resend] mm: page_alloc: fix zone allocation fairness on UP
From: Christoph Lameter @ 2014-10-03  8:35 UTC
  To: Leon Romanovsky
  Cc: Johannes Weiner, Andrew Morton, Mel Gorman, Vlastimil Babka,
	Linux-MM, linux-kernel

On Thu, 11 Sep 2014, Leon Romanovsky wrote:

> >> I think a better way would be to apply Mel's patch
> >> https://lkml.org/lkml/2014/9/8/214, which fixes the zone_page_state
> >> shadow-casting issue, and then convert every
> >> atomic_long_read(&zone->vm_stat[NR_ALLOC_BATCH]) to
> >> zone_page_state(zone, NR_ALLOC_BATCH). That would unify access to
> >> vm_stat.
> >
> > It's not that simple.  The counter can go way negative, and we need
> > that negative number, not 0, to calculate the reset delta.  As I said
> > in response to Mel's patch, we could make the vmstat API signed, but
> > I'm not convinced that is reasonable when the vast majority of use
> > cases never see negative values.
> You are right; I missed that NR_ALLOC_BATCH is used as part of the
> calculation:
> +                       high_wmark_pages(zone) - low_wmark_pages(zone) -
> +                       atomic_long_read(&zone->vm_stat[NR_ALLOC_BATCH]));

How about creating a __zone_page_state(), i.e. a zone_page_state()
without the 0 clamp? That would be much nicer and would move this
logic to a central place. Given the nastiness of this issue, there are
bound to be more fixes coming up.
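
A sketch of what that could look like (hypothetical, not in the tree
at this point; name and placement as suggested above):

/*
 * Raw, signed read of a vmstat counter: no underflow clamp, so
 * callers that care about negative batches see the real value.
 */
static inline long __zone_page_state(struct zone *zone,
				     enum zone_stat_item item)
{
	return atomic_long_read(&zone->vm_stat[item]);
}

The two NR_ALLOC_BATCH sites in the patch could then call
__zone_page_state(zone, NR_ALLOC_BATCH) instead of open-coding the
atomic_long_read().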


