* [RFC] vmscan: initialize sc->nr_reclaimed in do_try_to_free_pages()
@ 2009-02-09 22:24 ` Johannes Weiner
  0 siblings, 0 replies; 44+ messages in thread
From: Johannes Weiner @ 2009-02-09 22:24 UTC (permalink / raw)
  To: Rik van Riel
  Cc: William Lee Irwin III, KOSAKI Motohiro, Andrew Morton,
	linux-kernel, linux-mm

Commit a79311c14eae4bb946a97af25f3e1b17d625985d "vmscan: bail out of
direct reclaim after swap_cluster_max pages" moved the nr_reclaimed
counter into the scan control to accumulate the number of all
reclaimed pages in one direct reclaim invocation.

The commit neglected to adjust do_try_to_free_pages(), which now
fails to initialize sc.nr_reclaimed, so shrink_zone() decides whether
to bail out of the reclaim cycle based on an uninitialized value.

Fix it up by initializing the counter to zero before entering the
priority loop.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 mm/vmscan.c |    1 +
 1 file changed, 1 insertion(+)

The comment on the .nr_reclaimed field says it accumulates the reclaim
counter over ONE shrink_zones() call.  This means we should break out
if ONE shrink_zones() call alone reclaims more than swap_cluster_max
pages.

OTOH, the patch title suggests that we break out once ALL the
shrink_zones() calls in the priority loop together have reclaimed that
much, i.e. the reclaimed count accumulates over the whole prio loop,
not just over one zones iteration.

From the patch description I couldn't really tell what the intended
behaviour was.

So, should sc.nr_reclaimed be reset before the prio loop or in each
iteration of the prio loop?

Either this patch is wrong or the comment above .nr_reclaimed is.

And why didn't this have any observable effects?  Am I missing
something really obvious here?

--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1618,6 +1618,7 @@ static unsigned long do_try_to_free_page
 		}
 	}
 
+	sc->nr_reclaimed = 0;
 	for (priority = DEF_PRIORITY; priority >= 0; priority--) {
 		sc->nr_scanned = 0;
 		if (!priority)

* Re: [RFC] vmscan: initialize sc->nr_reclaimed in do_try_to_free_pages()
  2009-02-09 22:24 ` Johannes Weiner
@ 2009-02-10 10:47   ` MinChan Kim
  -1 siblings, 0 replies; 44+ messages in thread
From: MinChan Kim @ 2009-02-10 10:47 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Rik van Riel, William Lee Irwin III, KOSAKI Motohiro,
	Andrew Morton, linux-kernel, linux-mm

On Tue, Feb 10, 2009 at 7:24 AM, Johannes Weiner <hannes@cmpxchg.org> wrote:
> Commit a79311c14eae4bb946a97af25f3e1b17d625985d "vmscan: bail out of
> direct reclaim after swap_cluster_max pages" moved the nr_reclaimed
> counter into the scan control to accumulate the number of all
> reclaimed pages in one direct reclaim invocation.
>
> The commit missed to actually adjust do_try_to_free_pages() which now
> does not initialize sc.nr_reclaimed and makes shrink_zone() make
> assumptions on whether to bail out of the reclaim cycle based on an
> uninitialized value.
>
> Fix it up by initializing the counter to zero before entering the
> priority loop.
>
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> ---
>  mm/vmscan.c |    1 +
>  1 file changed, 1 insertion(+)
>
> The comment of the .nr_reclaimed field says it accumulates the reclaim
> counter over ONE shrink_zones() call.  This means, we should break out
> if ONE shrink_zones() call alone does more than swap_cluster_max.
>
> OTOH, the patch title suggests that we break out if ALL shrink_zones()
> calls in the priority loop have reclaimed that much.  I.e.
> accumulating the reclaimed number over the prio loop, not just over
> one zones iteration.
>
> From the patch description I couldn't really make sure what the
> intended behaviour was.
>
> So, should the sc.nr_reclaimed be reset before the prio loop or in
> each iteration of the prio loop?
>
> Either this patch is wrong or the comment above .nr_reclaimed is.
>
> And why didn't this have any observable effects?  Do I miss something

Nice catch!!
I think that's because the situation Rik described is unusual.

> really obvious here?


> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1618,6 +1618,7 @@ static unsigned long do_try_to_free_page
>                }
>        }
>
> +       sc->nr_reclaimed = 0;
>        for (priority = DEF_PRIORITY; priority >= 0; priority--) {
>                sc->nr_scanned = 0;
>                if (!priority)
>
> --

I have one comment.

Initializing nr_reclaimed directly in do_try_to_free_pages() might
have a side effect, because the existing callers initialize the
scan_control at its declaration, before calling do_try_to_free_pages().

If, in the future, some function declares a scan_control with
nr_reclaimed already initialized and then calls do_try_to_free_pages(),
your patch would forcibly reset nr_reclaimed to zero again.

But I think it is unlikely that anyone would initialize nr_reclaimed
to something other than zero. :(

Still, like the existing callers, initializing the field at its
declaration is better for readability and consistency, I think.

Note that the code below is mangled and word-wrapped.
It is just an example.

---
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9a27c44..18406ee 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1699,6 +1699,7 @@ unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
                .order = order,
                .mem_cgroup = NULL,
                .isolate_pages = isolate_pages_global,
+               .nr_reclaimed = 0,
        };

        return do_try_to_free_pages(zonelist, &sc);
@@ -1719,6 +1720,7 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *mem_cont,
                .order = 0,
                .mem_cgroup = mem_cont,
                .isolate_pages = mem_cgroup_isolate_pages,
+               .nr_reclaimed = 0,
        };
        struct zonelist *zonelist;



-- 
Kind regards,
MinChan Kim

* Re: [RFC] vmscan: initialize sc->nr_reclaimed in do_try_to_free_pages()
  2009-02-10 10:47   ` MinChan Kim
@ 2009-02-10 11:43     ` KOSAKI Motohiro
  -1 siblings, 0 replies; 44+ messages in thread
From: KOSAKI Motohiro @ 2009-02-10 11:43 UTC (permalink / raw)
  To: MinChan Kim
  Cc: kosaki.motohiro, Johannes Weiner, Rik van Riel,
	William Lee Irwin III, Andrew Morton, linux-kernel, linux-mm

> ---
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 9a27c44..18406ee 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1699,6 +1699,7 @@ unsigned long try_to_free_pages(struct zonelist
> *zonelist, int order,
>                 .order = order,
>                 .mem_cgroup = NULL,
>                 .isolate_pages = isolate_pages_global,
> +               .nr_reclaimed = 0,
>         };
> 
>         return do_try_to_free_pages(zonelist, &sc);
> @@ -1719,6 +1720,7 @@ unsigned long
> try_to_free_mem_cgroup_pages(struct mem_cgroup *mem_cont,
>                 .order = 0,
>                 .mem_cgroup = mem_cont,
>                 .isolate_pages = mem_cgroup_isolate_pages,
> +               .nr_reclaimed = 0,
>         };
>         struct zonelist *zonelist;

I think this version is better.

And I think we also need the same fix in __zone_reclaim():


static int __zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
{
        /* Minimum pages needed in order to stay on node */
        const unsigned long nr_pages = 1 << order;
        struct task_struct *p = current;
        struct reclaim_state reclaim_state;
        int priority;
        struct scan_control sc = {
                .may_writepage = !!(zone_reclaim_mode & RECLAIM_WRITE),
                .may_swap = !!(zone_reclaim_mode & RECLAIM_SWAP),
                .swap_cluster_max = max_t(unsigned long, nr_pages,
                                        SWAP_CLUSTER_MAX),
                .gfp_mask = gfp_mask,
                .swappiness = vm_swappiness,
                .isolate_pages = isolate_pages_global,
+               .nr_reclaimed = 0,
        };





* Re: [RFC] vmscan: initialize sc->nr_reclaimed in do_try_to_free_pages()
  2009-02-10 11:43     ` KOSAKI Motohiro
@ 2009-02-10 12:03       ` MinChan Kim
  -1 siblings, 0 replies; 44+ messages in thread
From: MinChan Kim @ 2009-02-10 12:03 UTC (permalink / raw)
  To: KOSAKI Motohiro
  Cc: Johannes Weiner, Rik van Riel, William Lee Irwin III,
	Andrew Morton, linux-kernel, linux-mm

On Tue, Feb 10, 2009 at 8:43 PM, KOSAKI Motohiro
<kosaki.motohiro@jp.fujitsu.com> wrote:
>> ---
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index 9a27c44..18406ee 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -1699,6 +1699,7 @@ unsigned long try_to_free_pages(struct zonelist
>> *zonelist, int order,
>>                 .order = order,
>>                 .mem_cgroup = NULL,
>>                 .isolate_pages = isolate_pages_global,
>> +               .nr_reclaimed = 0,
>>         };
>>
>>         return do_try_to_free_pages(zonelist, &sc);
>> @@ -1719,6 +1720,7 @@ unsigned long
>> try_to_free_mem_cgroup_pages(struct mem_cgroup *mem_cont,
>>                 .order = 0,
>>                 .mem_cgroup = mem_cont,
>>                 .isolate_pages = mem_cgroup_isolate_pages,
>> +               .nr_reclaimed = 0,
>>         };
>>         struct zonelist *zonelist;
>
> I think this code is better.
>
> and, I think we also need to
>
>
> static int __zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
> {
>        /* Minimum pages needed in order to stay on node */
>        const unsigned long nr_pages = 1 << order;
>        struct task_struct *p = current;
>        struct reclaim_state reclaim_state;
>        int priority;
>        struct scan_control sc = {
>                .may_writepage = !!(zone_reclaim_mode & RECLAIM_WRITE),
>                .may_swap = !!(zone_reclaim_mode & RECLAIM_SWAP),
>                .swap_cluster_max = max_t(unsigned long, nr_pages,
>                                        SWAP_CLUSTER_MAX),
>                .gfp_mask = gfp_mask,
>                .swappiness = vm_swappiness,
>                .isolate_pages = isolate_pages_global,
> +               .nr_reclaimed = 0,
>        };
>
>
>
>
>

Hmm.. I missed that.  Thanks.
There is also one in shrink_all_memory().


-- 
Kind regards,
MinChan Kim

* Re: [RFC] vmscan: initialize sc->nr_reclaimed in do_try_to_free_pages()
  2009-02-10 12:03       ` MinChan Kim
@ 2009-02-10 12:06         ` KOSAKI Motohiro
  -1 siblings, 0 replies; 44+ messages in thread
From: KOSAKI Motohiro @ 2009-02-10 12:06 UTC (permalink / raw)
  To: MinChan Kim
  Cc: kosaki.motohiro, Johannes Weiner, Rik van Riel,
	William Lee Irwin III, Andrew Morton, linux-kernel, linux-mm

> > static int __zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
> > {
> >        /* Minimum pages needed in order to stay on node */
> >        const unsigned long nr_pages = 1 << order;
> >        struct task_struct *p = current;
> >        struct reclaim_state reclaim_state;
> >        int priority;
> >        struct scan_control sc = {
> >                .may_writepage = !!(zone_reclaim_mode & RECLAIM_WRITE),
> >                .may_swap = !!(zone_reclaim_mode & RECLAIM_SWAP),
> >                .swap_cluster_max = max_t(unsigned long, nr_pages,
> >                                        SWAP_CLUSTER_MAX),
> >                .gfp_mask = gfp_mask,
> >                .swappiness = vm_swappiness,
> >                .isolate_pages = isolate_pages_global,
> > +               .nr_reclaimed = 0,
> >        };
> 
> Hmm.. I missed that.  Thanks.
> There is one in shrink_all_memory.

No.
__zone_reclaim() isn't part of shrink_all_memory().

Currently, shrink_all_memory() doesn't use the sc.nr_reclaimed member.
(Maybe that's another bug ;)




* Re: [RFC] vmscan: initialize sc->nr_reclaimed in do_try_to_free_pages()
  2009-02-10 12:06         ` KOSAKI Motohiro
@ 2009-02-10 12:31           ` MinChan Kim
  -1 siblings, 0 replies; 44+ messages in thread
From: MinChan Kim @ 2009-02-10 12:31 UTC (permalink / raw)
  To: KOSAKI Motohiro
  Cc: Johannes Weiner, Rik van Riel, William Lee Irwin III,
	Andrew Morton, linux-kernel, linux-mm

On Tue, Feb 10, 2009 at 9:06 PM, KOSAKI Motohiro
<kosaki.motohiro@jp.fujitsu.com> wrote:
>> > static int __zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
>> > {
>> >        /* Minimum pages needed in order to stay on node */
>> >        const unsigned long nr_pages = 1 << order;
>> >        struct task_struct *p = current;
>> >        struct reclaim_state reclaim_state;
>> >        int priority;
>> >        struct scan_control sc = {
>> >                .may_writepage = !!(zone_reclaim_mode & RECLAIM_WRITE),
>> >                .may_swap = !!(zone_reclaim_mode & RECLAIM_SWAP),
>> >                .swap_cluster_max = max_t(unsigned long, nr_pages,
>> >                                        SWAP_CLUSTER_MAX),
>> >                .gfp_mask = gfp_mask,
>> >                .swappiness = vm_swappiness,
>> >                .isolate_pages = isolate_pages_global,
>> > +               .nr_reclaimed = 0,
>> >        };
>>
>> Hmm.. I missed that.  Thanks.
>> There is one in shrink_all_memory.
>
> No.
> __zone_reclaim isn't a part of shrink_all_memory().
> Currently, shrink_all_memory() don't use sc.nr_reclaimed member.
> (maybe, it's another wrong thing ;)

Hmm.. You're right.
As Johannes pointed out,
too much page shrinking can degrade resume performance.

We need to bail out in shrink_all_memory(), too.
Other people, thoughts?

-- 
Kind regards,
MinChan Kim

* Re: [RFC] vmscan: initialize sc->nr_reclaimed in do_try_to_free_pages()
  2009-02-10 12:31           ` MinChan Kim
@ 2009-02-10 12:35             ` KOSAKI Motohiro
  -1 siblings, 0 replies; 44+ messages in thread
From: KOSAKI Motohiro @ 2009-02-10 12:35 UTC (permalink / raw)
  To: MinChan Kim
  Cc: kosaki.motohiro, Johannes Weiner, Rik van Riel,
	William Lee Irwin III, Andrew Morton, linux-kernel, linux-mm

> Hmm.. You're right.
> As Johannes pointed out,
> too many page shrinking can degrade resume performance.
> 
> We need to bale out in shrink_all_memory.
> Other people, thought ?

shrink_all_zones() already has bail-out code ;)



* Re: [RFC] vmscan: initialize sc->nr_reclaimed in do_try_to_free_pages()
  2009-02-10 12:35             ` KOSAKI Motohiro
@ 2009-02-10 12:40               ` MinChan Kim
  -1 siblings, 0 replies; 44+ messages in thread
From: MinChan Kim @ 2009-02-10 12:40 UTC (permalink / raw)
  To: KOSAKI Motohiro
  Cc: Johannes Weiner, Rik van Riel, William Lee Irwin III,
	Andrew Morton, linux-kernel, linux-mm

On Tue, Feb 10, 2009 at 9:35 PM, KOSAKI Motohiro
<kosaki.motohiro@jp.fujitsu.com> wrote:
>> Hmm.. You're right.
>> As Johannes pointed out,
>> too many page shrinking can degrade resume performance.
>>
>> We need to bale out in shrink_all_memory.
>> Other people, thought ?
>
> shrink_all_zones() already have bale-out code ;)
>

Ahh.. I need to sleep. :(
Thanks, Kosaki-san.



-- 
Kind regards,
MinChan Kim

* [PATCH] vmscan: initialize sc->nr_reclaimed properly take2
  2009-02-10 12:40               ` MinChan Kim
@ 2009-02-10 12:58                 ` KOSAKI Motohiro
  -1 siblings, 0 replies; 44+ messages in thread
From: KOSAKI Motohiro @ 2009-02-10 12:58 UTC (permalink / raw)
  To: MinChan Kim, Johannes Weiner
  Cc: kosaki.motohiro, Rik van Riel, William Lee Irwin III,
	Andrew Morton, linux-kernel, linux-mm


How about this?

===
From: Johannes Weiner <hannes@cmpxchg.org>
Subject: [PATCH] vmscan: initialize sc->nr_reclaimed properly

Commit a79311c14eae4bb946a97af25f3e1b17d625985d "vmscan: bail out of
direct reclaim after swap_cluster_max pages" moved the nr_reclaimed
counter into the scan control to accumulate the number of all
reclaimed pages in one direct reclaim invocation.

The commit neglected to adjust try_to_free_pages() and __zone_reclaim(),
which now do not initialize sc.nr_reclaimed, leaving shrink_zone() to
decide whether to bail out of the reclaim cycle based on an
uninitialized value.

Fix it up. 

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: MinChan Kim <minchan.kim@gmail.com>
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
---
 mm/vmscan.c |    3 +++
 1 file changed, 3 insertions(+)

Index: b/mm/vmscan.c
===================================================================
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1665,6 +1665,7 @@ unsigned long try_to_free_pages(struct z
 								gfp_t gfp_mask)
 {
 	struct scan_control sc = {
+		.nr_reclaimed = 0,
 		.gfp_mask = gfp_mask,
 		.may_writepage = !laptop_mode,
 		.swap_cluster_max = SWAP_CLUSTER_MAX,
@@ -1686,6 +1687,7 @@ unsigned long try_to_free_mem_cgroup_pag
 					   unsigned int swappiness)
 {
 	struct scan_control sc = {
+		.nr_reclaimed = 0,
 		.may_writepage = !laptop_mode,
 		.may_swap = 1,
 		.swap_cluster_max = SWAP_CLUSTER_MAX,
@@ -2245,6 +2247,7 @@ static int __zone_reclaim(struct zone *z
 	struct reclaim_state reclaim_state;
 	int priority;
 	struct scan_control sc = {
+		.nr_reclaimed = 0,
 		.may_writepage = !!(zone_reclaim_mode & RECLAIM_WRITE),
 		.may_swap = !!(zone_reclaim_mode & RECLAIM_SWAP),
 		.swap_cluster_max = max_t(unsigned long, nr_pages,



^ permalink raw reply	[flat|nested] 44+ messages in thread

* [PATCH] shrink_all_memory() use sc.nr_reclaimed
  2009-02-10 12:58                 ` KOSAKI Motohiro
@ 2009-02-10 13:00                   ` KOSAKI Motohiro
  -1 siblings, 0 replies; 44+ messages in thread
From: KOSAKI Motohiro @ 2009-02-10 13:00 UTC (permalink / raw)
  To: MinChan Kim, Johannes Weiner, Rik van Riel, Rafael J. Wysocki
  Cc: kosaki.motohiro, William Lee Irwin III, Andrew Morton,
	linux-kernel, linux-mm

Impact: cleanup

Commit a79311c14eae4bb946a97af25f3e1b17d625985d "vmscan: bail out of
direct reclaim after swap_cluster_max pages" moved the nr_reclaimed
counter into the scan control to accumulate the number of all
reclaimed pages in a reclaim invocation.

shrink_all_memory() can use the same mechanism; doing so increases code
consistency.


Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: MinChan Kim <minchan.kim@gmail.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Cc: Rik van Riel <riel@redhat.com>
---
 mm/vmscan.c |   49 ++++++++++++++++++++++++-------------------------
 1 file changed, 24 insertions(+), 25 deletions(-)

Index: b/mm/vmscan.c
===================================================================
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2004,16 +2004,15 @@ unsigned long global_lru_pages(void)
 #ifdef CONFIG_PM
 /*
  * Helper function for shrink_all_memory().  Tries to reclaim 'nr_pages' pages
- * from LRU lists system-wide, for given pass and priority, and returns the
- * number of reclaimed pages
+ * from LRU lists system-wide, for given pass and priority.
  *
  * For pass > 3 we also try to shrink the LRU lists that contain a few pages
  */
-static unsigned long shrink_all_zones(unsigned long nr_pages, int prio,
+static void shrink_all_zones(unsigned long nr_pages, int prio,
 				      int pass, struct scan_control *sc)
 {
 	struct zone *zone;
-	unsigned long nr_to_scan, ret = 0;
+	unsigned long nr_to_scan;
 	enum lru_list l;
 
 	for_each_zone(zone) {
@@ -2038,15 +2037,13 @@ static unsigned long shrink_all_zones(un
 				nr_to_scan = min(nr_pages,
 					zone_page_state(zone,
 							NR_LRU_BASE + l));
-				ret += shrink_list(l, nr_to_scan, zone,
-								sc, prio);
-				if (ret >= nr_pages)
-					return ret;
+				sc->nr_reclaimed += shrink_list(l, nr_to_scan,
+								zone, sc, prio);
+				if (sc->nr_reclaimed >= nr_pages)
+					return;
 			}
 		}
 	}
-
-	return ret;
 }
 
 /*
@@ -2060,10 +2057,10 @@ static unsigned long shrink_all_zones(un
 unsigned long shrink_all_memory(unsigned long nr_pages)
 {
 	unsigned long lru_pages, nr_slab;
-	unsigned long ret = 0;
 	int pass;
 	struct reclaim_state reclaim_state;
 	struct scan_control sc = {
+		.nr_reclaimed = 0,
 		.gfp_mask = GFP_KERNEL,
 		.may_swap = 0,
 		.swap_cluster_max = nr_pages,
@@ -2083,8 +2080,8 @@ unsigned long shrink_all_memory(unsigned
 		if (!reclaim_state.reclaimed_slab)
 			break;
 
-		ret += reclaim_state.reclaimed_slab;
-		if (ret >= nr_pages)
+		sc.nr_reclaimed += reclaim_state.reclaimed_slab;
+		if (sc.nr_reclaimed >= nr_pages)
 			goto out;
 
 		nr_slab -= reclaim_state.reclaimed_slab;
@@ -2108,18 +2105,18 @@ unsigned long shrink_all_memory(unsigned
 		}
 
 		for (prio = DEF_PRIORITY; prio >= 0; prio--) {
-			unsigned long nr_to_scan = nr_pages - ret;
+			unsigned long nr_to_scan = nr_pages - sc.nr_reclaimed;
 
 			sc.nr_scanned = 0;
-			ret += shrink_all_zones(nr_to_scan, prio, pass, &sc);
-			if (ret >= nr_pages)
+			shrink_all_zones(nr_to_scan, prio, pass, &sc);
+			if (sc.nr_reclaimed >= nr_pages)
 				goto out;
 
 			reclaim_state.reclaimed_slab = 0;
 			shrink_slab(sc.nr_scanned, sc.gfp_mask,
 					global_lru_pages());
-			ret += reclaim_state.reclaimed_slab;
-			if (ret >= nr_pages)
+			sc.nr_reclaimed += reclaim_state.reclaimed_slab;
+			if (sc.nr_reclaimed >= nr_pages)
 				goto out;
 
 			if (sc.nr_scanned && prio < DEF_PRIORITY - 2)
@@ -2128,21 +2125,23 @@ unsigned long shrink_all_memory(unsigned
 	}
 
 	/*
-	 * If ret = 0, we could not shrink LRUs, but there may be something
-	 * in slab caches
+	 * If sc.nr_reclaimed = 0, we could not shrink LRUs, but there may be
+	 * something in slab caches
 	 */
-	if (!ret) {
+	if (!sc.nr_reclaimed) {
 		do {
 			reclaim_state.reclaimed_slab = 0;
-			shrink_slab(nr_pages, sc.gfp_mask, global_lru_pages());
-			ret += reclaim_state.reclaimed_slab;
-		} while (ret < nr_pages && reclaim_state.reclaimed_slab > 0);
+			shrink_slab(nr_pages, sc.gfp_mask,
+				    global_lru_pages());
+			sc.nr_reclaimed += reclaim_state.reclaimed_slab;
+		} while (sc.nr_reclaimed < nr_pages &&
+			 reclaim_state.reclaimed_slab > 0);
 	}
 
 out:
 	current->reclaim_state = NULL;
 
-	return ret;
+	return sc.nr_reclaimed;
 }
 #endif
 



^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH] vmscan: initialize sc->nr_reclaimed properly take2
  2009-02-10 12:58                 ` KOSAKI Motohiro
@ 2009-02-10 16:09                   ` Johannes Weiner
  -1 siblings, 0 replies; 44+ messages in thread
From: Johannes Weiner @ 2009-02-10 16:09 UTC (permalink / raw)
  To: KOSAKI Motohiro
  Cc: MinChan Kim, Rik van Riel, William Lee Irwin III, Andrew Morton,
	linux-kernel, linux-mm

On Tue, Feb 10, 2009 at 09:58:04PM +0900, KOSAKI Motohiro wrote:
> 
> How about this?

I agree, this is the better solution.

Thank you, Kosaki-san.

	Hannes

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH] shrink_all_memory() use sc.nr_reclaimed
  2009-02-10 13:00                   ` KOSAKI Motohiro
@ 2009-02-10 16:20                     ` Johannes Weiner
  -1 siblings, 0 replies; 44+ messages in thread
From: Johannes Weiner @ 2009-02-10 16:20 UTC (permalink / raw)
  To: KOSAKI Motohiro
  Cc: MinChan Kim, Rik van Riel, Rafael J. Wysocki,
	William Lee Irwin III, Andrew Morton, linux-kernel, linux-mm

On Tue, Feb 10, 2009 at 10:00:26PM +0900, KOSAKI Motohiro wrote:
> Impact: cleanup
> 
> Commit a79311c14eae4bb946a97af25f3e1b17d625985d "vmscan: bail out of
> direct reclaim after swap_cluster_max pages" moved the nr_reclaimed
> counter into the scan control to accumulate the number of all
> reclaimed pages in a reclaim invocation.
> 
> shrink_all_memory() can use the same mechanism; doing so increases code
> consistency.
> 
> 
> Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
> Cc: MinChan Kim <minchan.kim@gmail.com>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
> Cc: Rik van Riel <riel@redhat.com>
> ---
>  mm/vmscan.c |   49 ++++++++++++++++++++++++-------------------------
>  1 file changed, 24 insertions(+), 25 deletions(-)
> 
> Index: b/mm/vmscan.c
> ===================================================================
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2004,16 +2004,15 @@ unsigned long global_lru_pages(void)
>  #ifdef CONFIG_PM
>  /*
>   * Helper function for shrink_all_memory().  Tries to reclaim 'nr_pages' pages
> - * from LRU lists system-wide, for given pass and priority, and returns the
> - * number of reclaimed pages
> + * from LRU lists system-wide, for given pass and priority.
>   *
>   * For pass > 3 we also try to shrink the LRU lists that contain a few pages
>   */
> -static unsigned long shrink_all_zones(unsigned long nr_pages, int prio,
> +static void shrink_all_zones(unsigned long nr_pages, int prio,
>  				      int pass, struct scan_control *sc)
>  {
>  	struct zone *zone;
> -	unsigned long nr_to_scan, ret = 0;
> +	unsigned long nr_to_scan;
>  	enum lru_list l;

Basing it on swsusp-clean-up-shrink_all_zones.patch probably makes it
easier for Andrew to pick it up.

>  	for_each_zone(zone) {
> @@ -2038,15 +2037,13 @@ static unsigned long shrink_all_zones(un
>  				nr_to_scan = min(nr_pages,
>  					zone_page_state(zone,
>  							NR_LRU_BASE + l));
> -				ret += shrink_list(l, nr_to_scan, zone,
> -								sc, prio);
> -				if (ret >= nr_pages)
> -					return ret;
> +				sc->nr_reclaimed += shrink_list(l, nr_to_scan,
> +								zone, sc, prio);
> +				if (sc->nr_reclaimed >= nr_pages)
> +					return;
>  			}
>  		}
>  	}
> -
> -	return ret;
>  }
>  
>  /*
> @@ -2060,10 +2057,10 @@ static unsigned long shrink_all_zones(un
>  unsigned long shrink_all_memory(unsigned long nr_pages)
>  {
>  	unsigned long lru_pages, nr_slab;
> -	unsigned long ret = 0;
>  	int pass;
>  	struct reclaim_state reclaim_state;
>  	struct scan_control sc = {
> +		.nr_reclaimed = 0,
>  		.gfp_mask = GFP_KERNEL,
>  		.may_swap = 0,
>  		.swap_cluster_max = nr_pages,
> @@ -2083,8 +2080,8 @@ unsigned long shrink_all_memory(unsigned
>  		if (!reclaim_state.reclaimed_slab)
>  			break;
>  
> -		ret += reclaim_state.reclaimed_slab;
> -		if (ret >= nr_pages)
> +		sc.nr_reclaimed += reclaim_state.reclaimed_slab;
> +		if (sc.nr_reclaimed >= nr_pages)
>  			goto out;
>  
>  		nr_slab -= reclaim_state.reclaimed_slab;
> @@ -2108,18 +2105,18 @@ unsigned long shrink_all_memory(unsigned
>  		}
>  
>  		for (prio = DEF_PRIORITY; prio >= 0; prio--) {
> -			unsigned long nr_to_scan = nr_pages - ret;
> +			unsigned long nr_to_scan = nr_pages - sc.nr_reclaimed;
>  
>  			sc.nr_scanned = 0;
> -			ret += shrink_all_zones(nr_to_scan, prio, pass, &sc);
> -			if (ret >= nr_pages)
> +			shrink_all_zones(nr_to_scan, prio, pass, &sc);
> +			if (sc.nr_reclaimed >= nr_pages)
>  				goto out;
>  
>  			reclaim_state.reclaimed_slab = 0;
>  			shrink_slab(sc.nr_scanned, sc.gfp_mask,
>  					global_lru_pages());
> -			ret += reclaim_state.reclaimed_slab;
> -			if (ret >= nr_pages)
> +			sc.nr_reclaimed += reclaim_state.reclaimed_slab;
> +			if (sc.nr_reclaimed >= nr_pages)
>  				goto out;
>  
>  			if (sc.nr_scanned && prio < DEF_PRIORITY - 2)
> @@ -2128,21 +2125,23 @@ unsigned long shrink_all_memory(unsigned
>  	}
>  
>  	/*
> -	 * If ret = 0, we could not shrink LRUs, but there may be something
> -	 * in slab caches
> +	 * If sc.nr_reclaimed = 0, we could not shrink LRUs, but there may be
> +	 * something in slab caches
>  	 */
> -	if (!ret) {
> +	if (!sc.nr_reclaimed) {
>  		do {
>  			reclaim_state.reclaimed_slab = 0;
> -			shrink_slab(nr_pages, sc.gfp_mask, global_lru_pages());
> -			ret += reclaim_state.reclaimed_slab;
> -		} while (ret < nr_pages && reclaim_state.reclaimed_slab > 0);
> +			shrink_slab(nr_pages, sc.gfp_mask,
> +				    global_lru_pages());
> +			sc.nr_reclaimed += reclaim_state.reclaimed_slab;
> +		} while (sc.nr_reclaimed < nr_pages &&
> +			 reclaim_state.reclaimed_slab > 0);

:(

Is this really an improvement?  `ret' is easier to read than
`sc.nr_reclaimed'.

	Hannes

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH] shrink_all_memory() use sc.nr_reclaimed
  2009-02-10 16:20                     ` Johannes Weiner
@ 2009-02-10 20:41                       ` KOSAKI Motohiro
  -1 siblings, 0 replies; 44+ messages in thread
From: KOSAKI Motohiro @ 2009-02-10 20:41 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: MinChan Kim, Rik van Riel, Rafael J. Wysocki,
	William Lee Irwin III, Andrew Morton, linux-kernel, linux-mm

>>  {
>>       struct zone *zone;
>> -     unsigned long nr_to_scan, ret = 0;
>> +     unsigned long nr_to_scan;
>>       enum lru_list l;
>
> Basing it on swsusp-clean-up-shrink_all_zones.patch probably makes it
> easier for Andrew to pick it up.

ok, thanks.

>>                       reclaim_state.reclaimed_slab = 0;
>> -                     shrink_slab(nr_pages, sc.gfp_mask, global_lru_pages());
>> -                     ret += reclaim_state.reclaimed_slab;
>> -             } while (ret < nr_pages && reclaim_state.reclaimed_slab > 0);
>> +                     shrink_slab(nr_pages, sc.gfp_mask,
>> +                                 global_lru_pages());
>> +                     sc.nr_reclaimed += reclaim_state.reclaimed_slab;
>> +             } while (sc.nr_reclaimed < nr_pages &&
>> +                      reclaim_state.reclaimed_slab > 0);
>
> :(
>
> Is this really an improvement?  `ret' is easier to read than
> `sc.nr_reclaimed'.

I know it's a debatable thing,
but I still think code consistency is more important than variable-name
preference.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH] vmscan: initialize sc->nr_reclaimed properly take2
  2009-02-10 12:58                 ` KOSAKI Motohiro
@ 2009-02-10 22:06                   ` Andrew Morton
  -1 siblings, 0 replies; 44+ messages in thread
From: Andrew Morton @ 2009-02-10 22:06 UTC (permalink / raw)
  To: KOSAKI Motohiro
  Cc: minchan.kim, hannes, kosaki.motohiro, riel, wli, linux-kernel, linux-mm

On Tue, 10 Feb 2009 21:58:04 +0900 (JST)
KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> wrote:

> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1665,6 +1665,7 @@ unsigned long try_to_free_pages(struct z
>  								gfp_t gfp_mask)
>  {
>  	struct scan_control sc = {
> +		.nr_reclaimed = 0,
>  		.gfp_mask = gfp_mask,
>  		.may_writepage = !laptop_mode,
>  		.swap_cluster_max = SWAP_CLUSTER_MAX,
> @@ -1686,6 +1687,7 @@ unsigned long try_to_free_mem_cgroup_pag
>  					   unsigned int swappiness)
>  {
>  	struct scan_control sc = {
> +		.nr_reclaimed = 0,
>  		.may_writepage = !laptop_mode,
>  		.may_swap = 1,
>  		.swap_cluster_max = SWAP_CLUSTER_MAX,
> @@ -2245,6 +2247,7 @@ static int __zone_reclaim(struct zone *z
>  	struct reclaim_state reclaim_state;
>  	int priority;
>  	struct scan_control sc = {
> +		.nr_reclaimed = 0,
>  		.may_writepage = !!(zone_reclaim_mode & RECLAIM_WRITE),
>  		.may_swap = !!(zone_reclaim_mode & RECLAIM_SWAP),
>  		.swap_cluster_max = max_t(unsigned long, nr_pages,

Confused.  The compiler already initialises any unmentioned fields to zero,
so this patch has no effect.


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH] vmscan: initialize sc->nr_reclaimed properly take2
  2009-02-10 22:06                   ` Andrew Morton
@ 2009-02-10 22:15                     ` Johannes Weiner
  -1 siblings, 0 replies; 44+ messages in thread
From: Johannes Weiner @ 2009-02-10 22:15 UTC (permalink / raw)
  To: Andrew Morton
  Cc: KOSAKI Motohiro, minchan.kim, riel, wli, linux-kernel, linux-mm

On Tue, Feb 10, 2009 at 02:06:37PM -0800, Andrew Morton wrote:
> On Tue, 10 Feb 2009 21:58:04 +0900 (JST)
> KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> wrote:
> 
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -1665,6 +1665,7 @@ unsigned long try_to_free_pages(struct z
> >  								gfp_t gfp_mask)
> >  {
> >  	struct scan_control sc = {
> > +		.nr_reclaimed = 0,
> >  		.gfp_mask = gfp_mask,
> >  		.may_writepage = !laptop_mode,
> >  		.swap_cluster_max = SWAP_CLUSTER_MAX,
> > @@ -1686,6 +1687,7 @@ unsigned long try_to_free_mem_cgroup_pag
> >  					   unsigned int swappiness)
> >  {
> >  	struct scan_control sc = {
> > +		.nr_reclaimed = 0,
> >  		.may_writepage = !laptop_mode,
> >  		.may_swap = 1,
> >  		.swap_cluster_max = SWAP_CLUSTER_MAX,
> > @@ -2245,6 +2247,7 @@ static int __zone_reclaim(struct zone *z
> >  	struct reclaim_state reclaim_state;
> >  	int priority;
> >  	struct scan_control sc = {
> > +		.nr_reclaimed = 0,
> >  		.may_writepage = !!(zone_reclaim_mode & RECLAIM_WRITE),
> >  		.may_swap = !!(zone_reclaim_mode & RECLAIM_SWAP),
> >  		.swap_cluster_max = max_t(unsigned long, nr_pages,
> 
> Confused.  The compiler already initialises any unmentioned fields to zero,
> so this patch has no effect.

Oh, nice, I was actually testing the wrong thing!

	struct foo foo;

wouldn't do that.  But

	struct foo foo = { .a = 5 };

actually would initialize foo.b = 0.

Sorry.  Please ignore this patch and the other one regarding the
explicit initialization of sc.order.  :(

	Hannes

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [RFC] vmscan: initialize sc->nr_reclaimed in do_try_to_free_pages()
  2009-02-09 22:24 ` Johannes Weiner
@ 2009-02-10 22:16   ` Andrew Morton
  -1 siblings, 0 replies; 44+ messages in thread
From: Andrew Morton @ 2009-02-10 22:16 UTC (permalink / raw)
  To: Johannes Weiner; +Cc: riel, wli, kosaki.motohiro, linux-kernel, linux-mm

On Mon, 9 Feb 2009 23:24:16 +0100
Johannes Weiner <hannes@cmpxchg.org> wrote:

> The commit missed to actually adjust do_try_to_free_pages() which now
> does not initialize sc.nr_reclaimed and makes shrink_zone() make
> assumptions on whether to bail out of the reclaim cycle based on an
> uninitialized value.

Both callers of do_try_to_free_pages() _do_ initialise
scan_control.nr_reclaimed.  The unmentioned fields in a struct
initialiser are reliably zeroed.

We often rely upon this, and the only reason for mentioning such a
field is for documentation reasons, or if you want to add a comment at
the initialisation site.



^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH] shrink_all_memory() use sc.nr_reclaimed
  2009-02-10 20:41                       ` KOSAKI Motohiro
@ 2009-02-11  0:37                         ` MinChan Kim
  -1 siblings, 0 replies; 44+ messages in thread
From: MinChan Kim @ 2009-02-11  0:37 UTC (permalink / raw)
  To: KOSAKI Motohiro
  Cc: Johannes Weiner, Rik van Riel, Rafael J. Wysocki,
	William Lee Irwin III, Andrew Morton, linux-kernel, linux-mm

On Wed, Feb 11, 2009 at 05:41:21AM +0900, KOSAKI Motohiro wrote:
> >>  {
> >>       struct zone *zone;
> >> -     unsigned long nr_to_scan, ret = 0;
> >> +     unsigned long nr_to_scan;
> >>       enum lru_list l;
> >
> > Basing it on swsusp-clean-up-shrink_all_zones.patch probably makes it
> > easier for Andrew to pick it up.
> 
> ok, thanks.
> 
> >>                       reclaim_state.reclaimed_slab = 0;
> >> -                     shrink_slab(nr_pages, sc.gfp_mask, global_lru_pages());
> >> -                     ret += reclaim_state.reclaimed_slab;
> >> -             } while (ret < nr_pages && reclaim_state.reclaimed_slab > 0);
> >> +                     shrink_slab(nr_pages, sc.gfp_mask,
> >> +                                 global_lru_pages());
> >> +                     sc.nr_reclaimed += reclaim_state.reclaimed_slab;
> >> +             } while (sc.nr_reclaimed < nr_pages &&
> >> +                      reclaim_state.reclaimed_slab > 0);
> >
> > :(
> >
> > Is this really an improvement?  `ret' is better to read than
> > `sc.nr_reclaimed'.
> 
> I know it's a debatable thing, but I still think code consistency is
> more important than variable name preference.

How about this?

I followed the do_try_to_free_pages() coding style, which uses both
'sc->nr_reclaimed' and 'ret'.  That keeps both code consistency and
readability.

So I think it would be better.
If you don't mind, I will resend it with your sign-off.

Signed-off-by: MinChan Kim <minchan.kim@gmail.com>
---
 mm/vmscan.c |   43 +++++++++++++++++++++++++------------------
 1 files changed, 25 insertions(+), 18 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9a27c44..989062a 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2048,16 +2048,16 @@ unsigned long global_lru_pages(void)
 #ifdef CONFIG_PM
 /*
  * Helper function for shrink_all_memory().  Tries to reclaim 'nr_pages' pages
- * from LRU lists system-wide, for given pass and priority, and returns the
- * number of reclaimed pages
+ * from LRU lists system-wide, for given pass and priority.
  *
  * For pass > 3 we also try to shrink the LRU lists that contain a few pages
  */
-static unsigned long shrink_all_zones(unsigned long nr_pages, int prio,
+static void shrink_all_zones(unsigned long nr_pages, int prio,
 				      int pass, struct scan_control *sc)
 {
 	struct zone *zone;
 	unsigned long nr_to_scan, ret = 0;
+	unsigned long nr_reclaimed = sc->nr_reclaimed;
 	enum lru_list l;
 
 	for_each_zone(zone) {
@@ -2082,15 +2082,15 @@ static unsigned long shrink_all_zones(unsigned long nr_pages, int prio,
 				nr_to_scan = min(nr_pages,
 					zone_page_state(zone,
 							NR_LRU_BASE + l));
-				ret += shrink_list(l, nr_to_scan, zone,
+				nr_reclaimed += shrink_list(l, nr_to_scan, zone,
 								sc, prio);
-				if (ret >= nr_pages)
-					return ret;
-			}
+				if (nr_reclaimed >= nr_pages)
+					break;
+			}
 		}
 	}
 
-	return ret;
+	sc->nr_reclaimed = nr_reclaimed;
 }
 
 /*
@@ -2127,9 +2126,11 @@ unsigned long shrink_all_memory(unsigned long nr_pages)
 		if (!reclaim_state.reclaimed_slab)
 			break;
 
-		ret += reclaim_state.reclaimed_slab;
-		if (ret >= nr_pages)
+		sc.nr_reclaimed += reclaim_state.reclaimed_slab;
+		if (sc.nr_reclaimed >= nr_pages) {
+			ret = sc.nr_reclaimed;
 			goto out;
+		}
 
 		nr_slab -= reclaim_state.reclaimed_slab;
 	}
@@ -2152,19 +2153,23 @@ unsigned long shrink_all_memory(unsigned long nr_pages)
 		}
 
 		for (prio = DEF_PRIORITY; prio >= 0; prio--) {
-			unsigned long nr_to_scan = nr_pages - ret;
+			unsigned long nr_to_scan = nr_pages - sc.nr_reclaimed;
 
 			sc.nr_scanned = 0;
-			ret += shrink_all_zones(nr_to_scan, prio, pass, &sc);
-			if (ret >= nr_pages)
+			shrink_all_zones(nr_to_scan, prio, pass, &sc);
+			if (sc.nr_reclaimed >= nr_pages) {
+				ret = sc.nr_reclaimed;
 				goto out;
+			}
 
 			reclaim_state.reclaimed_slab = 0;
 			shrink_slab(sc.nr_scanned, sc.gfp_mask,
 					global_lru_pages());
-			ret += reclaim_state.reclaimed_slab;
-			if (ret >= nr_pages)
+			sc.nr_reclaimed += reclaim_state.reclaimed_slab;
+			if (sc.nr_reclaimed >= nr_pages) {
+				ret = sc.nr_reclaimed;
 				goto out;
+			}
 
 			if (sc.nr_scanned && prio < DEF_PRIORITY - 2)
 				congestion_wait(WRITE, HZ / 10);
@@ -2175,14 +2180,16 @@ unsigned long shrink_all_memory(unsigned long nr_pages)
 	 * If ret = 0, we could not shrink LRUs, but there may be something
 	 * in slab caches
 	 */
-	if (!ret) {
+	if (!sc.nr_reclaimed) {
 		do {
 			reclaim_state.reclaimed_slab = 0;
 			shrink_slab(nr_pages, sc.gfp_mask, global_lru_pages());
-			ret += reclaim_state.reclaimed_slab;
-		} while (ret < nr_pages && reclaim_state.reclaimed_slab > 0);
+			sc.nr_reclaimed += reclaim_state.reclaimed_slab;
+		} while (sc.nr_reclaimed < nr_pages && reclaim_state.reclaimed_slab > 0);
 	}
 
+	ret = sc.nr_reclaimed;
+
 out:
 	current->reclaim_state = NULL;
 
-- 
1.5.4.3

-- 

Kind Regards
MinChan Kim


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* Re: [PATCH] vmscan: initialize sc->nr_reclaimed properly take2
  2009-02-10 22:06                   ` Andrew Morton
@ 2009-02-11 10:52                     ` KOSAKI Motohiro
  -1 siblings, 0 replies; 44+ messages in thread
From: KOSAKI Motohiro @ 2009-02-11 10:52 UTC (permalink / raw)
  To: Andrew Morton
  Cc: kosaki.motohiro, minchan.kim, hannes, riel, wli, linux-kernel, linux-mm

> > @@ -2245,6 +2247,7 @@ static int __zone_reclaim(struct zone *z
> >  	struct reclaim_state reclaim_state;
> >  	int priority;
> >  	struct scan_control sc = {
> > +		.nr_reclaimed = 0,
> >  		.may_writepage = !!(zone_reclaim_mode & RECLAIM_WRITE),
> >  		.may_swap = !!(zone_reclaim_mode & RECLAIM_SWAP),
> >  		.swap_cluster_max = max_t(unsigned long, nr_pages,
> 
> Confused.  The compiler already initialises any unmentioned fields to zero,
> so this patch has no effect.

Maybe I was too sleepy ;-/





^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH] shrink_all_memory() use sc.nr_reclaimed
  2009-02-11  0:37                         ` MinChan Kim
@ 2009-02-11 11:50                           ` KOSAKI Motohiro
  -1 siblings, 0 replies; 44+ messages in thread
From: KOSAKI Motohiro @ 2009-02-11 11:50 UTC (permalink / raw)
  To: MinChan Kim
  Cc: kosaki.motohiro, Johannes Weiner, Rik van Riel,
	Rafael J. Wysocki, William Lee Irwin III, Andrew Morton,
	linux-kernel, linux-mm

> On Wed, Feb 11, 2009 at 05:41:21AM +0900, KOSAKI Motohiro wrote:
> > >>  {
> > >>       struct zone *zone;
> > >> -     unsigned long nr_to_scan, ret = 0;
> > >> +     unsigned long nr_to_scan;
> > >>       enum lru_list l;
> > >
> > > Basing it on swsusp-clean-up-shrink_all_zones.patch probably makes it
> > > easier for Andrew to pick it up.
> > 
> > ok, thanks.
> > 
> > >>                       reclaim_state.reclaimed_slab = 0;
> > >> -                     shrink_slab(nr_pages, sc.gfp_mask, global_lru_pages());
> > >> -                     ret += reclaim_state.reclaimed_slab;
> > >> -             } while (ret < nr_pages && reclaim_state.reclaimed_slab > 0);
> > >> +                     shrink_slab(nr_pages, sc.gfp_mask,
> > >> +                                 global_lru_pages());
> > >> +                     sc.nr_reclaimed += reclaim_state.reclaimed_slab;
> > >> +             } while (sc.nr_reclaimed < nr_pages &&
> > >> +                      reclaim_state.reclaimed_slab > 0);
> > >
> > > :(
> > >
> > > Is this really an improvement?  `ret' is better to read than
> > > `sc.nr_reclaimed'.
> > 
> > I know it's a debatable thing, but I still think code consistency is
> > more important than variable name preference.
> 
> How about this ?
> 
> I followed do_try_to_free_pages coding style.
> It use both 'sc->nr_reclaimed' and 'ret'.
> It can support code consistency and readability. 
> 
> So, I think it would be better.  
> If you don't mind, I will resend with your sign-off.

looks good. thanks.


> -static unsigned long shrink_all_zones(unsigned long nr_pages, int prio,
> +static void shrink_all_zones(unsigned long nr_pages, int prio,
>  				      int pass, struct scan_control *sc)
>  {
>  	struct zone *zone;
>  	unsigned long nr_to_scan, ret = 0;
> +	unsigned long nr_reclaimed = sc->nr_reclaimed;
>  	enum lru_list l;

And please update the changelog: this patch has a behavior change.

The old bail-out check wasn't done properly, because shrink_all_memory()
makes five passes but shrink_all_zones() initialized ret = 0 every time.

So in passes 1-4, the if (ret >= nr_pages) test couldn't judge whether
enough pages had been reclaimed.





^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH] shrink_all_memory() use sc.nr_reclaimed
  2009-02-11 11:50                           ` KOSAKI Motohiro
@ 2009-02-11 12:43                             ` MinChan Kim
  -1 siblings, 0 replies; 44+ messages in thread
From: MinChan Kim @ 2009-02-11 12:43 UTC (permalink / raw)
  To: KOSAKI Motohiro
  Cc: MinChan Kim, Johannes Weiner, Rik van Riel, Rafael J. Wysocki,
	William Lee Irwin III, Andrew Morton, linux-kernel, linux-mm

On Wed, 11 Feb 2009 20:50:37 +0900 (JST)
KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> wrote:

> > On Wed, Feb 11, 2009 at 05:41:21AM +0900, KOSAKI Motohiro wrote:
> > > >>  {
> > > >>       struct zone *zone;
> > > >> -     unsigned long nr_to_scan, ret = 0;
> > > >> +     unsigned long nr_to_scan;
> > > >>       enum lru_list l;
> > > >
> > > > Basing it on swsusp-clean-up-shrink_all_zones.patch probably makes it
> > > > easier for Andrew to pick it up.
> > > 
> > > ok, thanks.
> > > 
> > > >>                       reclaim_state.reclaimed_slab = 0;
> > > >> -                     shrink_slab(nr_pages, sc.gfp_mask, global_lru_pages());
> > > >> -                     ret += reclaim_state.reclaimed_slab;
> > > >> -             } while (ret < nr_pages && reclaim_state.reclaimed_slab > 0);
> > > >> +                     shrink_slab(nr_pages, sc.gfp_mask,
> > > >> +                                 global_lru_pages());
> > > >> +                     sc.nr_reclaimed += reclaim_state.reclaimed_slab;
> > > >> +             } while (sc.nr_reclaimed < nr_pages &&
> > > >> +                      reclaim_state.reclaimed_slab > 0);
> > > >
> > > > :(
> > > >
> > > > Is this really an improvement?  `ret' is better to read than
> > > > `sc.nr_reclaimed'.
> > > 
> > > I know it's a debatable thing, but I still think code consistency is
> > > more important than variable name preference.
> > 
> > How about this?
> > 
> > I followed the do_try_to_free_pages() coding style, which uses both
> > 'sc->nr_reclaimed' and 'ret'.  That keeps both code consistency and
> > readability.
> > 
> > So I think it would be better.
> > If you don't mind, I will resend it with your sign-off.
> 
> looks good. thanks.
> 
> 
> > -static unsigned long shrink_all_zones(unsigned long nr_pages, int prio,
> > +static void shrink_all_zones(unsigned long nr_pages, int prio,
> >  				      int pass, struct scan_control *sc)
> >  {
> >  	struct zone *zone;
> >  	unsigned long nr_to_scan, ret = 0;
> > +	unsigned long nr_reclaimed = sc->nr_reclaimed;
> >  	enum lru_list l;
> 
> And please update the changelog: this patch has a behavior change.
> 
> The old bail-out check wasn't done properly, because shrink_all_memory()
> makes five passes but shrink_all_zones() initialized ret = 0 every time.
> 
> So in passes 1-4, the if (ret >= nr_pages) test couldn't judge whether
> enough pages had been reclaimed.
> 

Hmm, I think the old bail-out code is right.
In shrink_all_memory(), nr_to_scan gets smaller as reclaim progresses
through the passes.  nr_to_scan is the number of pages the caller still
wants shrunk, and shrink_all_zones() has to make a best effort to
reclaim nr_to_scan pages.  So if you use the accumulated reclaim count,
it can break the bail-out in shrink_all_zones().  I mean here:

'
              NR_LRU_BASE + l));
        ret += shrink_list(l, nr_to_scan, zone,
                sc, prio);
        if (ret >= nr_pages)
          return ret;
      }
'

I have to redo the patch so that it keeps the old bail-out behavior.

-- 
Kind Regards
MinChan Kim

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH] shrink_all_memory() use sc.nr_reclaimed
@ 2009-02-11 12:43                             ` MinChan Kim
  0 siblings, 0 replies; 44+ messages in thread
From: MinChan Kim @ 2009-02-11 12:43 UTC (permalink / raw)
  To: KOSAKI Motohiro
  Cc: MinChan Kim, Johannes Weiner, Rik van Riel, Rafael J. Wysocki,
	William Lee Irwin III, Andrew Morton, linux-kernel, linux-mm

On Wed, 11 Feb 2009 20:50:37 +0900 (JST)
KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> wrote:

> > On Wed, Feb 11, 2009 at 05:41:21AM +0900, KOSAKI Motohiro wrote:
> > > >>  {
> > > >>       struct zone *zone;
> > > >> -     unsigned long nr_to_scan, ret = 0;
> > > >> +     unsigned long nr_to_scan;
> > > >>       enum lru_list l;
> > > >
> > > > Basing it on swsusp-clean-up-shrink_all_zones.patch probably makes it
> > > > easier for Andrew to pick it up.
> > > 
> > > ok, thanks.
> > > 
> > > >>                       reclaim_state.reclaimed_slab = 0;
> > > >> -                     shrink_slab(nr_pages, sc.gfp_mask, global_lru_pages());
> > > >> -                     ret += reclaim_state.reclaimed_slab;
> > > >> -             } while (ret < nr_pages && reclaim_state.reclaimed_slab > 0);
> > > >> +                     shrink_slab(nr_pages, sc.gfp_mask,
> > > >> +                                 global_lru_pages());
> > > >> +                     sc.nr_reclaimed += reclaim_state.reclaimed_slab;
> > > >> +             } while (sc.nr_reclaimed < nr_pages &&
> > > >> +                      reclaim_state.reclaimed_slab > 0);
> > > >
> > > > :(
> > > >
> > > > Is this really an improvement?  `ret' is better to read than
> > > > `sc.nr_reclaimed'.
> > > 
> > > I know it's debetable thing.
> > > but I still think code consistency is important than variable name preference.
> > 
> > How about this ?
> > 
> > I followed do_try_to_free_pages coding style.
> > It use both 'sc->nr_reclaimed' and 'ret'.
> > It can support code consistency and readability. 
> > 
> > So, I think it would be better.  
> > If you don't mind, I will resend with your sign-off.
> 
> looks good. thanks.
> 
> 
> > -static unsigned long shrink_all_zones(unsigned long nr_pages, int prio,
> > +static void shrink_all_zones(unsigned long nr_pages, int prio,
> >  				      int pass, struct scan_control *sc)
> >  {
> >  	struct zone *zone;
> >  	unsigned long nr_to_scan, ret = 0;
> > +	unsigned long nr_reclaimed = sc->nr_reclaimed;
> >  	enum lru_list l;
> 
> and, please changelog change.
> this patch have behavior change.
> 
> old bale-out checking didn't checked properly.
> it's because shrink_all_memory() has five pass. but shrink_all_zones()
> initialize ret = 0 every time.
> 
> then, at pass 1-4, if(ret >= nr_pages) don't judge reclaimed enough page or not.
> 

Hmm, I think the old bail-out code is right. 
In shrink_all_memory, as reclaiming progresses through the passes, 
nr_to_scan gets smaller. nr_to_scan is the number of pages the
user wants shrunk. 
shrink_all_zones has to reclaim nr_to_scan pages on a best-effort basis.
So, if you use the accumulated reclaim count, it can break the bail-out in shrink_all_zones.
I mean this part:

'
              NR_LRU_BASE + l)); 
        ret += shrink_list(l, nr_to_scan, zone,
                sc, prio);
        if (ret >= nr_pages)
          return ret; 
      }    
'

I have to make the patch again so that it keeps the old bail-out behavior. 
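The difference can be sketched outside the kernel. This is a hypothetical stand-alone simulation, not mm/vmscan.c code; the function names and parameters are made up. Each "call" may reclaim up to `avail` pages and wants to stop once `nr_pages` are reclaimed. With a per-call counter, every pass does its own best effort; with an accumulated counter carried across passes, a later pass can bail out immediately without reclaiming anything:

```c
#include <assert.h>

/* Per-call counter (old shrink_all_zones() behavior): the counter
 * restarts at 0, so each call independently tries to reach nr_pages. */
unsigned long shrink_pass_reset(unsigned long nr_pages, unsigned long avail)
{
	unsigned long ret = 0;

	while (avail-- && ret < nr_pages)
		ret++;			/* one page reclaimed */
	return ret;
}

/* Accumulated counter (carried in *acc across calls): once *acc has
 * reached nr_pages in an earlier pass, this call reclaims nothing --
 * which is what breaks the per-call bail-out. */
unsigned long shrink_pass_accumulate(unsigned long nr_pages,
				     unsigned long avail, unsigned long *acc)
{
	unsigned long before = *acc;

	while (avail-- && *acc < nr_pages)
		(*acc)++;
	return *acc - before;		/* reclaimed by this call only */
}
```

With nr_pages = 5 and 10 pages available, two per-call passes reclaim 5 pages each, while the accumulated variant reclaims 5 on the first pass and 0 on the second.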

-- 
Kind Regards
MinChan Kim

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH] shrink_all_memory() use sc.nr_reclaimed
  2009-02-11 12:43                             ` MinChan Kim
@ 2009-02-11 12:58                               ` KOSAKI Motohiro
  -1 siblings, 0 replies; 44+ messages in thread
From: KOSAKI Motohiro @ 2009-02-11 12:58 UTC (permalink / raw)
  To: MinChan Kim
  Cc: kosaki.motohiro, Johannes Weiner, Rik van Riel,
	Rafael J. Wysocki, William Lee Irwin III, Andrew Morton,
	linux-kernel, linux-mm

> Hmm, I think the old bail-out code is right. 
> In shrink_all_memory, as reclaiming progresses through the passes, 
> nr_to_scan gets smaller. nr_to_scan is the number of pages the
> user wants shrunk. 
> shrink_all_zones has to reclaim nr_to_scan pages on a best-effort basis.
> So, if you use the accumulated reclaim count, it can break the bail-out in shrink_all_zones.

you are right. thanks.
shrink_all_zones()'s nr_pages != shrink_all_memory()'s nr_pages.
(I don't like this misleading variable naming scheme ;)


> I mean this part:
> 
> '
>               NR_LRU_BASE + l)); 
>         ret += shrink_list(l, nr_to_scan, zone,
>                 sc, prio);
>         if (ret >= nr_pages)
>           return ret; 
>       }    
> '
> 
> I have to make the patch again so that it keeps the old bail-out behavior. 

Sure.
thanks.




^ permalink raw reply	[flat|nested] 44+ messages in thread


* Re: [PATCH] shrink_all_memory() use sc.nr_reclaimed
  2009-02-11 12:58                               ` KOSAKI Motohiro
@ 2009-02-11 13:03                                 ` KOSAKI Motohiro
  -1 siblings, 0 replies; 44+ messages in thread
From: KOSAKI Motohiro @ 2009-02-11 13:03 UTC (permalink / raw)
  To: MinChan Kim
  Cc: kosaki.motohiro, Johannes Weiner, Rik van Riel,
	Rafael J. Wysocki, William Lee Irwin III, Andrew Morton,
	linux-kernel, linux-mm

> > I mean here.
> > 
> > '
> >               NR_LRU_BASE + l)); 
> >         ret += shrink_list(l, nr_to_scan, zone,
> >                 sc, prio);
> >         if (ret >= nr_pages)
> >           return ret; 
> >       }    
> > '
> > 
> > I have to make patch again so that it will keep on old bale-out behavior. 
> 
> Sure.
> thanks.

This is what I mean:

@@ -2082,15 +2082,14 @@ static unsigned long shrink_all_zones(unsigned long nr_pages, int prio,
 				nr_to_scan = min(nr_pages,
 					zone_page_state(zone,
 							NR_LRU_BASE + l));
 				nr_reclaimed += shrink_list(l, nr_to_scan, zone,
 								sc, prio);
-				if (nr_reclaimed >= nr_pages)
+				if (nr_reclaimed > sc->swap_cluster_max)
 					break;
 		}
 	}


this keeps the old behavior and is consistent with shrink_zone(), I think.
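That per-call bail-out can be sketched stand-alone (a hypothetical simulation, not the kernel code; `shrink_list()` is replaced by a fixed per-list stand-in and all parameters are made up): count only what this call reclaimed, and break out of the LRU-list loop once it exceeds swap_cluster_max, mirroring shrink_zone().

```c
#include <assert.h>

/* Simulate one shrink_all_zones()-style call over `lists` LRU lists,
 * each "reclaiming" per_list pages.  nr_reclaimed is local to the
 * call, so the bail-out is per call, not per priority loop. */
unsigned long shrink_call(unsigned long swap_cluster_max,
			  unsigned long lists, unsigned long per_list)
{
	unsigned long nr_reclaimed = 0;

	for (unsigned long l = 0; l < lists; l++) {
		nr_reclaimed += per_list;	/* shrink_list() stand-in */
		if (nr_reclaimed > swap_cluster_max)
			break;			/* per-call bail-out */
	}
	return nr_reclaimed;
}
```

For example, with swap_cluster_max = 32 and 16 pages reclaimed per list, the loop stops after the third list rather than scanning all of them.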



^ permalink raw reply	[flat|nested] 44+ messages in thread


end of thread, other threads:[~2009-02-11 13:03 UTC | newest]

Thread overview: 44+ messages
-- links below jump to the message on this page --
2009-02-09 22:24 [RFC] vmscan: initialize sc->nr_reclaimed in do_try_to_free_pages() Johannes Weiner
2009-02-09 22:24 ` Johannes Weiner
2009-02-10 10:47 ` MinChan Kim
2009-02-10 10:47   ` MinChan Kim
2009-02-10 11:43   ` KOSAKI Motohiro
2009-02-10 11:43     ` KOSAKI Motohiro
2009-02-10 12:03     ` MinChan Kim
2009-02-10 12:03       ` MinChan Kim
2009-02-10 12:06       ` KOSAKI Motohiro
2009-02-10 12:06         ` KOSAKI Motohiro
2009-02-10 12:31         ` MinChan Kim
2009-02-10 12:31           ` MinChan Kim
2009-02-10 12:35           ` KOSAKI Motohiro
2009-02-10 12:35             ` KOSAKI Motohiro
2009-02-10 12:40             ` MinChan Kim
2009-02-10 12:40               ` MinChan Kim
2009-02-10 12:58               ` [PATCH] vmscan: initialize sc->nr_reclaimed properly take2 KOSAKI Motohiro
2009-02-10 12:58                 ` KOSAKI Motohiro
2009-02-10 13:00                 ` [PATCH] shrink_all_memory() use sc.nr_reclaimed KOSAKI Motohiro
2009-02-10 13:00                   ` KOSAKI Motohiro
2009-02-10 16:20                   ` Johannes Weiner
2009-02-10 16:20                     ` Johannes Weiner
2009-02-10 20:41                     ` KOSAKI Motohiro
2009-02-10 20:41                       ` KOSAKI Motohiro
2009-02-11  0:37                       ` MinChan Kim
2009-02-11  0:37                         ` MinChan Kim
2009-02-11 11:50                         ` KOSAKI Motohiro
2009-02-11 11:50                           ` KOSAKI Motohiro
2009-02-11 12:43                           ` MinChan Kim
2009-02-11 12:43                             ` MinChan Kim
2009-02-11 12:58                             ` KOSAKI Motohiro
2009-02-11 12:58                               ` KOSAKI Motohiro
2009-02-11 13:03                               ` KOSAKI Motohiro
2009-02-11 13:03                                 ` KOSAKI Motohiro
2009-02-10 16:09                 ` [PATCH] vmscan: initialize sc->nr_reclaimed properly take2 Johannes Weiner
2009-02-10 16:09                   ` Johannes Weiner
2009-02-10 22:06                 ` Andrew Morton
2009-02-10 22:06                   ` Andrew Morton
2009-02-10 22:15                   ` Johannes Weiner
2009-02-10 22:15                     ` Johannes Weiner
2009-02-11 10:52                   ` KOSAKI Motohiro
2009-02-11 10:52                     ` KOSAKI Motohiro
2009-02-10 22:16 ` [RFC] vmscan: initialize sc->nr_reclaimed in do_try_to_free_pages() Andrew Morton
2009-02-10 22:16   ` Andrew Morton
