* [PATCH 1/2] vmscan: shrink_slab() requires number of lru_pages, not page order
@ 2010-06-25 11:21 ` KOSAKI Motohiro
  0 siblings, 0 replies; 18+ messages in thread
From: KOSAKI Motohiro @ 2010-06-25 11:21 UTC (permalink / raw)
  To: LKML, linux-mm, Andrew Morton, Christoph Lameter, Mel Gorman,
	Rik van Riel, Minchan Kim, Johannes Weiner
  Cc: kosaki.motohiro

Fix a simple argument error. Usually 'order' is a much smaller value than
lru_pages, so it can cause unnecessary icache dropping.

Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
---
 mm/vmscan.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 0ebfe12..c927948 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2611,6 +2611,7 @@ static int __zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
 
 	slab_reclaimable = zone_page_state(zone, NR_SLAB_RECLAIMABLE);
 	if (slab_reclaimable > zone->min_slab_pages) {
+		unsigned long lru_pages = zone_reclaimable_pages(zone);
 		/*
 		 * shrink_slab() does not currently allow us to determine how
 		 * many pages were freed in this zone. So we take the current
@@ -2621,7 +2622,7 @@ static int __zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
 		 * Note that shrink_slab will free memory on all zones and may
 		 * take a long time.
 		 */
-		while (shrink_slab(sc.nr_scanned, gfp_mask, order) &&
+		while (shrink_slab(sc.nr_scanned, gfp_mask, lru_pages) &&
 			zone_page_state(zone, NR_SLAB_RECLAIMABLE) >
 				slab_reclaimable - nr_pages)
 			;
-- 
1.6.5.2






* [PATCH 2/2] vmscan: avoid unsafe unsigned subtraction
  2010-06-25 11:21 ` KOSAKI Motohiro
@ 2010-06-25 11:23   ` KOSAKI Motohiro
  -1 siblings, 0 replies; 18+ messages in thread
From: KOSAKI Motohiro @ 2010-06-25 11:23 UTC (permalink / raw)
  To: LKML, linux-mm, Andrew Morton, Christoph Lameter, Mel Gorman,
	Rik van Riel, Minchan Kim, Johannes Weiner
  Cc: kosaki.motohiro

'slab_reclaimable' and 'nr_pages' are unsigned, so the subtraction is
unsafe.

Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
---
 mm/vmscan.c |   18 ++++++++++++------
 1 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index c927948..b1a90f8 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2612,6 +2612,8 @@ static int __zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
 	slab_reclaimable = zone_page_state(zone, NR_SLAB_RECLAIMABLE);
 	if (slab_reclaimable > zone->min_slab_pages) {
 		unsigned long lru_pages = zone_reclaimable_pages(zone);
+		unsigned long srec_new;
+
 		/*
 		 * shrink_slab() does not currently allow us to determine how
 		 * many pages were freed in this zone. So we take the current
@@ -2622,17 +2624,21 @@ static int __zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
 		 * Note that shrink_slab will free memory on all zones and may
 		 * take a long time.
 		 */
-		while (shrink_slab(sc.nr_scanned, gfp_mask, lru_pages) &&
-			zone_page_state(zone, NR_SLAB_RECLAIMABLE) >
-				slab_reclaimable - nr_pages)
-			;
+		for (;;) {
+			if (!shrink_slab(sc.nr_scanned, gfp_mask, lru_pages))
+				break;
+			srec_new = zone_page_state(zone, NR_SLAB_RECLAIMABLE);
+			if (srec_new + nr_pages <= slab_reclaimable)
+				break;
+		}
 
 		/*
 		 * Update nr_reclaimed by the number of slab pages we
 		 * reclaimed from this zone.
 		 */
-		sc.nr_reclaimed += slab_reclaimable -
-			zone_page_state(zone, NR_SLAB_RECLAIMABLE);
+		srec_new = zone_page_state(zone, NR_SLAB_RECLAIMABLE);
+		if (srec_new < slab_reclaimable)
+			sc.nr_reclaimed += slab_reclaimable - srec_new;
 	}
 
 	p->reclaim_state = NULL;
-- 
1.6.5.2






* Re: [PATCH 1/2] vmscan: shrink_slab() requires number of lru_pages, not page order
  2010-06-25 11:21 ` KOSAKI Motohiro
@ 2010-06-25 14:07   ` Christoph Lameter
  -1 siblings, 0 replies; 18+ messages in thread
From: Christoph Lameter @ 2010-06-25 14:07 UTC (permalink / raw)
  To: KOSAKI Motohiro
  Cc: LKML, linux-mm, Andrew Morton, Mel Gorman, Rik van Riel,
	Minchan Kim, Johannes Weiner

On Fri, 25 Jun 2010, KOSAKI Motohiro wrote:

> Fix a simple argument error. Usually 'order' is a much smaller value than
> lru_pages, so it can cause unnecessary icache dropping.

This is going to reduce the delta that is added to shrinker->nr
significantly thereby increasing the number of times that shrink_slab() is
called.

What does the "lru_pages" parameter do in shrink_slab()? Looks
like its only role is as a division factor in a complex calculation of
pages to be scanned.

do_try_to_free_pages passes 0 as "lru_pages" to shrink_slab() when trying
to do cgroup lru scans. Why is that?





* Re: [PATCH 2/2] vmscan: avoid unsafe unsigned subtraction
  2010-06-25 11:23   ` KOSAKI Motohiro
@ 2010-06-25 14:17     ` Christoph Lameter
  -1 siblings, 0 replies; 18+ messages in thread
From: Christoph Lameter @ 2010-06-25 14:17 UTC (permalink / raw)
  To: KOSAKI Motohiro
  Cc: LKML, linux-mm, Andrew Morton, Mel Gorman, Rik van Riel,
	Minchan Kim, Johannes Weiner

On Fri, 25 Jun 2010, KOSAKI Motohiro wrote:

> 'slab_reclaimable' and 'nr_pages' are unsigned, so the subtraction is
> unsafe.

Why? We are subtracting the current value of NR_SLAB_RECLAIMABLE from the
earlier one. The result can be negative (maybe due to concurrent allocations),
and then nr_reclaimed gets decremented instead. This is okay, since in that
case we have not reached our goal of reducing the number of reclaimable slab
pages on the zone.

> @@ -2622,17 +2624,21 @@ static int __zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
>  		 * Note that shrink_slab will free memory on all zones and may
>  		 * take a long time.
>  		 */
> -		while (shrink_slab(sc.nr_scanned, gfp_mask, lru_pages) &&
> -			zone_page_state(zone, NR_SLAB_RECLAIMABLE) >
> -				slab_reclaimable - nr_pages)

The comparison could be a problem here. So

			zone_page_state(zone, NR_SLAB_RECLAIMABLE) + nr_pages >
				slab_reclaimable

?




* Re: [PATCH 1/2] vmscan: shrink_slab() requires number of lru_pages, not page order
  2010-06-25 14:07   ` Christoph Lameter
@ 2010-06-27  1:03     ` Minchan Kim
  -1 siblings, 0 replies; 18+ messages in thread
From: Minchan Kim @ 2010-06-27  1:03 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: KOSAKI Motohiro, LKML, linux-mm, Andrew Morton, Mel Gorman,
	Rik van Riel, Johannes Weiner

On Fri, Jun 25, 2010 at 11:07 PM, Christoph Lameter
<cl@linux-foundation.org> wrote:
> On Fri, 25 Jun 2010, KOSAKI Motohiro wrote:
>
>> Fix a simple argument error. Usually 'order' is a much smaller value than
>> lru_pages, so it can cause unnecessary icache dropping.
>
> This is going to reduce the delta that is added to shrinker->nr
> significantly thereby increasing the number of times that shrink_slab() is
> called.
>
> What does the "lru_pages" parameter do in shrink_slab()? Looks
> like its only role is as a division factor in a complex calculation of
> pages to be scanned.

Yes. But I think it can confuse others in the same way.

Except in zone_reclaim, lru_pages has been used for balancing slab
reclaim vs. page reclaim, so the lru_pages naming is good.

But in 0ff38490, you observed a rather corner case. AFAIU from your
description, you wanted to shrink slabs until shrinking was unsuccessful
or the limit was reached, so you intentionally passed order instead of
the number of lru pages, to shrink as many slabs as possible.

So at the least, we need some comment to prevent confusion.

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9c7e57c..5523eae 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2626,6 +2626,9 @@ static int __zone_reclaim(struct zone *zone,
gfp_t gfp_mask, unsigned int order)
                 *
                 * Note that shrink_slab will free memory on all zones and may
                 * take a long time.
+                *
+                * We pass order instead of lru_pages for shrinking slab
+                * as much as possible.
                 */
                while (shrink_slab(sc.nr_scanned, gfp_mask, order) &&
                        zone_page_state(zone, NR_SLAB_RECLAIMABLE) >


>
> do_try_to_free_pages passes 0 as "lru_pages" to shrink_slab() when trying
> to do cgroup lru scans. Why is that?
>

memcg doesn't call shrink_slab.


-- 
Kind regards,
Minchan Kim



* Re: [PATCH 1/2] vmscan: shrink_slab() requires number of lru_pages, not page order
  2010-06-25 14:07   ` Christoph Lameter
@ 2010-06-28  1:32     ` KOSAKI Motohiro
  -1 siblings, 0 replies; 18+ messages in thread
From: KOSAKI Motohiro @ 2010-06-28  1:32 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: kosaki.motohiro, LKML, linux-mm, Andrew Morton, Mel Gorman,
	Rik van Riel, Minchan Kim, Johannes Weiner

> On Fri, 25 Jun 2010, KOSAKI Motohiro wrote:
> 
> > Fix a simple argument error. Usually 'order' is a much smaller value than
> > lru_pages, so it can cause unnecessary icache dropping.
> 
> This is going to reduce the delta that is added to shrinker->nr
> significantly thereby increasing the number of times that shrink_slab() is
> called.

Yup. But:

Smaller shrink -> only causes retries.
Bigger shrink  -> causes unnecessary icache/dcache drops, which can bring
                  mysteriously low performance.


> What does the "lru_pages" parameter do in shrink_slab()? Looks
> like its only role is as a division factor in a complex calculation of
> pages to be scanned.

Yes.
The scanned/lru_pages ratio defines the basic shrink_slab pressure strength.

So, if you intentionally need bigger slab pressure, passing a bigger
scanned parameter is better than the mysterious 'order' parameter.

> 
> do_try_to_free_pages passes 0 as "lru_pages" to shrink_slab() when trying
> to do cgroup lru scans. Why is that?

?
When doing cgroup lru scans, do_try_to_free_pages() doesn't call shrink_slab().





* Re: [PATCH 2/2] vmscan: avoid unsafe unsigned subtraction
  2010-06-25 14:17     ` Christoph Lameter
@ 2010-06-28  2:26       ` KOSAKI Motohiro
  -1 siblings, 0 replies; 18+ messages in thread
From: KOSAKI Motohiro @ 2010-06-28  2:26 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: kosaki.motohiro, LKML, linux-mm, Andrew Morton, Mel Gorman,
	Rik van Riel, Minchan Kim, Johannes Weiner

> On Fri, 25 Jun 2010, KOSAKI Motohiro wrote:
> 
> > 'slab_reclaimable' and 'nr_pages' are unsigned, so the subtraction is
> > unsafe.
> 
> Why? We are subtracting the current value of NR_SLAB_RECLAIMABLE from the
> earlier one. The result can be negative (maybe concurrent allocations) and
> then the nr_reclaimed gets decremented instead. This is  okay since we
> have not reached our goal then of reducing the number of reclaimable slab
> pages on the zone.

It's unsigned, and a negative result means a very big value, so

"zone_page_state(zone, NR_SLAB_RECLAIMABLE) > slab_reclaimable - nr_pages" will
be evaluated as false.

OK, your mysterious 'order' parameter (as pointed out in [1/2]) almost always
prevents this case, because passing 'order' creates very heavy slab pressure,
which avoids the negative result occurring.

But the unnatural coding style can be confusing to reviewers. Yes, it's not a
big issue, but I also don't see a reason not to fix it.


> 
> > @@ -2622,17 +2624,21 @@ static int __zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
> >  		 * Note that shrink_slab will free memory on all zones and may
> >  		 * take a long time.
> >  		 */
> > -		while (shrink_slab(sc.nr_scanned, gfp_mask, lru_pages) &&
> > -			zone_page_state(zone, NR_SLAB_RECLAIMABLE) >
> > -				slab_reclaimable - nr_pages)
> 
> The comparison could be a problem here. So
> 
> 			zone_page_state(zone, NR_SLAB_RECLAIMABLE) + nr_pages >
> 				slab_reclaimable
> 
> ?

My patch does the same thing, but it avoids the two-line comparison.
Do you mean you prefer this style? (Personally, I don't.) If so, I'll
repost this patch.








* Re: [PATCH 2/2] vmscan: avoid unsafe unsigned subtraction
  2010-06-28  2:26       ` KOSAKI Motohiro
@ 2010-06-29 15:14         ` Christoph Lameter
  -1 siblings, 0 replies; 18+ messages in thread
From: Christoph Lameter @ 2010-06-29 15:14 UTC (permalink / raw)
  To: KOSAKI Motohiro
  Cc: LKML, linux-mm, Andrew Morton, Mel Gorman, Rik van Riel,
	Minchan Kim, Johannes Weiner

On Mon, 28 Jun 2010, KOSAKI Motohiro wrote:

> It's unsigned, and a negative result means a very big value, so
>
> "zone_page_state(zone, NR_SLAB_RECLAIMABLE) > slab_reclaimable - nr_pages" will
> be evaluated as false.

There were some suggestions on how to address this later in the patch.

> OK, your mysterious 'order' parameter (as pointed out in [1/2]) almost always
> prevents this case, because passing 'order' creates very heavy slab pressure,
> which avoids the negative result occurring.
>
> But the unnatural coding style can be confusing to reviewers. Yes, it's not a
> big issue, but I also don't see a reason not to fix it.

This is not a coding issue but one of logic. The order parameter is
mysterious to me too. So is the lru_pages logic.

> > The comparison could be a problem here. So
> >
> > 			zone_page_state(zone, NR_SLAB_RECLAIMABLE) + nr_pages >
> > 				slab_reclaimable
> >
> > ?
>
> My patch does the same thing, but it avoids the two-line comparison.
> Do you mean you prefer this style? (Personally, I don't.) If so, I'll
> repost this patch.

Yes. I also do not like long cryptic names for local variables.



* Re: [PATCH 1/2] vmscan: shrink_slab() requires number of lru_pages, not page order
  2010-06-27  1:03     ` Minchan Kim
@ 2010-06-29 15:19       ` Christoph Lameter
  -1 siblings, 0 replies; 18+ messages in thread
From: Christoph Lameter @ 2010-06-29 15:19 UTC (permalink / raw)
  To: Minchan Kim
  Cc: KOSAKI Motohiro, LKML, linux-mm, Andrew Morton, Mel Gorman,
	Rik van Riel, Johannes Weiner

On Sun, 27 Jun 2010, Minchan Kim wrote:

> > What does the "lru_pages" parameter do in shrink_slab()? Looks
> > like its only role is as a division factor in a complex calculation of
> > pages to be scanned.
>
> Yes. But I think it can confuse others in the same way.

Right.

> Except in zone_reclaim, lru_pages has been used for balancing slab
> reclaim vs. page reclaim, so the lru_pages naming is good.

It is also good to make zone reclaim more deterministic by using the new
counters. So I am not at all opposed to the initial patch. Just clear things
up a bit and make sure that this does not cause regressions because of too
frequent calls to shrink_slab().

> So you intentionally passed order instead of the number of lru pages,
> to shrink as many slabs as possible.

Don't remember doing that. I suspect the parameter was renamed at some
point.


