All of lore.kernel.org
* [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0
@ 2010-07-18  1:01 ` Wang Sheng-Hui
  0 siblings, 0 replies; 55+ messages in thread
From: Wang Sheng-Hui @ 2010-07-18  1:01 UTC (permalink / raw)
  To: linux-fsdevel, viro, linux-mm, linux-ext4, kernel-janitors,
	a.gruenbacher

Hi,

The comment for struct shrinker in include/linux/mm.h says
"shrink...It should return the number of objects which remain in the
cache."
Please notice the word "remain".

In fs/mbcache.c, mb_cache_shrink_fn is registered as the shrink function:
 	static struct shrinker mb_cache_shrinker = {	
 		.shrink = mb_cache_shrink_fn,
 		.seeks = DEFAULT_SEEKS,
 	};
In mb_cache_shrink_fn, the value returned when nr_to_scan > 0 is the
number of mb_cache_entry objects counted before the shrink operation,
not the number remaining afterwards. The problem may have gone
unnoticed because mbcache's memory usage is usually low, so the effect
is not obvious, but I think we should fix the return value anyway.
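
To make the expected contract concrete, here is a minimal illustrative
sketch of a shrink callback that returns the count of objects left
*after* the scan (free_one_entry() and count_remaining_entries() are
hypothetical helpers, not part of mbcache):

	static int example_shrink_fn(int nr_to_scan, gfp_t gfp_mask)
	{
		int remaining;

		/* When asked to shrink, free up to nr_to_scan objects. */
		while (nr_to_scan-- > 0 && free_one_entry(gfp_mask))
			;
		/*
		 * Report how many objects remain after the scan, scaled
		 * the same way mb_cache_shrink_fn scales its result.
		 */
		remaining = count_remaining_entries();
		return (remaining / 100) * sysctl_vfs_cache_pressure;
	}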

The following patch is against 2.6.35-rc5. Please check it.

Signed-off-by: Wang Sheng-Hui <crosslonelyover@gmail.com>
---
 fs/mbcache.c |   10 ++++++++++
 1 files changed, 10 insertions(+), 0 deletions(-)

diff --git a/fs/mbcache.c b/fs/mbcache.c
index ec88ff3..412e7cc 100644
--- a/fs/mbcache.c
+++ b/fs/mbcache.c
@@ -228,6 +228,16 @@ mb_cache_shrink_fn(int nr_to_scan, gfp_t gfp_mask)
 		__mb_cache_entry_forget(list_entry(l, struct mb_cache_entry,
 						   e_lru_list), gfp_mask);
 	}
+	spin_lock(&mb_cache_spinlock);
+	count = 0;
+	list_for_each(l, &mb_cache_list) {
+		struct mb_cache *cache =
+			list_entry(l, struct mb_cache, c_cache_list);
+		mb_debug("cache %s (%d)", cache->c_name,
+			  atomic_read(&cache->c_entry_count));
+		count += atomic_read(&cache->c_entry_count);
+	}
+	spin_unlock(&mb_cache_spinlock);
 out:
 	return (count / 100) * sysctl_vfs_cache_pressure;
 }
-- 
1.7.1.1





-- 
Thanks and Regards,
shenghui

^ permalink raw reply related	[flat|nested] 55+ messages in thread

* Re: [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0
  2010-07-18  1:01 ` [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > Wang Sheng-Hui
  (?)
@ 2010-07-18  4:06   ` Eric Sandeen
  -1 siblings, 0 replies; 55+ messages in thread
From: Eric Sandeen @ 2010-07-18  4:06 UTC (permalink / raw)
  To: Wang Sheng-Hui
  Cc: linux-fsdevel, viro, linux-mm, linux-ext4, kernel-janitors,
	a.gruenbacher

Wang Sheng-Hui wrote:

> Hi,
>
> The comment for struct shrinker in include/linux/mm.h says
> "shrink...It should return the number of objects which remain in the
> cache."
> Please notice the word "remain".
>
> In fs/mbcache.h, mb_cache_shrink_fn is used as the shrink function:
>  	static struct shrinker mb_cache_shrinker = {	
>  		.shrink = mb_cache_shrink_fn,
>  		.seeks = DEFAULT_SEEKS,
>  	};
> In mb_cache_shrink_fn, the return value for nr_to_scan > 0 is the
> number of mb_cache_entry before shrink operation. It may because the
> memory usage for mbcache is low, so the effect is not so obvious.
> I think we'd better fix the return value issue.
>
> Following patch is against 2.6.35-rc5. Please check it.
>
>   
You are right that it's not returning the remaining entries, but I
think we can do this more simply; there is no reason to calculate it
twice. How about just moving the accounting to the end, since "count"
isn't actually used when freeing anyway... something like this?

diff --git a/fs/mbcache.c b/fs/mbcache.c
index ec88ff3..3af79de 100644
--- a/fs/mbcache.c
+++ b/fs/mbcache.c
@@ -203,19 +203,11 @@ mb_cache_shrink_fn(int nr_to_scan, gfp_t gfp_mask)
 	struct list_head *l, *ltmp;
 	int count = 0;
 
-	spin_lock(&mb_cache_spinlock);
-	list_for_each(l, &mb_cache_list) {
-		struct mb_cache *cache =
-			list_entry(l, struct mb_cache, c_cache_list);
-		mb_debug("cache %s (%d)", cache->c_name,
-			  atomic_read(&cache->c_entry_count));
-		count += atomic_read(&cache->c_entry_count);
-	}
 	mb_debug("trying to free %d entries", nr_to_scan);
-	if (nr_to_scan == 0) {
-		spin_unlock(&mb_cache_spinlock);
+	if (nr_to_scan == 0)
 		goto out;
-	}
+
+	spin_lock(&mb_cache_spinlock);
 	while (nr_to_scan-- && !list_empty(&mb_cache_lru_list)) {
 		struct mb_cache_entry *ce =
 			list_entry(mb_cache_lru_list.next,
@@ -229,6 +221,17 @@ mb_cache_shrink_fn(int nr_to_scan, gfp_t gfp_mask)
 						   e_lru_list), gfp_mask);
 	}
 out:
+	/* Count remaining entries */
+	spin_lock(&mb_cache_spinlock);
+	list_for_each(l, &mb_cache_list) {
+		struct mb_cache *cache =
+			list_entry(l, struct mb_cache, c_cache_list);
+		mb_debug("cache %s (%d)", cache->c_name,
+			  atomic_read(&cache->c_entry_count));
+		count += atomic_read(&cache->c_entry_count);
+	}
+	spin_unlock(&mb_cache_spinlock);
+
 	return (count / 100) * sysctl_vfs_cache_pressure;
 }
 


^ permalink raw reply related	[flat|nested] 55+ messages in thread

* Re: [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0
  2010-07-18  4:06   ` [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan Eric Sandeen
@ 2010-07-18  6:01     ` Christoph Hellwig
  -1 siblings, 0 replies; 55+ messages in thread
From: Christoph Hellwig @ 2010-07-18  6:01 UTC (permalink / raw)
  To: Eric Sandeen
  Cc: Wang Sheng-Hui, linux-fsdevel, viro, linux-mm, linux-ext4,
	kernel-janitors, a.gruenbacher

On Sat, Jul 17, 2010 at 11:06:32PM -0500, Eric Sandeen wrote:
> +	/* Count remaining entries */
> +	spin_lock(&mb_cache_spinlock);
> +	list_for_each(l, &mb_cache_list) {
> +		struct mb_cache *cache =
> +			list_entry(l, struct mb_cache, c_cache_list);

This should be using list_for_each_entry.
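
For illustration, the open-coded list_for_each() plus list_entry() pair
above collapses to roughly the following with list_for_each_entry()
(sketch only; it assumes a local "struct mb_cache *cache;" and the
"count" accumulator from the patch):

	spin_lock(&mb_cache_spinlock);
	list_for_each_entry(cache, &mb_cache_list, c_cache_list) {
		mb_debug("cache %s (%d)", cache->c_name,
			 atomic_read(&cache->c_entry_count));
		count += atomic_read(&cache->c_entry_count);
	}
	spin_unlock(&mb_cache_spinlock);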

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0
  2010-07-18  6:01     ` [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan Christoph Hellwig
  (?)
@ 2010-07-18  6:36       ` Wang Sheng-Hui
  -1 siblings, 0 replies; 55+ messages in thread
From: Wang Sheng-Hui @ 2010-07-18  6:36 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Eric Sandeen, linux-fsdevel, viro, linux-mm, linux-ext4,
	kernel-janitors, a.gruenbacher

On 2010-7-18 14:01, Christoph Hellwig wrote:
> On Sat, Jul 17, 2010 at 11:06:32PM -0500, Eric Sandeen wrote:
>> +	/* Count remaining entries */
>> +	spin_lock(&mb_cache_spinlock);
>> +	list_for_each(l,&mb_cache_list) {
>> +		struct mb_cache *cache =
>> +			list_entry(l, struct mb_cache, c_cache_list);
>
> This should be using list_for_each_entry.
>

I regenerated the patch. Please check it.

Signed-off-by: Wang Sheng-Hui <crosslonelyover@gmail.com>
---
  fs/mbcache.c |   22 +++++++++++-----------
  1 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/fs/mbcache.c b/fs/mbcache.c
index ec88ff3..5697d9e 100644
--- a/fs/mbcache.c
+++ b/fs/mbcache.c
@@ -201,21 +201,13 @@ mb_cache_shrink_fn(int nr_to_scan, gfp_t gfp_mask)
  {
  	LIST_HEAD(free_list);
  	struct list_head *l, *ltmp;
+	struct mb_cache *cache;
  	int count = 0;

-	spin_lock(&mb_cache_spinlock);
-	list_for_each(l, &mb_cache_list) {
-		struct mb_cache *cache =
-			list_entry(l, struct mb_cache, c_cache_list);
-		mb_debug("cache %s (%d)", cache->c_name,
-			  atomic_read(&cache->c_entry_count));
-		count += atomic_read(&cache->c_entry_count);
-	}
  	mb_debug("trying to free %d entries", nr_to_scan);
-	if (nr_to_scan == 0) {
-		spin_unlock(&mb_cache_spinlock);
+	if (nr_to_scan == 0)
  		goto out;
-	}
+
  	while (nr_to_scan-- && !list_empty(&mb_cache_lru_list)) {
  		struct mb_cache_entry *ce =
  			list_entry(mb_cache_lru_list.next,
@@ -229,6 +221,14 @@ mb_cache_shrink_fn(int nr_to_scan, gfp_t gfp_mask)
  						   e_lru_list), gfp_mask);
  	}
  out:
+	spin_lock(&mb_cache_spinlock);
+	list_for_each_entry(cache, &mb_cache_list, c_cache_list) {
+		mb_debug("cache %s (%d)", cache->c_name,
+			  atomic_read(&cache->c_entry_count));
+		count += atomic_read(&cache->c_entry_count);
+	}
+	spin_unlock(&mb_cache_spinlock);
+
  	return (count / 100) * sysctl_vfs_cache_pressure;
  }

-- 
1.7.1.1



-- 
Thanks and Regards,
shenghui

^ permalink raw reply related	[flat|nested] 55+ messages in thread

* [PATCH 1/2] mbcache: Remove unused features
  2010-07-21 14:00   ` [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0 Eric Sandeen
  (?)
@ 2010-07-19 16:19     ` Andreas Gruenbacher
  -1 siblings, 0 replies; 55+ messages in thread
From: Andreas Gruenbacher @ 2010-07-19 16:19 UTC (permalink / raw)
  To: Alexander Viro
  Cc: Eric Sandeen, hch, linux-ext4, linux-fsdevel, linux-kernel,
	linux-mm, kernel-janitors, Wang Sheng-Hui

The mbcache code was written to support a variable number of indexes,
but all the existing users use exactly one index.  Simplify the code to
support only that case.

There are also no users of the cache entry free operation, and none of
the users keep extra data in cache entries.  Remove those features as
well.
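
For illustration, a caller of the simplified single-index API ends up
looking roughly like this (a sketch based on the signatures in the diff
below; the cache name is made up, and bdev, block and hash stand for
the caller's block device, block number and hash key):

	struct mb_cache *cache;
	struct mb_cache_entry *ce;

	/* create: just a name and log2(number of hash buckets) */
	cache = mb_cache_create("example_xattr", 6);

	/* insert: a single unsigned int key replaces the old keys[] array */
	ce = mb_cache_entry_alloc(cache, GFP_NOFS);
	if (ce) {
		if (mb_cache_entry_insert(ce, bdev, block, hash))
			mb_cache_entry_free(ce);	/* e.g. already present */
		else
			mb_cache_entry_release(ce);
	}

	/* lookup: the index number argument is gone */
	ce = mb_cache_entry_find_first(cache, bdev, hash);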

Signed-off-by: Andreas Gruenbacher <agruen@suse.de>
---
 fs/ext2/xattr.c         |   12 ++--
 fs/ext3/xattr.c         |   12 ++--
 fs/ext4/xattr.c         |   12 ++--
 fs/mbcache.c            |  141 +++++++++++++---------------------------------
 include/linux/mbcache.h |   20 ++-----
 5 files changed, 60 insertions(+), 137 deletions(-)

diff --git a/fs/ext2/xattr.c b/fs/ext2/xattr.c
index 7c39157..af59915 100644
--- a/fs/ext2/xattr.c
+++ b/fs/ext2/xattr.c
@@ -838,7 +838,7 @@ ext2_xattr_cache_insert(struct buffer_head *bh)
 	ce = mb_cache_entry_alloc(ext2_xattr_cache, GFP_NOFS);
 	if (!ce)
 		return -ENOMEM;
-	error = mb_cache_entry_insert(ce, bh->b_bdev, bh->b_blocknr, &hash);
+	error = mb_cache_entry_insert(ce, bh->b_bdev, bh->b_blocknr, hash);
 	if (error) {
 		mb_cache_entry_free(ce);
 		if (error == -EBUSY) {
@@ -912,8 +912,8 @@ ext2_xattr_cache_find(struct inode *inode, struct ext2_xattr_header *header)
 		return NULL;  /* never share */
 	ea_idebug(inode, "looking for cached blocks [%x]", (int)hash);
 again:
-	ce = mb_cache_entry_find_first(ext2_xattr_cache, 0,
-				       inode->i_sb->s_bdev, hash);
+	ce = mb_cache_entry_find_first(ext2_xattr_cache, inode->i_sb->s_bdev,
+				       hash);
 	while (ce) {
 		struct buffer_head *bh;
 
@@ -945,7 +945,7 @@ again:
 			unlock_buffer(bh);
 			brelse(bh);
 		}
-		ce = mb_cache_entry_find_next(ce, 0, inode->i_sb->s_bdev, hash);
+		ce = mb_cache_entry_find_next(ce, inode->i_sb->s_bdev, hash);
 	}
 	return NULL;
 }
@@ -1021,9 +1021,7 @@ static void ext2_xattr_rehash(struct ext2_xattr_header *header,
 int __init
 init_ext2_xattr(void)
 {
-	ext2_xattr_cache = mb_cache_create("ext2_xattr", NULL,
-		sizeof(struct mb_cache_entry) +
-		sizeof(((struct mb_cache_entry *) 0)->e_indexes[0]), 1, 6);
+	ext2_xattr_cache = mb_cache_create("ext2_xattr", 6);
 	if (!ext2_xattr_cache)
 		return -ENOMEM;
 	return 0;
diff --git a/fs/ext3/xattr.c b/fs/ext3/xattr.c
index 71fb8d6..e69dc6d 100644
--- a/fs/ext3/xattr.c
+++ b/fs/ext3/xattr.c
@@ -1139,7 +1139,7 @@ ext3_xattr_cache_insert(struct buffer_head *bh)
 		ea_bdebug(bh, "out of memory");
 		return;
 	}
-	error = mb_cache_entry_insert(ce, bh->b_bdev, bh->b_blocknr, &hash);
+	error = mb_cache_entry_insert(ce, bh->b_bdev, bh->b_blocknr, hash);
 	if (error) {
 		mb_cache_entry_free(ce);
 		if (error == -EBUSY) {
@@ -1211,8 +1211,8 @@ ext3_xattr_cache_find(struct inode *inode, struct ext3_xattr_header *header,
 		return NULL;  /* never share */
 	ea_idebug(inode, "looking for cached blocks [%x]", (int)hash);
 again:
-	ce = mb_cache_entry_find_first(ext3_xattr_cache, 0,
-				       inode->i_sb->s_bdev, hash);
+	ce = mb_cache_entry_find_first(ext3_xattr_cache, inode->i_sb->s_bdev,
+				       hash);
 	while (ce) {
 		struct buffer_head *bh;
 
@@ -1237,7 +1237,7 @@ again:
 			return bh;
 		}
 		brelse(bh);
-		ce = mb_cache_entry_find_next(ce, 0, inode->i_sb->s_bdev, hash);
+		ce = mb_cache_entry_find_next(ce, inode->i_sb->s_bdev, hash);
 	}
 	return NULL;
 }
@@ -1313,9 +1313,7 @@ static void ext3_xattr_rehash(struct ext3_xattr_header *header,
 int __init
 init_ext3_xattr(void)
 {
-	ext3_xattr_cache = mb_cache_create("ext3_xattr", NULL,
-		sizeof(struct mb_cache_entry) +
-		sizeof(((struct mb_cache_entry *) 0)->e_indexes[0]), 1, 6);
+	ext3_xattr_cache = mb_cache_create("ext3_xattr", 6);
 	if (!ext3_xattr_cache)
 		return -ENOMEM;
 	return 0;
diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
index 0433800..1c93198 100644
--- a/fs/ext4/xattr.c
+++ b/fs/ext4/xattr.c
@@ -1418,7 +1418,7 @@ ext4_xattr_cache_insert(struct buffer_head *bh)
 		ea_bdebug(bh, "out of memory");
 		return;
 	}
-	error = mb_cache_entry_insert(ce, bh->b_bdev, bh->b_blocknr, &hash);
+	error = mb_cache_entry_insert(ce, bh->b_bdev, bh->b_blocknr, hash);
 	if (error) {
 		mb_cache_entry_free(ce);
 		if (error == -EBUSY) {
@@ -1490,8 +1490,8 @@ ext4_xattr_cache_find(struct inode *inode, struct ext4_xattr_header *header,
 		return NULL;  /* never share */
 	ea_idebug(inode, "looking for cached blocks [%x]", (int)hash);
 again:
-	ce = mb_cache_entry_find_first(ext4_xattr_cache, 0,
-				       inode->i_sb->s_bdev, hash);
+	ce = mb_cache_entry_find_first(ext4_xattr_cache, inode->i_sb->s_bdev,
+				       hash);
 	while (ce) {
 		struct buffer_head *bh;
 
@@ -1515,7 +1515,7 @@ again:
 			return bh;
 		}
 		brelse(bh);
-		ce = mb_cache_entry_find_next(ce, 0, inode->i_sb->s_bdev, hash);
+		ce = mb_cache_entry_find_next(ce, inode->i_sb->s_bdev, hash);
 	}
 	return NULL;
 }
@@ -1591,9 +1591,7 @@ static void ext4_xattr_rehash(struct ext4_xattr_header *header,
 int __init
 init_ext4_xattr(void)
 {
-	ext4_xattr_cache = mb_cache_create("ext4_xattr", NULL,
-		sizeof(struct mb_cache_entry) +
-		sizeof(((struct mb_cache_entry *) 0)->e_indexes[0]), 1, 6);
+	ext4_xattr_cache = mb_cache_create("ext4_xattr", 6);
 	if (!ext4_xattr_cache)
 		return -ENOMEM;
 	return 0;
diff --git a/fs/mbcache.c b/fs/mbcache.c
index e28f21b..8a2cbd8 100644
--- a/fs/mbcache.c
+++ b/fs/mbcache.c
@@ -79,15 +79,11 @@ EXPORT_SYMBOL(mb_cache_entry_find_next);
 struct mb_cache {
 	struct list_head		c_cache_list;
 	const char			*c_name;
-	struct mb_cache_op		c_op;
 	atomic_t			c_entry_count;
 	int				c_bucket_bits;
-#ifndef MB_CACHE_INDEXES_COUNT
-	int				c_indexes_count;
-#endif
-	struct kmem_cache			*c_entry_cache;
+	struct kmem_cache		*c_entry_cache;
 	struct list_head		*c_block_hash;
-	struct list_head		*c_indexes_hash[0];
+	struct list_head		*c_index_hash;
 };
 
 
@@ -101,16 +97,6 @@ static LIST_HEAD(mb_cache_list);
 static LIST_HEAD(mb_cache_lru_list);
 static DEFINE_SPINLOCK(mb_cache_spinlock);
 
-static inline int
-mb_cache_indexes(struct mb_cache *cache)
-{
-#ifdef MB_CACHE_INDEXES_COUNT
-	return MB_CACHE_INDEXES_COUNT;
-#else
-	return cache->c_indexes_count;
-#endif
-}
-
 /*
  * What the mbcache registers as to get shrunk dynamically.
  */
@@ -132,12 +118,9 @@ __mb_cache_entry_is_hashed(struct mb_cache_entry *ce)
 static void
 __mb_cache_entry_unhash(struct mb_cache_entry *ce)
 {
-	int n;
-
 	if (__mb_cache_entry_is_hashed(ce)) {
 		list_del_init(&ce->e_block_list);
-		for (n=0; n<mb_cache_indexes(ce->e_cache); n++)
-			list_del(&ce->e_indexes[n].o_list);
+		list_del(&ce->e_index.o_list);
 	}
 }
 
@@ -148,16 +131,8 @@ __mb_cache_entry_forget(struct mb_cache_entry *ce, gfp_t gfp_mask)
 	struct mb_cache *cache = ce->e_cache;
 
 	mb_assert(!(ce->e_used || ce->e_queued));
-	if (cache->c_op.free && cache->c_op.free(ce, gfp_mask)) {
-		/* free failed -- put back on the lru list
-		   for freeing later. */
-		spin_lock(&mb_cache_spinlock);
-		list_add(&ce->e_lru_list, &mb_cache_lru_list);
-		spin_unlock(&mb_cache_spinlock);
-	} else {
-		kmem_cache_free(cache->c_entry_cache, ce);
-		atomic_dec(&cache->c_entry_count);
-	}
+	kmem_cache_free(cache->c_entry_cache, ce);
+	atomic_dec(&cache->c_entry_count);
 }
 
 
@@ -243,72 +218,49 @@ out:
  * memory was available.
  *
  * @name: name of the cache (informal)
- * @cache_op: contains the callback called when freeing a cache entry
- * @entry_size: The size of a cache entry, including
- *              struct mb_cache_entry
- * @indexes_count: number of additional indexes in the cache. Must equal
- *                 MB_CACHE_INDEXES_COUNT if the number of indexes is
- *                 hardwired.
  * @bucket_bits: log2(number of hash buckets)
  */
 struct mb_cache *
-mb_cache_create(const char *name, struct mb_cache_op *cache_op,
-		size_t entry_size, int indexes_count, int bucket_bits)
+mb_cache_create(const char *name, int bucket_bits)
 {
-	int m=0, n, bucket_count = 1 << bucket_bits;
+	int n, bucket_count = 1 << bucket_bits;
 	struct mb_cache *cache = NULL;
 
-	if(entry_size < sizeof(struct mb_cache_entry) +
-	   indexes_count * sizeof(((struct mb_cache_entry *) 0)->e_indexes[0]))
-		return NULL;
-
-	cache = kmalloc(sizeof(struct mb_cache) +
-	                indexes_count * sizeof(struct list_head), GFP_KERNEL);
+	cache = kmalloc(sizeof(struct mb_cache), GFP_KERNEL);
 	if (!cache)
-		goto fail;
+		return NULL;
 	cache->c_name = name;
-	cache->c_op.free = NULL;
-	if (cache_op)
-		cache->c_op.free = cache_op->free;
 	atomic_set(&cache->c_entry_count, 0);
 	cache->c_bucket_bits = bucket_bits;
-#ifdef MB_CACHE_INDEXES_COUNT
-	mb_assert(indexes_count == MB_CACHE_INDEXES_COUNT);
-#else
-	cache->c_indexes_count = indexes_count;
-#endif
 	cache->c_block_hash = kmalloc(bucket_count * sizeof(struct list_head),
 	                              GFP_KERNEL);
 	if (!cache->c_block_hash)
 		goto fail;
 	for (n=0; n<bucket_count; n++)
 		INIT_LIST_HEAD(&cache->c_block_hash[n]);
-	for (m=0; m<indexes_count; m++) {
-		cache->c_indexes_hash[m] = kmalloc(bucket_count *
-		                                 sizeof(struct list_head),
-		                                 GFP_KERNEL);
-		if (!cache->c_indexes_hash[m])
-			goto fail;
-		for (n=0; n<bucket_count; n++)
-			INIT_LIST_HEAD(&cache->c_indexes_hash[m][n]);
-	}
-	cache->c_entry_cache = kmem_cache_create(name, entry_size, 0,
+	cache->c_index_hash = kmalloc(bucket_count * sizeof(struct list_head),
+				      GFP_KERNEL);
+	if (!cache->c_index_hash)
+		goto fail;
+	for (n=0; n<bucket_count; n++)
+		INIT_LIST_HEAD(&cache->c_index_hash[n]);
+	cache->c_entry_cache = kmem_cache_create(name,
+		sizeof(struct mb_cache_entry), 0,
 		SLAB_RECLAIM_ACCOUNT|SLAB_MEM_SPREAD, NULL);
 	if (!cache->c_entry_cache)
-		goto fail;
+		goto fail2;
 
 	spin_lock(&mb_cache_spinlock);
 	list_add(&cache->c_cache_list, &mb_cache_list);
 	spin_unlock(&mb_cache_spinlock);
 	return cache;
 
+fail2:
+	kfree(cache->c_index_hash);
+
 fail:
-	if (cache) {
-		while (--m >= 0)
-			kfree(cache->c_indexes_hash[m]);
-		kfree(cache->c_block_hash);
-		kfree(cache);
-	}
+	kfree(cache->c_block_hash);
+	kfree(cache);
 	return NULL;
 }
 
@@ -357,7 +309,6 @@ mb_cache_destroy(struct mb_cache *cache)
 {
 	LIST_HEAD(free_list);
 	struct list_head *l, *ltmp;
-	int n;
 
 	spin_lock(&mb_cache_spinlock);
 	list_for_each_safe(l, ltmp, &mb_cache_lru_list) {
@@ -384,8 +335,7 @@ mb_cache_destroy(struct mb_cache *cache)
 
 	kmem_cache_destroy(cache->c_entry_cache);
 
-	for (n=0; n < mb_cache_indexes(cache); n++)
-		kfree(cache->c_indexes_hash[n]);
+	kfree(cache->c_index_hash);
 	kfree(cache->c_block_hash);
 	kfree(cache);
 }
@@ -429,17 +379,16 @@ mb_cache_entry_alloc(struct mb_cache *cache, gfp_t gfp_flags)
  *
  * @bdev: device the cache entry belongs to
  * @block: block number
- * @keys: array of additional keys. There must be indexes_count entries
- *        in the array (as specified when creating the cache).
+ * @key: lookup key
  */
 int
 mb_cache_entry_insert(struct mb_cache_entry *ce, struct block_device *bdev,
-		      sector_t block, unsigned int keys[])
+		      sector_t block, unsigned int key)
 {
 	struct mb_cache *cache = ce->e_cache;
 	unsigned int bucket;
 	struct list_head *l;
-	int error = -EBUSY, n;
+	int error = -EBUSY;
 
 	bucket = hash_long((unsigned long)bdev + (block & 0xffffffff), 
 			   cache->c_bucket_bits);
@@ -454,12 +403,9 @@ mb_cache_entry_insert(struct mb_cache_entry *ce, struct block_device *bdev,
 	ce->e_bdev = bdev;
 	ce->e_block = block;
 	list_add(&ce->e_block_list, &cache->c_block_hash[bucket]);
-	for (n=0; n<mb_cache_indexes(cache); n++) {
-		ce->e_indexes[n].o_key = keys[n];
-		bucket = hash_long(keys[n], cache->c_bucket_bits);
-		list_add(&ce->e_indexes[n].o_list,
-			 &cache->c_indexes_hash[n][bucket]);
-	}
+	ce->e_index.o_key = key;
+	bucket = hash_long(key, cache->c_bucket_bits);
+	list_add(&ce->e_index.o_list, &cache->c_index_hash[bucket]);
 	error = 0;
 out:
 	spin_unlock(&mb_cache_spinlock);
@@ -555,13 +501,12 @@ cleanup:
 
 static struct mb_cache_entry *
 __mb_cache_entry_find(struct list_head *l, struct list_head *head,
-		      int index, struct block_device *bdev, unsigned int key)
+		      struct block_device *bdev, unsigned int key)
 {
 	while (l != head) {
 		struct mb_cache_entry *ce =
-			list_entry(l, struct mb_cache_entry,
-			           e_indexes[index].o_list);
-		if (ce->e_bdev == bdev && ce->e_indexes[index].o_key == key) {
+			list_entry(l, struct mb_cache_entry, e_index.o_list);
+		if (ce->e_bdev == bdev && ce->e_index.o_key == key) {
 			DEFINE_WAIT(wait);
 
 			if (!list_empty(&ce->e_lru_list))
@@ -603,23 +548,20 @@ __mb_cache_entry_find(struct list_head *l, struct list_head *head,
  * returned cache entry is locked for shared access ("multiple readers").
  *
  * @cache: the cache to search
- * @index: the number of the additonal index to search (0<=index<indexes_count)
  * @bdev: the device the cache entry should belong to
  * @key: the key in the index
  */
 struct mb_cache_entry *
-mb_cache_entry_find_first(struct mb_cache *cache, int index,
-			  struct block_device *bdev, unsigned int key)
+mb_cache_entry_find_first(struct mb_cache *cache, struct block_device *bdev,
+			  unsigned int key)
 {
 	unsigned int bucket = hash_long(key, cache->c_bucket_bits);
 	struct list_head *l;
 	struct mb_cache_entry *ce;
 
-	mb_assert(index < mb_cache_indexes(cache));
 	spin_lock(&mb_cache_spinlock);
-	l = cache->c_indexes_hash[index][bucket].next;
-	ce = __mb_cache_entry_find(l, &cache->c_indexes_hash[index][bucket],
-	                           index, bdev, key);
+	l = cache->c_index_hash[bucket].next;
+	ce = __mb_cache_entry_find(l, &cache->c_index_hash[bucket], bdev, key);
 	spin_unlock(&mb_cache_spinlock);
 	return ce;
 }
@@ -640,12 +582,11 @@ mb_cache_entry_find_first(struct mb_cache *cache, int index,
  * }
  *
  * @prev: The previous match
- * @index: the number of the additonal index to search (0<=index<indexes_count)
  * @bdev: the device the cache entry should belong to
  * @key: the key in the index
  */
 struct mb_cache_entry *
-mb_cache_entry_find_next(struct mb_cache_entry *prev, int index,
+mb_cache_entry_find_next(struct mb_cache_entry *prev,
 			 struct block_device *bdev, unsigned int key)
 {
 	struct mb_cache *cache = prev->e_cache;
@@ -653,11 +594,9 @@ mb_cache_entry_find_next(struct mb_cache_entry *prev, int index,
 	struct list_head *l;
 	struct mb_cache_entry *ce;
 
-	mb_assert(index < mb_cache_indexes(cache));
 	spin_lock(&mb_cache_spinlock);
-	l = prev->e_indexes[index].o_list.next;
-	ce = __mb_cache_entry_find(l, &cache->c_indexes_hash[index][bucket],
-	                           index, bdev, key);
+	l = prev->e_index.o_list.next;
+	ce = __mb_cache_entry_find(l, &cache->c_index_hash[bucket], bdev, key);
 	__mb_cache_entry_release_unlock(prev);
 	return ce;
 }
diff --git a/include/linux/mbcache.h b/include/linux/mbcache.h
index a09b84e..54cbbac 100644
--- a/include/linux/mbcache.h
+++ b/include/linux/mbcache.h
@@ -4,9 +4,6 @@
   (C) 2001 by Andreas Gruenbacher, <a.gruenbacher@computer.org>
 */
 
-/* Hardwire the number of additional indexes */
-#define MB_CACHE_INDEXES_COUNT 1
-
 struct mb_cache_entry {
 	struct list_head		e_lru_list;
 	struct mb_cache			*e_cache;
@@ -18,17 +15,12 @@ struct mb_cache_entry {
 	struct {
 		struct list_head	o_list;
 		unsigned int		o_key;
-	} e_indexes[0];
-};
-
-struct mb_cache_op {
-	int (*free)(struct mb_cache_entry *, gfp_t);
+	} e_index;
 };
 
 /* Functions on caches */
 
-struct mb_cache * mb_cache_create(const char *, struct mb_cache_op *, size_t,
-				  int, int);
+struct mb_cache *mb_cache_create(const char *, int);
 void mb_cache_shrink(struct block_device *);
 void mb_cache_destroy(struct mb_cache *);
 
@@ -36,17 +28,15 @@ void mb_cache_destroy(struct mb_cache *);
 
 struct mb_cache_entry *mb_cache_entry_alloc(struct mb_cache *, gfp_t);
 int mb_cache_entry_insert(struct mb_cache_entry *, struct block_device *,
-			  sector_t, unsigned int[]);
+			  sector_t, unsigned int);
 void mb_cache_entry_release(struct mb_cache_entry *);
 void mb_cache_entry_free(struct mb_cache_entry *);
 struct mb_cache_entry *mb_cache_entry_get(struct mb_cache *,
 					  struct block_device *,
 					  sector_t);
-#if !defined(MB_CACHE_INDEXES_COUNT) || (MB_CACHE_INDEXES_COUNT > 0)
-struct mb_cache_entry *mb_cache_entry_find_first(struct mb_cache *cache, int,
+struct mb_cache_entry *mb_cache_entry_find_first(struct mb_cache *cache,
 						 struct block_device *, 
 						 unsigned int);
-struct mb_cache_entry *mb_cache_entry_find_next(struct mb_cache_entry *, int,
+struct mb_cache_entry *mb_cache_entry_find_next(struct mb_cache_entry *,
 						struct block_device *, 
 						unsigned int);
-#endif
-- 
1.7.2.rc3.57.g77b5b



^ permalink raw reply related	[flat|nested] 55+ messages in thread

* [PATCH 1/2] mbcache: Remove unused features
@ 2010-07-19 16:19     ` Andreas Gruenbacher
  0 siblings, 0 replies; 55+ messages in thread
From: Andreas Gruenbacher @ 2010-07-19 16:19 UTC (permalink / raw)
  To: Alexander Viro
  Cc: Eric Sandeen, hch, linux-ext4, linux-fsdevel, linux-kernel,
	linux-mm, kernel-janitors, Wang Sheng-Hui

The mbcache code was written to support a variable number of indexes,
but all the existing users use exactly one index.  Simplify to code to
support only that case.

There are also no users of the cache entry free operation, and none of
the users keep extra data in cache entries.  Remove those features as
well.

Signed-off-by: Andreas Gruenbacher <agruen@suse.de>
---
 fs/ext2/xattr.c         |   12 ++--
 fs/ext3/xattr.c         |   12 ++--
 fs/ext4/xattr.c         |   12 ++--
 fs/mbcache.c            |  141 +++++++++++++---------------------------------
 include/linux/mbcache.h |   20 ++-----
 5 files changed, 60 insertions(+), 137 deletions(-)

diff --git a/fs/ext2/xattr.c b/fs/ext2/xattr.c
index 7c39157..af59915 100644
--- a/fs/ext2/xattr.c
+++ b/fs/ext2/xattr.c
@@ -838,7 +838,7 @@ ext2_xattr_cache_insert(struct buffer_head *bh)
 	ce = mb_cache_entry_alloc(ext2_xattr_cache, GFP_NOFS);
 	if (!ce)
 		return -ENOMEM;
-	error = mb_cache_entry_insert(ce, bh->b_bdev, bh->b_blocknr, &hash);
+	error = mb_cache_entry_insert(ce, bh->b_bdev, bh->b_blocknr, hash);
 	if (error) {
 		mb_cache_entry_free(ce);
 		if (error == -EBUSY) {
@@ -912,8 +912,8 @@ ext2_xattr_cache_find(struct inode *inode, struct ext2_xattr_header *header)
 		return NULL;  /* never share */
 	ea_idebug(inode, "looking for cached blocks [%x]", (int)hash);
 again:
-	ce = mb_cache_entry_find_first(ext2_xattr_cache, 0,
-				       inode->i_sb->s_bdev, hash);
+	ce = mb_cache_entry_find_first(ext2_xattr_cache, inode->i_sb->s_bdev,
+				       hash);
 	while (ce) {
 		struct buffer_head *bh;
 
@@ -945,7 +945,7 @@ again:
 			unlock_buffer(bh);
 			brelse(bh);
 		}
-		ce = mb_cache_entry_find_next(ce, 0, inode->i_sb->s_bdev, hash);
+		ce = mb_cache_entry_find_next(ce, inode->i_sb->s_bdev, hash);
 	}
 	return NULL;
 }
@@ -1021,9 +1021,7 @@ static void ext2_xattr_rehash(struct ext2_xattr_header *header,
 int __init
 init_ext2_xattr(void)
 {
-	ext2_xattr_cache = mb_cache_create("ext2_xattr", NULL,
-		sizeof(struct mb_cache_entry) +
-		sizeof(((struct mb_cache_entry *) 0)->e_indexes[0]), 1, 6);
+	ext2_xattr_cache = mb_cache_create("ext2_xattr", 6);
 	if (!ext2_xattr_cache)
 		return -ENOMEM;
 	return 0;
diff --git a/fs/ext3/xattr.c b/fs/ext3/xattr.c
index 71fb8d6..e69dc6d 100644
--- a/fs/ext3/xattr.c
+++ b/fs/ext3/xattr.c
@@ -1139,7 +1139,7 @@ ext3_xattr_cache_insert(struct buffer_head *bh)
 		ea_bdebug(bh, "out of memory");
 		return;
 	}
-	error = mb_cache_entry_insert(ce, bh->b_bdev, bh->b_blocknr, &hash);
+	error = mb_cache_entry_insert(ce, bh->b_bdev, bh->b_blocknr, hash);
 	if (error) {
 		mb_cache_entry_free(ce);
 		if (error == -EBUSY) {
@@ -1211,8 +1211,8 @@ ext3_xattr_cache_find(struct inode *inode, struct ext3_xattr_header *header,
 		return NULL;  /* never share */
 	ea_idebug(inode, "looking for cached blocks [%x]", (int)hash);
 again:
-	ce = mb_cache_entry_find_first(ext3_xattr_cache, 0,
-				       inode->i_sb->s_bdev, hash);
+	ce = mb_cache_entry_find_first(ext3_xattr_cache, inode->i_sb->s_bdev,
+				       hash);
 	while (ce) {
 		struct buffer_head *bh;
 
@@ -1237,7 +1237,7 @@ again:
 			return bh;
 		}
 		brelse(bh);
-		ce = mb_cache_entry_find_next(ce, 0, inode->i_sb->s_bdev, hash);
+		ce = mb_cache_entry_find_next(ce, inode->i_sb->s_bdev, hash);
 	}
 	return NULL;
 }
@@ -1313,9 +1313,7 @@ static void ext3_xattr_rehash(struct ext3_xattr_header *header,
 int __init
 init_ext3_xattr(void)
 {
-	ext3_xattr_cache = mb_cache_create("ext3_xattr", NULL,
-		sizeof(struct mb_cache_entry) +
-		sizeof(((struct mb_cache_entry *) 0)->e_indexes[0]), 1, 6);
+	ext3_xattr_cache = mb_cache_create("ext3_xattr", 6);
 	if (!ext3_xattr_cache)
 		return -ENOMEM;
 	return 0;
diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
index 0433800..1c93198 100644
--- a/fs/ext4/xattr.c
+++ b/fs/ext4/xattr.c
@@ -1418,7 +1418,7 @@ ext4_xattr_cache_insert(struct buffer_head *bh)
 		ea_bdebug(bh, "out of memory");
 		return;
 	}
-	error = mb_cache_entry_insert(ce, bh->b_bdev, bh->b_blocknr, &hash);
+	error = mb_cache_entry_insert(ce, bh->b_bdev, bh->b_blocknr, hash);
 	if (error) {
 		mb_cache_entry_free(ce);
 		if (error == -EBUSY) {
@@ -1490,8 +1490,8 @@ ext4_xattr_cache_find(struct inode *inode, struct ext4_xattr_header *header,
 		return NULL;  /* never share */
 	ea_idebug(inode, "looking for cached blocks [%x]", (int)hash);
 again:
-	ce = mb_cache_entry_find_first(ext4_xattr_cache, 0,
-				       inode->i_sb->s_bdev, hash);
+	ce = mb_cache_entry_find_first(ext4_xattr_cache, inode->i_sb->s_bdev,
+				       hash);
 	while (ce) {
 		struct buffer_head *bh;
 
@@ -1515,7 +1515,7 @@ again:
 			return bh;
 		}
 		brelse(bh);
-		ce = mb_cache_entry_find_next(ce, 0, inode->i_sb->s_bdev, hash);
+		ce = mb_cache_entry_find_next(ce, inode->i_sb->s_bdev, hash);
 	}
 	return NULL;
 }
@@ -1591,9 +1591,7 @@ static void ext4_xattr_rehash(struct ext4_xattr_header *header,
 int __init
 init_ext4_xattr(void)
 {
-	ext4_xattr_cache = mb_cache_create("ext4_xattr", NULL,
-		sizeof(struct mb_cache_entry) +
-		sizeof(((struct mb_cache_entry *) 0)->e_indexes[0]), 1, 6);
+	ext4_xattr_cache = mb_cache_create("ext4_xattr", 6);
 	if (!ext4_xattr_cache)
 		return -ENOMEM;
 	return 0;
diff --git a/fs/mbcache.c b/fs/mbcache.c
index e28f21b..8a2cbd8 100644
--- a/fs/mbcache.c
+++ b/fs/mbcache.c
@@ -79,15 +79,11 @@ EXPORT_SYMBOL(mb_cache_entry_find_next);
 struct mb_cache {
 	struct list_head		c_cache_list;
 	const char			*c_name;
-	struct mb_cache_op		c_op;
 	atomic_t			c_entry_count;
 	int				c_bucket_bits;
-#ifndef MB_CACHE_INDEXES_COUNT
-	int				c_indexes_count;
-#endif
-	struct kmem_cache			*c_entry_cache;
+	struct kmem_cache		*c_entry_cache;
 	struct list_head		*c_block_hash;
-	struct list_head		*c_indexes_hash[0];
+	struct list_head		*c_index_hash;
 };
 
 
@@ -101,16 +97,6 @@ static LIST_HEAD(mb_cache_list);
 static LIST_HEAD(mb_cache_lru_list);
 static DEFINE_SPINLOCK(mb_cache_spinlock);
 
-static inline int
-mb_cache_indexes(struct mb_cache *cache)
-{
-#ifdef MB_CACHE_INDEXES_COUNT
-	return MB_CACHE_INDEXES_COUNT;
-#else
-	return cache->c_indexes_count;
-#endif
-}
-
 /*
  * What the mbcache registers as to get shrunk dynamically.
  */
@@ -132,12 +118,9 @@ __mb_cache_entry_is_hashed(struct mb_cache_entry *ce)
 static void
 __mb_cache_entry_unhash(struct mb_cache_entry *ce)
 {
-	int n;
-
 	if (__mb_cache_entry_is_hashed(ce)) {
 		list_del_init(&ce->e_block_list);
-		for (n=0; n<mb_cache_indexes(ce->e_cache); n++)
-			list_del(&ce->e_indexes[n].o_list);
+		list_del(&ce->e_index.o_list);
 	}
 }
 
@@ -148,16 +131,8 @@ __mb_cache_entry_forget(struct mb_cache_entry *ce, gfp_t gfp_mask)
 	struct mb_cache *cache = ce->e_cache;
 
 	mb_assert(!(ce->e_used || ce->e_queued));
-	if (cache->c_op.free && cache->c_op.free(ce, gfp_mask)) {
-		/* free failed -- put back on the lru list
-		   for freeing later. */
-		spin_lock(&mb_cache_spinlock);
-		list_add(&ce->e_lru_list, &mb_cache_lru_list);
-		spin_unlock(&mb_cache_spinlock);
-	} else {
-		kmem_cache_free(cache->c_entry_cache, ce);
-		atomic_dec(&cache->c_entry_count);
-	}
+	kmem_cache_free(cache->c_entry_cache, ce);
+	atomic_dec(&cache->c_entry_count);
 }
 
 
@@ -243,72 +218,49 @@ out:
  * memory was available.
  *
  * @name: name of the cache (informal)
- * @cache_op: contains the callback called when freeing a cache entry
- * @entry_size: The size of a cache entry, including
- *              struct mb_cache_entry
- * @indexes_count: number of additional indexes in the cache. Must equal
- *                 MB_CACHE_INDEXES_COUNT if the number of indexes is
- *                 hardwired.
  * @bucket_bits: log2(number of hash buckets)
  */
 struct mb_cache *
-mb_cache_create(const char *name, struct mb_cache_op *cache_op,
-		size_t entry_size, int indexes_count, int bucket_bits)
+mb_cache_create(const char *name, int bucket_bits)
 {
-	int m=0, n, bucket_count = 1 << bucket_bits;
+	int n, bucket_count = 1 << bucket_bits;
 	struct mb_cache *cache = NULL;
 
-	if(entry_size < sizeof(struct mb_cache_entry) +
-	   indexes_count * sizeof(((struct mb_cache_entry *) 0)->e_indexes[0]))
-		return NULL;
-
-	cache = kmalloc(sizeof(struct mb_cache) +
-	                indexes_count * sizeof(struct list_head), GFP_KERNEL);
+	cache = kmalloc(sizeof(struct mb_cache), GFP_KERNEL);
 	if (!cache)
-		goto fail;
+		return NULL;
 	cache->c_name = name;
-	cache->c_op.free = NULL;
-	if (cache_op)
-		cache->c_op.free = cache_op->free;
 	atomic_set(&cache->c_entry_count, 0);
 	cache->c_bucket_bits = bucket_bits;
-#ifdef MB_CACHE_INDEXES_COUNT
-	mb_assert(indexes_count == MB_CACHE_INDEXES_COUNT);
-#else
-	cache->c_indexes_count = indexes_count;
-#endif
 	cache->c_block_hash = kmalloc(bucket_count * sizeof(struct list_head),
 	                              GFP_KERNEL);
 	if (!cache->c_block_hash)
 		goto fail;
 	for (n=0; n<bucket_count; n++)
 		INIT_LIST_HEAD(&cache->c_block_hash[n]);
-	for (m=0; m<indexes_count; m++) {
-		cache->c_indexes_hash[m] = kmalloc(bucket_count *
-		                                 sizeof(struct list_head),
-		                                 GFP_KERNEL);
-		if (!cache->c_indexes_hash[m])
-			goto fail;
-		for (n=0; n<bucket_count; n++)
-			INIT_LIST_HEAD(&cache->c_indexes_hash[m][n]);
-	}
-	cache->c_entry_cache = kmem_cache_create(name, entry_size, 0,
+	cache->c_index_hash = kmalloc(bucket_count * sizeof(struct list_head),
+				      GFP_KERNEL);
+	if (!cache->c_index_hash)
+		goto fail;
+	for (n=0; n<bucket_count; n++)
+		INIT_LIST_HEAD(&cache->c_index_hash[n]);
+	cache->c_entry_cache = kmem_cache_create(name,
+		sizeof(struct mb_cache_entry), 0,
 		SLAB_RECLAIM_ACCOUNT|SLAB_MEM_SPREAD, NULL);
 	if (!cache->c_entry_cache)
-		goto fail;
+		goto fail2;
 
 	spin_lock(&mb_cache_spinlock);
 	list_add(&cache->c_cache_list, &mb_cache_list);
 	spin_unlock(&mb_cache_spinlock);
 	return cache;
 
+fail2:
+	kfree(cache->c_index_hash);
+
 fail:
-	if (cache) {
-		while (--m >= 0)
-			kfree(cache->c_indexes_hash[m]);
-		kfree(cache->c_block_hash);
-		kfree(cache);
-	}
+	kfree(cache->c_block_hash);
+	kfree(cache);
 	return NULL;
 }
 
@@ -357,7 +309,6 @@ mb_cache_destroy(struct mb_cache *cache)
 {
 	LIST_HEAD(free_list);
 	struct list_head *l, *ltmp;
-	int n;
 
 	spin_lock(&mb_cache_spinlock);
 	list_for_each_safe(l, ltmp, &mb_cache_lru_list) {
@@ -384,8 +335,7 @@ mb_cache_destroy(struct mb_cache *cache)
 
 	kmem_cache_destroy(cache->c_entry_cache);
 
-	for (n=0; n < mb_cache_indexes(cache); n++)
-		kfree(cache->c_indexes_hash[n]);
+	kfree(cache->c_index_hash);
 	kfree(cache->c_block_hash);
 	kfree(cache);
 }
@@ -429,17 +379,16 @@ mb_cache_entry_alloc(struct mb_cache *cache, gfp_t gfp_flags)
  *
  * @bdev: device the cache entry belongs to
  * @block: block number
- * @keys: array of additional keys. There must be indexes_count entries
- *        in the array (as specified when creating the cache).
+ * @key: lookup key
  */
 int
 mb_cache_entry_insert(struct mb_cache_entry *ce, struct block_device *bdev,
-		      sector_t block, unsigned int keys[])
+		      sector_t block, unsigned int key)
 {
 	struct mb_cache *cache = ce->e_cache;
 	unsigned int bucket;
 	struct list_head *l;
-	int error = -EBUSY, n;
+	int error = -EBUSY;
 
 	bucket = hash_long((unsigned long)bdev + (block & 0xffffffff), 
 			   cache->c_bucket_bits);
@@ -454,12 +403,9 @@ mb_cache_entry_insert(struct mb_cache_entry *ce, struct block_device *bdev,
 	ce->e_bdev = bdev;
 	ce->e_block = block;
 	list_add(&ce->e_block_list, &cache->c_block_hash[bucket]);
-	for (n=0; n<mb_cache_indexes(cache); n++) {
-		ce->e_indexes[n].o_key = keys[n];
-		bucket = hash_long(keys[n], cache->c_bucket_bits);
-		list_add(&ce->e_indexes[n].o_list,
-			 &cache->c_indexes_hash[n][bucket]);
-	}
+	ce->e_index.o_key = key;
+	bucket = hash_long(key, cache->c_bucket_bits);
+	list_add(&ce->e_index.o_list, &cache->c_index_hash[bucket]);
 	error = 0;
 out:
 	spin_unlock(&mb_cache_spinlock);
@@ -555,13 +501,12 @@ cleanup:
 
 static struct mb_cache_entry *
 __mb_cache_entry_find(struct list_head *l, struct list_head *head,
-		      int index, struct block_device *bdev, unsigned int key)
+		      struct block_device *bdev, unsigned int key)
 {
 	while (l != head) {
 		struct mb_cache_entry *ce =
-			list_entry(l, struct mb_cache_entry,
-			           e_indexes[index].o_list);
-		if (ce->e_bdev == bdev && ce->e_indexes[index].o_key == key) {
+			list_entry(l, struct mb_cache_entry, e_index.o_list);
+		if (ce->e_bdev == bdev && ce->e_index.o_key == key) {
 			DEFINE_WAIT(wait);
 
 			if (!list_empty(&ce->e_lru_list))
@@ -603,23 +548,20 @@ __mb_cache_entry_find(struct list_head *l, struct list_head *head,
  * returned cache entry is locked for shared access ("multiple readers").
  *
  * @cache: the cache to search
- * @index: the number of the additonal index to search (0<=index<indexes_count)
  * @bdev: the device the cache entry should belong to
  * @key: the key in the index
  */
 struct mb_cache_entry *
-mb_cache_entry_find_first(struct mb_cache *cache, int index,
-			  struct block_device *bdev, unsigned int key)
+mb_cache_entry_find_first(struct mb_cache *cache, struct block_device *bdev,
+			  unsigned int key)
 {
 	unsigned int bucket = hash_long(key, cache->c_bucket_bits);
 	struct list_head *l;
 	struct mb_cache_entry *ce;
 
-	mb_assert(index < mb_cache_indexes(cache));
 	spin_lock(&mb_cache_spinlock);
-	l = cache->c_indexes_hash[index][bucket].next;
-	ce = __mb_cache_entry_find(l, &cache->c_indexes_hash[index][bucket],
-	                           index, bdev, key);
+	l = cache->c_index_hash[bucket].next;
+	ce = __mb_cache_entry_find(l, &cache->c_index_hash[bucket], bdev, key);
 	spin_unlock(&mb_cache_spinlock);
 	return ce;
 }
@@ -640,12 +582,11 @@ mb_cache_entry_find_first(struct mb_cache *cache, int index,
  * }
  *
  * @prev: The previous match
- * @index: the number of the additonal index to search (0<=index<indexes_count)
  * @bdev: the device the cache entry should belong to
  * @key: the key in the index
  */
 struct mb_cache_entry *
-mb_cache_entry_find_next(struct mb_cache_entry *prev, int index,
+mb_cache_entry_find_next(struct mb_cache_entry *prev,
 			 struct block_device *bdev, unsigned int key)
 {
 	struct mb_cache *cache = prev->e_cache;
@@ -653,11 +594,9 @@ mb_cache_entry_find_next(struct mb_cache_entry *prev, int index,
 	struct list_head *l;
 	struct mb_cache_entry *ce;
 
-	mb_assert(index < mb_cache_indexes(cache));
 	spin_lock(&mb_cache_spinlock);
-	l = prev->e_indexes[index].o_list.next;
-	ce = __mb_cache_entry_find(l, &cache->c_indexes_hash[index][bucket],
-	                           index, bdev, key);
+	l = prev->e_index.o_list.next;
+	ce = __mb_cache_entry_find(l, &cache->c_index_hash[bucket], bdev, key);
 	__mb_cache_entry_release_unlock(prev);
 	return ce;
 }
diff --git a/include/linux/mbcache.h b/include/linux/mbcache.h
index a09b84e..54cbbac 100644
--- a/include/linux/mbcache.h
+++ b/include/linux/mbcache.h
@@ -4,9 +4,6 @@
   (C) 2001 by Andreas Gruenbacher, <a.gruenbacher@computer.org>
 */
 
-/* Hardwire the number of additional indexes */
-#define MB_CACHE_INDEXES_COUNT 1
-
 struct mb_cache_entry {
 	struct list_head		e_lru_list;
 	struct mb_cache			*e_cache;
@@ -18,17 +15,12 @@ struct mb_cache_entry {
 	struct {
 		struct list_head	o_list;
 		unsigned int		o_key;
-	} e_indexes[0];
-};
-
-struct mb_cache_op {
-	int (*free)(struct mb_cache_entry *, gfp_t);
+	} e_index;
 };
 
 /* Functions on caches */
 
-struct mb_cache * mb_cache_create(const char *, struct mb_cache_op *, size_t,
-				  int, int);
+struct mb_cache *mb_cache_create(const char *, int);
 void mb_cache_shrink(struct block_device *);
 void mb_cache_destroy(struct mb_cache *);
 
@@ -36,17 +28,15 @@ void mb_cache_destroy(struct mb_cache *);
 
 struct mb_cache_entry *mb_cache_entry_alloc(struct mb_cache *, gfp_t);
 int mb_cache_entry_insert(struct mb_cache_entry *, struct block_device *,
-			  sector_t, unsigned int[]);
+			  sector_t, unsigned int);
 void mb_cache_entry_release(struct mb_cache_entry *);
 void mb_cache_entry_free(struct mb_cache_entry *);
 struct mb_cache_entry *mb_cache_entry_get(struct mb_cache *,
 					  struct block_device *,
 					  sector_t);
-#if !defined(MB_CACHE_INDEXES_COUNT) || (MB_CACHE_INDEXES_COUNT > 0)
-struct mb_cache_entry *mb_cache_entry_find_first(struct mb_cache *cache, int,
+struct mb_cache_entry *mb_cache_entry_find_first(struct mb_cache *cache,
 						 struct block_device *, 
 						 unsigned int);
-struct mb_cache_entry *mb_cache_entry_find_next(struct mb_cache_entry *, int,
+struct mb_cache_entry *mb_cache_entry_find_next(struct mb_cache_entry *,
 						struct block_device *, 
 						unsigned int);
-#endif
-- 
1.7.2.rc3.57.g77b5b


--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply related	[flat|nested] 55+ messages in thread

* [PATCH 1/2] mbcache: Remove unused features
@ 2010-07-19 16:19     ` Andreas Gruenbacher
  0 siblings, 0 replies; 55+ messages in thread
From: Andreas Gruenbacher @ 2010-07-19 16:19 UTC (permalink / raw)
  To: Alexander Viro
  Cc: Eric Sandeen, hch, linux-ext4, linux-fsdevel, linux-kernel,
	linux-mm, kernel-janitors, Wang Sheng-Hui

The mbcache code was written to support a variable number of indexes,
but all the existing users use exactly one index.  Simplify the code to
support only that case.

There are also no users of the cache entry free operation, and none of
the users keep extra data in cache entries.  Remove those features as
well.
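
For the callers this mostly means dropping the index argument and
passing a single key instead of a keys[] array; roughly (an
illustrative sketch distilled from the ext2/ext3/ext4 hunks below, not
extra code added by this patch):

	/* before */
	ce = mb_cache_entry_find_first(cache, 0, bdev, hash);
	error = mb_cache_entry_insert(ce, bdev, block, &hash);

	/* after */
	ce = mb_cache_entry_find_first(cache, bdev, hash);
	error = mb_cache_entry_insert(ce, bdev, block, hash);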

Signed-off-by: Andreas Gruenbacher <agruen@suse.de>
---
 fs/ext2/xattr.c         |   12 ++--
 fs/ext3/xattr.c         |   12 ++--
 fs/ext4/xattr.c         |   12 ++--
 fs/mbcache.c            |  141 +++++++++++++---------------------------------
 include/linux/mbcache.h |   20 ++-----
 5 files changed, 60 insertions(+), 137 deletions(-)

diff --git a/fs/ext2/xattr.c b/fs/ext2/xattr.c
index 7c39157..af59915 100644
--- a/fs/ext2/xattr.c
+++ b/fs/ext2/xattr.c
@@ -838,7 +838,7 @@ ext2_xattr_cache_insert(struct buffer_head *bh)
 	ce = mb_cache_entry_alloc(ext2_xattr_cache, GFP_NOFS);
 	if (!ce)
 		return -ENOMEM;
-	error = mb_cache_entry_insert(ce, bh->b_bdev, bh->b_blocknr, &hash);
+	error = mb_cache_entry_insert(ce, bh->b_bdev, bh->b_blocknr, hash);
 	if (error) {
 		mb_cache_entry_free(ce);
 		if (error == -EBUSY) {
@@ -912,8 +912,8 @@ ext2_xattr_cache_find(struct inode *inode, struct ext2_xattr_header *header)
 		return NULL;  /* never share */
 	ea_idebug(inode, "looking for cached blocks [%x]", (int)hash);
 again:
-	ce = mb_cache_entry_find_first(ext2_xattr_cache, 0,
-				       inode->i_sb->s_bdev, hash);
+	ce = mb_cache_entry_find_first(ext2_xattr_cache, inode->i_sb->s_bdev,
+				       hash);
 	while (ce) {
 		struct buffer_head *bh;
 
@@ -945,7 +945,7 @@ again:
 			unlock_buffer(bh);
 			brelse(bh);
 		}
-		ce = mb_cache_entry_find_next(ce, 0, inode->i_sb->s_bdev, hash);
+		ce = mb_cache_entry_find_next(ce, inode->i_sb->s_bdev, hash);
 	}
 	return NULL;
 }
@@ -1021,9 +1021,7 @@ static void ext2_xattr_rehash(struct ext2_xattr_header *header,
 int __init
 init_ext2_xattr(void)
 {
-	ext2_xattr_cache = mb_cache_create("ext2_xattr", NULL,
-		sizeof(struct mb_cache_entry) +
-		sizeof(((struct mb_cache_entry *) 0)->e_indexes[0]), 1, 6);
+	ext2_xattr_cache = mb_cache_create("ext2_xattr", 6);
 	if (!ext2_xattr_cache)
 		return -ENOMEM;
 	return 0;
diff --git a/fs/ext3/xattr.c b/fs/ext3/xattr.c
index 71fb8d6..e69dc6d 100644
--- a/fs/ext3/xattr.c
+++ b/fs/ext3/xattr.c
@@ -1139,7 +1139,7 @@ ext3_xattr_cache_insert(struct buffer_head *bh)
 		ea_bdebug(bh, "out of memory");
 		return;
 	}
-	error = mb_cache_entry_insert(ce, bh->b_bdev, bh->b_blocknr, &hash);
+	error = mb_cache_entry_insert(ce, bh->b_bdev, bh->b_blocknr, hash);
 	if (error) {
 		mb_cache_entry_free(ce);
 		if (error == -EBUSY) {
@@ -1211,8 +1211,8 @@ ext3_xattr_cache_find(struct inode *inode, struct ext3_xattr_header *header,
 		return NULL;  /* never share */
 	ea_idebug(inode, "looking for cached blocks [%x]", (int)hash);
 again:
-	ce = mb_cache_entry_find_first(ext3_xattr_cache, 0,
-				       inode->i_sb->s_bdev, hash);
+	ce = mb_cache_entry_find_first(ext3_xattr_cache, inode->i_sb->s_bdev,
+				       hash);
 	while (ce) {
 		struct buffer_head *bh;
 
@@ -1237,7 +1237,7 @@ again:
 			return bh;
 		}
 		brelse(bh);
-		ce = mb_cache_entry_find_next(ce, 0, inode->i_sb->s_bdev, hash);
+		ce = mb_cache_entry_find_next(ce, inode->i_sb->s_bdev, hash);
 	}
 	return NULL;
 }
@@ -1313,9 +1313,7 @@ static void ext3_xattr_rehash(struct ext3_xattr_header *header,
 int __init
 init_ext3_xattr(void)
 {
-	ext3_xattr_cache = mb_cache_create("ext3_xattr", NULL,
-		sizeof(struct mb_cache_entry) +
-		sizeof(((struct mb_cache_entry *) 0)->e_indexes[0]), 1, 6);
+	ext3_xattr_cache = mb_cache_create("ext3_xattr", 6);
 	if (!ext3_xattr_cache)
 		return -ENOMEM;
 	return 0;
diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
index 0433800..1c93198 100644
--- a/fs/ext4/xattr.c
+++ b/fs/ext4/xattr.c
@@ -1418,7 +1418,7 @@ ext4_xattr_cache_insert(struct buffer_head *bh)
 		ea_bdebug(bh, "out of memory");
 		return;
 	}
-	error = mb_cache_entry_insert(ce, bh->b_bdev, bh->b_blocknr, &hash);
+	error = mb_cache_entry_insert(ce, bh->b_bdev, bh->b_blocknr, hash);
 	if (error) {
 		mb_cache_entry_free(ce);
 		if (error == -EBUSY) {
@@ -1490,8 +1490,8 @@ ext4_xattr_cache_find(struct inode *inode, struct ext4_xattr_header *header,
 		return NULL;  /* never share */
 	ea_idebug(inode, "looking for cached blocks [%x]", (int)hash);
 again:
-	ce = mb_cache_entry_find_first(ext4_xattr_cache, 0,
-				       inode->i_sb->s_bdev, hash);
+	ce = mb_cache_entry_find_first(ext4_xattr_cache, inode->i_sb->s_bdev,
+				       hash);
 	while (ce) {
 		struct buffer_head *bh;
 
@@ -1515,7 +1515,7 @@ again:
 			return bh;
 		}
 		brelse(bh);
-		ce = mb_cache_entry_find_next(ce, 0, inode->i_sb->s_bdev, hash);
+		ce = mb_cache_entry_find_next(ce, inode->i_sb->s_bdev, hash);
 	}
 	return NULL;
 }
@@ -1591,9 +1591,7 @@ static void ext4_xattr_rehash(struct ext4_xattr_header *header,
 int __init
 init_ext4_xattr(void)
 {
-	ext4_xattr_cache = mb_cache_create("ext4_xattr", NULL,
-		sizeof(struct mb_cache_entry) +
-		sizeof(((struct mb_cache_entry *) 0)->e_indexes[0]), 1, 6);
+	ext4_xattr_cache = mb_cache_create("ext4_xattr", 6);
 	if (!ext4_xattr_cache)
 		return -ENOMEM;
 	return 0;
diff --git a/fs/mbcache.c b/fs/mbcache.c
index e28f21b..8a2cbd8 100644
--- a/fs/mbcache.c
+++ b/fs/mbcache.c
@@ -79,15 +79,11 @@ EXPORT_SYMBOL(mb_cache_entry_find_next);
 struct mb_cache {
 	struct list_head		c_cache_list;
 	const char			*c_name;
-	struct mb_cache_op		c_op;
 	atomic_t			c_entry_count;
 	int				c_bucket_bits;
-#ifndef MB_CACHE_INDEXES_COUNT
-	int				c_indexes_count;
-#endif
-	struct kmem_cache			*c_entry_cache;
+	struct kmem_cache		*c_entry_cache;
 	struct list_head		*c_block_hash;
-	struct list_head		*c_indexes_hash[0];
+	struct list_head		*c_index_hash;
 };
 
 
@@ -101,16 +97,6 @@ static LIST_HEAD(mb_cache_list);
 static LIST_HEAD(mb_cache_lru_list);
 static DEFINE_SPINLOCK(mb_cache_spinlock);
 
-static inline int
-mb_cache_indexes(struct mb_cache *cache)
-{
-#ifdef MB_CACHE_INDEXES_COUNT
-	return MB_CACHE_INDEXES_COUNT;
-#else
-	return cache->c_indexes_count;
-#endif
-}
-
 /*
  * What the mbcache registers as to get shrunk dynamically.
  */
@@ -132,12 +118,9 @@ __mb_cache_entry_is_hashed(struct mb_cache_entry *ce)
 static void
 __mb_cache_entry_unhash(struct mb_cache_entry *ce)
 {
-	int n;
-
 	if (__mb_cache_entry_is_hashed(ce)) {
 		list_del_init(&ce->e_block_list);
-		for (n=0; n<mb_cache_indexes(ce->e_cache); n++)
-			list_del(&ce->e_indexes[n].o_list);
+		list_del(&ce->e_index.o_list);
 	}
 }
 
@@ -148,16 +131,8 @@ __mb_cache_entry_forget(struct mb_cache_entry *ce, gfp_t gfp_mask)
 	struct mb_cache *cache = ce->e_cache;
 
 	mb_assert(!(ce->e_used || ce->e_queued));
-	if (cache->c_op.free && cache->c_op.free(ce, gfp_mask)) {
-		/* free failed -- put back on the lru list
-		   for freeing later. */
-		spin_lock(&mb_cache_spinlock);
-		list_add(&ce->e_lru_list, &mb_cache_lru_list);
-		spin_unlock(&mb_cache_spinlock);
-	} else {
-		kmem_cache_free(cache->c_entry_cache, ce);
-		atomic_dec(&cache->c_entry_count);
-	}
+	kmem_cache_free(cache->c_entry_cache, ce);
+	atomic_dec(&cache->c_entry_count);
 }
 
 
@@ -243,72 +218,49 @@ out:
  * memory was available.
  *
  * @name: name of the cache (informal)
- * @cache_op: contains the callback called when freeing a cache entry
- * @entry_size: The size of a cache entry, including
- *              struct mb_cache_entry
- * @indexes_count: number of additional indexes in the cache. Must equal
- *                 MB_CACHE_INDEXES_COUNT if the number of indexes is
- *                 hardwired.
  * @bucket_bits: log2(number of hash buckets)
  */
 struct mb_cache *
-mb_cache_create(const char *name, struct mb_cache_op *cache_op,
-		size_t entry_size, int indexes_count, int bucket_bits)
+mb_cache_create(const char *name, int bucket_bits)
 {
-	int m=0, n, bucket_count = 1 << bucket_bits;
+	int n, bucket_count = 1 << bucket_bits;
 	struct mb_cache *cache = NULL;
 
-	if(entry_size < sizeof(struct mb_cache_entry) +
-	   indexes_count * sizeof(((struct mb_cache_entry *) 0)->e_indexes[0]))
-		return NULL;
-
-	cache = kmalloc(sizeof(struct mb_cache) +
-	                indexes_count * sizeof(struct list_head), GFP_KERNEL);
+	cache = kmalloc(sizeof(struct mb_cache), GFP_KERNEL);
 	if (!cache)
-		goto fail;
+		return NULL;
 	cache->c_name = name;
-	cache->c_op.free = NULL;
-	if (cache_op)
-		cache->c_op.free = cache_op->free;
 	atomic_set(&cache->c_entry_count, 0);
 	cache->c_bucket_bits = bucket_bits;
-#ifdef MB_CACHE_INDEXES_COUNT
-	mb_assert(indexes_count == MB_CACHE_INDEXES_COUNT);
-#else
-	cache->c_indexes_count = indexes_count;
-#endif
 	cache->c_block_hash = kmalloc(bucket_count * sizeof(struct list_head),
 	                              GFP_KERNEL);
 	if (!cache->c_block_hash)
 		goto fail;
 	for (n=0; n<bucket_count; n++)
 		INIT_LIST_HEAD(&cache->c_block_hash[n]);
-	for (m=0; m<indexes_count; m++) {
-		cache->c_indexes_hash[m] = kmalloc(bucket_count *
-		                                 sizeof(struct list_head),
-		                                 GFP_KERNEL);
-		if (!cache->c_indexes_hash[m])
-			goto fail;
-		for (n=0; n<bucket_count; n++)
-			INIT_LIST_HEAD(&cache->c_indexes_hash[m][n]);
-	}
-	cache->c_entry_cache = kmem_cache_create(name, entry_size, 0,
+	cache->c_index_hash = kmalloc(bucket_count * sizeof(struct list_head),
+				      GFP_KERNEL);
+	if (!cache->c_index_hash)
+		goto fail;
+	for (n=0; n<bucket_count; n++)
+		INIT_LIST_HEAD(&cache->c_index_hash[n]);
+	cache->c_entry_cache = kmem_cache_create(name,
+		sizeof(struct mb_cache_entry), 0,
 		SLAB_RECLAIM_ACCOUNT|SLAB_MEM_SPREAD, NULL);
 	if (!cache->c_entry_cache)
-		goto fail;
+		goto fail2;
 
 	spin_lock(&mb_cache_spinlock);
 	list_add(&cache->c_cache_list, &mb_cache_list);
 	spin_unlock(&mb_cache_spinlock);
 	return cache;
 
+fail2:
+	kfree(cache->c_index_hash);
+
 fail:
-	if (cache) {
-		while (--m >= 0)
-			kfree(cache->c_indexes_hash[m]);
-		kfree(cache->c_block_hash);
-		kfree(cache);
-	}
+	kfree(cache->c_block_hash);
+	kfree(cache);
 	return NULL;
 }
 
@@ -357,7 +309,6 @@ mb_cache_destroy(struct mb_cache *cache)
 {
 	LIST_HEAD(free_list);
 	struct list_head *l, *ltmp;
-	int n;
 
 	spin_lock(&mb_cache_spinlock);
 	list_for_each_safe(l, ltmp, &mb_cache_lru_list) {
@@ -384,8 +335,7 @@ mb_cache_destroy(struct mb_cache *cache)
 
 	kmem_cache_destroy(cache->c_entry_cache);
 
-	for (n=0; n < mb_cache_indexes(cache); n++)
-		kfree(cache->c_indexes_hash[n]);
+	kfree(cache->c_index_hash);
 	kfree(cache->c_block_hash);
 	kfree(cache);
 }
@@ -429,17 +379,16 @@ mb_cache_entry_alloc(struct mb_cache *cache, gfp_t gfp_flags)
  *
  * @bdev: device the cache entry belongs to
  * @block: block number
- * @keys: array of additional keys. There must be indexes_count entries
- *        in the array (as specified when creating the cache).
+ * @key: lookup key
  */
 int
 mb_cache_entry_insert(struct mb_cache_entry *ce, struct block_device *bdev,
-		      sector_t block, unsigned int keys[])
+		      sector_t block, unsigned int key)
 {
 	struct mb_cache *cache = ce->e_cache;
 	unsigned int bucket;
 	struct list_head *l;
-	int error = -EBUSY, n;
+	int error = -EBUSY;
 
 	bucket = hash_long((unsigned long)bdev + (block & 0xffffffff), 
 			   cache->c_bucket_bits);
@@ -454,12 +403,9 @@ mb_cache_entry_insert(struct mb_cache_entry *ce, struct block_device *bdev,
 	ce->e_bdev = bdev;
 	ce->e_block = block;
 	list_add(&ce->e_block_list, &cache->c_block_hash[bucket]);
-	for (n=0; n<mb_cache_indexes(cache); n++) {
-		ce->e_indexes[n].o_key = keys[n];
-		bucket = hash_long(keys[n], cache->c_bucket_bits);
-		list_add(&ce->e_indexes[n].o_list,
-			 &cache->c_indexes_hash[n][bucket]);
-	}
+	ce->e_index.o_key = key;
+	bucket = hash_long(key, cache->c_bucket_bits);
+	list_add(&ce->e_index.o_list, &cache->c_index_hash[bucket]);
 	error = 0;
 out:
 	spin_unlock(&mb_cache_spinlock);
@@ -555,13 +501,12 @@ cleanup:
 
 static struct mb_cache_entry *
 __mb_cache_entry_find(struct list_head *l, struct list_head *head,
-		      int index, struct block_device *bdev, unsigned int key)
+		      struct block_device *bdev, unsigned int key)
 {
 	while (l != head) {
 		struct mb_cache_entry *ce =
-			list_entry(l, struct mb_cache_entry,
-			           e_indexes[index].o_list);
-		if (ce->e_bdev == bdev && ce->e_indexes[index].o_key == key) {
+			list_entry(l, struct mb_cache_entry, e_index.o_list);
+		if (ce->e_bdev == bdev && ce->e_index.o_key == key) {
 			DEFINE_WAIT(wait);
 
 			if (!list_empty(&ce->e_lru_list))
@@ -603,23 +548,20 @@ __mb_cache_entry_find(struct list_head *l, struct list_head *head,
  * returned cache entry is locked for shared access ("multiple readers").
  *
  * @cache: the cache to search
- * @index: the number of the additonal index to search (0<=index<indexes_count)
  * @bdev: the device the cache entry should belong to
  * @key: the key in the index
  */
 struct mb_cache_entry *
-mb_cache_entry_find_first(struct mb_cache *cache, int index,
-			  struct block_device *bdev, unsigned int key)
+mb_cache_entry_find_first(struct mb_cache *cache, struct block_device *bdev,
+			  unsigned int key)
 {
 	unsigned int bucket = hash_long(key, cache->c_bucket_bits);
 	struct list_head *l;
 	struct mb_cache_entry *ce;
 
-	mb_assert(index < mb_cache_indexes(cache));
 	spin_lock(&mb_cache_spinlock);
-	l = cache->c_indexes_hash[index][bucket].next;
-	ce = __mb_cache_entry_find(l, &cache->c_indexes_hash[index][bucket],
-	                           index, bdev, key);
+	l = cache->c_index_hash[bucket].next;
+	ce = __mb_cache_entry_find(l, &cache->c_index_hash[bucket], bdev, key);
 	spin_unlock(&mb_cache_spinlock);
 	return ce;
 }
@@ -640,12 +582,11 @@ mb_cache_entry_find_first(struct mb_cache *cache, int index,
  * }
  *
  * @prev: The previous match
- * @index: the number of the additonal index to search (0<=index<indexes_count)
  * @bdev: the device the cache entry should belong to
  * @key: the key in the index
  */
 struct mb_cache_entry *
-mb_cache_entry_find_next(struct mb_cache_entry *prev, int index,
+mb_cache_entry_find_next(struct mb_cache_entry *prev,
 			 struct block_device *bdev, unsigned int key)
 {
 	struct mb_cache *cache = prev->e_cache;
@@ -653,11 +594,9 @@ mb_cache_entry_find_next(struct mb_cache_entry *prev, int index,
 	struct list_head *l;
 	struct mb_cache_entry *ce;
 
-	mb_assert(index < mb_cache_indexes(cache));
 	spin_lock(&mb_cache_spinlock);
-	l = prev->e_indexes[index].o_list.next;
-	ce = __mb_cache_entry_find(l, &cache->c_indexes_hash[index][bucket],
-	                           index, bdev, key);
+	l = prev->e_index.o_list.next;
+	ce = __mb_cache_entry_find(l, &cache->c_index_hash[bucket], bdev, key);
 	__mb_cache_entry_release_unlock(prev);
 	return ce;
 }
diff --git a/include/linux/mbcache.h b/include/linux/mbcache.h
index a09b84e..54cbbac 100644
--- a/include/linux/mbcache.h
+++ b/include/linux/mbcache.h
@@ -4,9 +4,6 @@
   (C) 2001 by Andreas Gruenbacher, <a.gruenbacher@computer.org>
 */
 
-/* Hardwire the number of additional indexes */
-#define MB_CACHE_INDEXES_COUNT 1
-
 struct mb_cache_entry {
 	struct list_head		e_lru_list;
 	struct mb_cache			*e_cache;
@@ -18,17 +15,12 @@ struct mb_cache_entry {
 	struct {
 		struct list_head	o_list;
 		unsigned int		o_key;
-	} e_indexes[0];
-};
-
-struct mb_cache_op {
-	int (*free)(struct mb_cache_entry *, gfp_t);
+	} e_index;
 };
 
 /* Functions on caches */
 
-struct mb_cache * mb_cache_create(const char *, struct mb_cache_op *, size_t,
-				  int, int);
+struct mb_cache *mb_cache_create(const char *, int);
 void mb_cache_shrink(struct block_device *);
 void mb_cache_destroy(struct mb_cache *);
 
@@ -36,17 +28,15 @@ void mb_cache_destroy(struct mb_cache *);
 
 struct mb_cache_entry *mb_cache_entry_alloc(struct mb_cache *, gfp_t);
 int mb_cache_entry_insert(struct mb_cache_entry *, struct block_device *,
-			  sector_t, unsigned int[]);
+			  sector_t, unsigned int);
 void mb_cache_entry_release(struct mb_cache_entry *);
 void mb_cache_entry_free(struct mb_cache_entry *);
 struct mb_cache_entry *mb_cache_entry_get(struct mb_cache *,
 					  struct block_device *,
 					  sector_t);
-#if !defined(MB_CACHE_INDEXES_COUNT) || (MB_CACHE_INDEXES_COUNT > 0)
-struct mb_cache_entry *mb_cache_entry_find_first(struct mb_cache *cache, int,
+struct mb_cache_entry *mb_cache_entry_find_first(struct mb_cache *cache,
 						 struct block_device *, 
 						 unsigned int);
-struct mb_cache_entry *mb_cache_entry_find_next(struct mb_cache_entry *, int,
+struct mb_cache_entry *mb_cache_entry_find_next(struct mb_cache_entry *,
 						struct block_device *, 
 						unsigned int);
-#endif
-- 
1.7.2.rc3.57.g77b5b



^ permalink raw reply related	[flat|nested] 55+ messages in thread

* Re: [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0
  2010-07-18  6:36       ` [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan Wang Sheng-Hui
@ 2010-07-19 18:39         ` Andreas Gruenbacher
  -1 siblings, 0 replies; 55+ messages in thread
From: Andreas Gruenbacher @ 2010-07-19 18:39 UTC (permalink / raw)
  To: Wang Sheng-Hui
  Cc: Christoph Hellwig, Eric Sandeen, linux-fsdevel, viro, linux-mm,
	linux-ext4, kernel-janitors

On Sunday 18 July 2010 08:36:59 Wang Sheng-Hui wrote:
> I regenerated the patch. Please check it.

The logic for calculating how many objects to free is still wrong: 
mb_cache_shrink_fn returns the number of entries scaled by 
sysctl_vfs_cache_pressure / 100.  It should also scale nr_to_scan by the 
inverse of that.  The sysctl_vfs_cache_pressure == 0 case (never scale) may 
require special attention.

See dcache_shrinker() in fs/dcache.c.
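
Roughly, the suggested inverse scaling would look something like this
(only a sketch of the idea, not code from the patch, and assuming
sysctl_vfs_cache_pressure is non-zero):

	if (sysctl_vfs_cache_pressure)
		nr_to_scan = nr_to_scan * 100 / sysctl_vfs_cache_pressure;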

Thanks,
Andreas

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0
@ 2010-07-19 18:39         ` Andreas Gruenbacher
  0 siblings, 0 replies; 55+ messages in thread
From: Andreas Gruenbacher @ 2010-07-19 18:39 UTC (permalink / raw)
  To: Wang Sheng-Hui
  Cc: Christoph Hellwig, Eric Sandeen, linux-fsdevel, viro, linux-mm,
	linux-ext4, kernel-janitors

On Sunday 18 July 2010 08:36:59 Wang Sheng-Hui wrote:
> I regenerated the patch. Please check it.

The logic for calculating how many objects to free is still wrong: 
mb_cache_shrink_fn returns the number of entries scaled by 
sysctl_vfs_cache_pressure / 100.  It should also scale nr_to_scan by the 
inverse of that.  The sysctl_vfs_cache_pressure == 0 case (never scale) may 
require special attention.

See dcache_shrinker() in fs/dcache.c.

Thanks,
Andreas

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0
  2010-07-18  6:36       ` [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan Wang Sheng-Hui
@ 2010-07-19 18:40         ` Andreas Gruenbacher
  -1 siblings, 0 replies; 55+ messages in thread
From: Andreas Gruenbacher @ 2010-07-19 18:40 UTC (permalink / raw)
  To: Wang Sheng-Hui
  Cc: Christoph Hellwig, Eric Sandeen, linux-fsdevel, viro, linux-mm,
	linux-ext4, kernel-janitors, a.gruenbacher

On Sunday 18 July 2010 08:36:59 Wang Sheng-Hui wrote:
> On 2010-7-18 14:01, Christoph Hellwig wrote:
> > This should be using list_for_each_entry.

It would make sense to change this throughout the whole file.

Thanks,
Andreas

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0
@ 2010-07-19 18:40         ` Andreas Gruenbacher
  0 siblings, 0 replies; 55+ messages in thread
From: Andreas Gruenbacher @ 2010-07-19 18:40 UTC (permalink / raw)
  To: Wang Sheng-Hui
  Cc: Christoph Hellwig, Eric Sandeen, linux-fsdevel, viro, linux-mm,
	linux-ext4, kernel-janitors, a.gruenbacher

On Sunday 18 July 2010 08:36:59 Wang Sheng-Hui wrote:
> On 2010-7-18 14:01, Christoph Hellwig wrote:
> > This should be using list_for_each_entry.

It would make sense to change this throughout the whole file.

Thanks,
Andreas

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0
  2010-07-19 18:39         ` Andreas Gruenbacher
@ 2010-07-20  1:02           ` shenghui
  -1 siblings, 0 replies; 55+ messages in thread
From: shenghui @ 2010-07-20  1:02 UTC (permalink / raw)
  To: Andreas Gruenbacher
  Cc: Christoph Hellwig, Eric Sandeen, linux-fsdevel, viro, linux-mm,
	linux-ext4, kernel-janitors

2010/7/20 Andreas Gruenbacher <agruen@suse.de>:
> On Sunday 18 July 2010 08:36:59 Wang Sheng-Hui wrote:
>> I regenerated the patch. Please check it.
>
> The logic for calculating how many objects to free is still wrong:
> mb_cache_shrink_fn returns the number of entries scaled by
> sysctl_vfs_cache_pressure / 100.  It should also scale nr_to_scan by the
> inverse of that.  The sysctl_vfs_cache_pressure == 0 case (never scale) may
> require special attention.
>
> See dcache_shrinker() in fs/dcache.c.
>
> Thanks,
> Andreas
>

Sorry, I haven't found any special attention paid to the
sysctl_vfs_cache_pressure == 0 case, or any scaling of
nr_to_scan, in fs/dcache.c

static int shrink_dcache_memory(int nr, gfp_t gfp_mask)
{
        if (nr) {
                if (!(gfp_mask & __GFP_FS))
                        return -1;
                prune_dcache(nr);
        }
        return (dentry_stat.nr_unused / 100) * sysctl_vfs_cache_pressure;
}




-- 


Thanks and Best Regards,
shenghui

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan >
@ 2010-07-20  1:02           ` shenghui
  0 siblings, 0 replies; 55+ messages in thread
From: shenghui @ 2010-07-20  1:02 UTC (permalink / raw)
  To: Andreas Gruenbacher
  Cc: Christoph Hellwig, Eric Sandeen, linux-fsdevel, viro, linux-mm,
	linux-ext4, kernel-janitors

2010/7/20 Andreas Gruenbacher <agruen@suse.de>:
> On Sunday 18 July 2010 08:36:59 Wang Sheng-Hui wrote:
>> I regenerated the patch. Please check it.
>
> The logic for calculating how many objects to free is still wrong:
> mb_cache_shrink_fn returns the number of entries scaled by
> sysctl_vfs_cache_pressure / 100.  It should also scale nr_to_scan by the
> inverse of that.  The sysctl_vfs_cache_pressure = 0 case (never scale) may
> require special attention.
>
> See dcache_shrinker() in fs/dcache.c.
>
> Thanks,
> Andreas
>

Sorry, I haven't found any special attention paid to the
sysctl_vfs_cache_pressure == 0 case, or any scaling of
nr_to_scan, in fs/dcache.c

static int shrink_dcache_memory(int nr, gfp_t gfp_mask)
{
        if (nr) {
                if (!(gfp_mask & __GFP_FS))
                        return -1;
                prune_dcache(nr);
        }
        return (dentry_stat.nr_unused / 100) * sysctl_vfs_cache_pressure;
}




-- 


Thanks and Best Regards,
shenghui

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0
  2010-07-20  1:02           ` [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > shenghui
  (?)
@ 2010-07-20  1:04             ` shenghui
  -1 siblings, 0 replies; 55+ messages in thread
From: shenghui @ 2010-07-20  1:04 UTC (permalink / raw)
  To: Andreas Gruenbacher
  Cc: Christoph Hellwig, Eric Sandeen, linux-fsdevel, viro, linux-mm,
	linux-ext4, kernel-janitors

2010/7/20 shenghui <crosslonelyover@gmail.com>:
> 2010/7/20 Andreas Gruenbacher <agruen@suse.de>:
>
> Sorry, I haven't found any special attention paid to the
> sysctl_vfs_cache_pressure == 0 case, or any scaling of
> nr_to_scan, in fs/dcache.c
>
> static int shrink_dcache_memory(int nr, gfp_t gfp_mask)
> {
>         if (nr) {
>                 if (!(gfp_mask & __GFP_FS))
>                         return -1;
>                 prune_dcache(nr);
>         }
>         return (dentry_stat.nr_unused / 100) * sysctl_vfs_cache_pressure;
> }
>

And for the sysctl_vfs_cache_pressure == 0 case, it's
enough to return 0 to indicate that no cache entries are left.



-- 


Thanks and Best Regards,
shenghui
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan >
@ 2010-07-20  1:04             ` shenghui
  0 siblings, 0 replies; 55+ messages in thread
From: shenghui @ 2010-07-20  1:04 UTC (permalink / raw)
  To: Andreas Gruenbacher
  Cc: Christoph Hellwig, Eric Sandeen, linux-fsdevel, viro, linux-mm,
	linux-ext4, kernel-janitors

2010/7/20 shenghui <crosslonelyover@gmail.com>:
> 2010/7/20 Andreas Gruenbacher <agruen@suse.de>:
>
> Sorry, I haven't found any special attention paid to the
> sysctl_vfs_cache_pressure == 0 case, or any scaling of
> nr_to_scan, in fs/dcache.c
>
> static int shrink_dcache_memory(int nr, gfp_t gfp_mask)
> {
>         if (nr) {
>                 if (!(gfp_mask & __GFP_FS))
>                         return -1;
>                 prune_dcache(nr);
>         }
>         return (dentry_stat.nr_unused / 100) * sysctl_vfs_cache_pressure;
> }
>

And for the sysctl_vfs_cache_pressure == 0 case, it's
enough to return 0 to indicate that no cache entries are left.



-- 


Thanks and Best Regards,
shenghui

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0
@ 2010-07-20  1:04             ` shenghui
  0 siblings, 0 replies; 55+ messages in thread
From: shenghui @ 2010-07-20  1:04 UTC (permalink / raw)
  To: Andreas Gruenbacher
  Cc: Christoph Hellwig, Eric Sandeen, linux-fsdevel, viro, linux-mm,
	linux-ext4, kernel-janitors

2010/7/20 shenghui <crosslonelyover@gmail.com>:
> 2010/7/20 Andreas Gruenbacher <agruen@suse.de>:
>
> Sorry, I haven't found any special attention paid to the
> sysctl_vfs_cache_pressure == 0 case, or any scaling of
> nr_to_scan, in fs/dcache.c
>
> static int shrink_dcache_memory(int nr, gfp_t gfp_mask)
> {
>         if (nr) {
>                 if (!(gfp_mask & __GFP_FS))
>                         return -1;
>                 prune_dcache(nr);
>         }
>         return (dentry_stat.nr_unused / 100) * sysctl_vfs_cache_pressure;
> }
>

And for the sysctl_vfs_cache_pressure == 0 case, it's
enough to return 0 to indicate that no cache entries are left.



-- 


Thanks and Best Regards,
shenghui

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0
  2010-07-19 18:39         ` Andreas Gruenbacher
@ 2010-07-20 15:13           ` Eric Sandeen
  -1 siblings, 0 replies; 55+ messages in thread
From: Eric Sandeen @ 2010-07-20 15:13 UTC (permalink / raw)
  To: Andreas Gruenbacher
  Cc: Wang Sheng-Hui, Christoph Hellwig, linux-fsdevel, viro, linux-mm,
	linux-ext4, kernel-janitors

Andreas Gruenbacher wrote:
> On Sunday 18 July 2010 08:36:59 Wang Sheng-Hui wrote:
>> I regenerated the patch. Please check it.
> 
> The logic for calculating how many objects to free is still wrong: 
> mb_cache_shrink_fn returns the number of entries scaled by 
> sysctl_vfs_cache_pressure / 100.  It should also scale nr_to_scan by the 
> inverse of that.  The sysctl_vfs_cache_pressure == 0 case (never scale) may 
> require special attention.

I don't think that's right:

vfs_cache_pressure
------------------

Controls the tendency of the kernel to reclaim the memory which is used for
caching of directory and inode objects.

At the default value of vfs_cache_pressure=100 the kernel will attempt to
reclaim dentries and inodes at a "fair" rate with respect to pagecache and
swapcache reclaim.  Decreasing vfs_cache_pressure causes the kernel to prefer
to retain dentry and inode caches. When vfs_cache_pressure=0, the kernel will
never reclaim dentries and inodes due to memory pressure and this can easily
lead to out-of-memory conditions. Increasing vfs_cache_pressure beyond 100
causes the kernel to prefer to reclaim dentries and inodes.


0 means "never reclaim," it doesn't mean "never scale."

As for nr_to_scan, after the first call, the shrinker has a scaled
version of the total count, so the requested nr_to_scan on the
next call is already scaled based on that.
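
As a toy model of that feedback (illustrative only -- this is not the
real shrink_slab() arithmetic, and batch_from() is a made-up stand-in
for the VM's heuristic):

	/* asking with nr_to_scan == 0 just reports the pre-scaled count */
	remaining = mb_cache_shrink_fn(0, gfp_mask);	/* (count / 100) * pressure */
	/* any batch derived from "remaining" therefore already reflects
	   sysctl_vfs_cache_pressure */
	nr = batch_from(remaining);
	mb_cache_shrink_fn(nr, gfp_mask);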

I think the logic in the mbcache shrinker is fine.

-Eric

> See dcache_shrinker() in fs/dcache.c.



> 
> Thanks,
> Andreas

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan
@ 2010-07-20 15:13           ` Eric Sandeen
  0 siblings, 0 replies; 55+ messages in thread
From: Eric Sandeen @ 2010-07-20 15:13 UTC (permalink / raw)
  To: Andreas Gruenbacher
  Cc: Wang Sheng-Hui, Christoph Hellwig, linux-fsdevel, viro, linux-mm,
	linux-ext4, kernel-janitors

Andreas Gruenbacher wrote:
> On Sunday 18 July 2010 08:36:59 Wang Sheng-Hui wrote:
>> I regenerated the patch. Please check it.
> 
> The logic for calculating how many objects to free is still wrong: 
> mb_cache_shrink_fn returns the number of entries scaled by 
> sysctl_vfs_cache_pressure / 100.  It should also scale nr_to_scan by the 
inverse of that.  The sysctl_vfs_cache_pressure == 0 case (never scale) may 
> require special attention.

I don't think that's right:

vfs_cache_pressure
------------------

Controls the tendency of the kernel to reclaim the memory which is used for
caching of directory and inode objects.

At the default value of vfs_cache_pressure=100 the kernel will attempt to
reclaim dentries and inodes at a "fair" rate with respect to pagecache and
swapcache reclaim.  Decreasing vfs_cache_pressure causes the kernel to prefer
to retain dentry and inode caches. When vfs_cache_pressure=0, the kernel will
never reclaim dentries and inodes due to memory pressure and this can easily
lead to out-of-memory conditions. Increasing vfs_cache_pressure beyond 100
causes the kernel to prefer to reclaim dentries and inodes.


0 means "never reclaim," it doesn't mean "never scale."

As for nr_to_scan, after the first call, the shrinker has a scaled
version of the total count, so the requested nr_to_scan on the
next call is already scaled based on that.

I think the logic in the mbcache shrinker is fine.

-Eric

> See dcache_shrinker() in fs/dcache.c.



> 
> Thanks,
> Andreas


^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0
  2010-07-20 15:13           ` [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan Eric Sandeen
  (?)
@ 2010-07-20 16:34             ` Andreas Gruenbacher
  -1 siblings, 0 replies; 55+ messages in thread
From: Andreas Gruenbacher @ 2010-07-20 16:34 UTC (permalink / raw)
  To: Eric Sandeen, Wang Sheng-Hui
  Cc: Christoph Hellwig, linux-fsdevel, viro, linux-mm, linux-ext4,
	kernel-janitors

On Tuesday 20 July 2010 17:13:56 Eric Sandeen wrote:
> I think the logic in the mbcache shrinker is fine.

Indeed yes, I got confused, sorry.

On Sunday 18 July 2010 08:36:59 Wang Sheng-Hui wrote:
> I regenerated the patch. Please check it.

Sheng-Hui, the mb_cache_lru_list list is now accessed without holding 
mb_cache_spinlock.

Thanks,
Andreas

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0
@ 2010-07-20 16:34             ` Andreas Gruenbacher
  0 siblings, 0 replies; 55+ messages in thread
From: Andreas Gruenbacher @ 2010-07-20 16:34 UTC (permalink / raw)
  To: Eric Sandeen, Wang Sheng-Hui
  Cc: Christoph Hellwig, linux-fsdevel, viro, linux-mm, linux-ext4,
	kernel-janitors

On Tuesday 20 July 2010 17:13:56 Eric Sandeen wrote:
> I think the logic in the mbcache shrinker is fine.

Indeed yes, I got confused, sorry.

On Sunday 18 July 2010 08:36:59 Wang Sheng-Hui wrote:
> I regenerated the patch. Please check it.

Sheng-Hui, the mb_cache_lru_list list is now accessed without holding 
mb_cache_spinlock.

Thanks,
Andreas

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0
@ 2010-07-20 16:34             ` Andreas Gruenbacher
  0 siblings, 0 replies; 55+ messages in thread
From: Andreas Gruenbacher @ 2010-07-20 16:34 UTC (permalink / raw)
  To: Eric Sandeen, Wang Sheng-Hui
  Cc: Christoph Hellwig, linux-fsdevel, viro, linux-mm, linux-ext4,
	kernel-janitors

On Tuesday 20 July 2010 17:13:56 Eric Sandeen wrote:
> I think the logic in the mbcache shrinker is fine.

Indeed yes, I got confused, sorry.

On Sunday 18 July 2010 08:36:59 Wang Sheng-Hui wrote:
> I regenerated the patch. Please check it.

Sheng-Hui, the mb_cache_lru_list list is now accessed without holding 
mb_cache_spinlock.

Thanks,
Andreas

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 55+ messages in thread

* [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0
@ 2010-07-21 10:53 ` Wang Sheng-Hui
  0 siblings, 0 replies; 55+ messages in thread
From: Wang Sheng-Hui @ 2010-07-21 10:53 UTC (permalink / raw)
  To: sandeen, agruen, hch, linux-ext4, linux-fsdevel, linux-kernel,
	linux-mm, kernel-janitors

Sorry. Regenerated the patch, please check it.
I wrapped most of the code in a single pair of spinlock ops for 2 reasons:
1) taking the spinlock twice seems time consuming
2) a single pair of spinlock ops keeps "count"
   consistent for the shrink operation; 2 pairs may
   let other processes create new cache entries in between.



Signed-off-by: Wang Sheng-Hui <crosslonelyover@gmail.com>
---
 fs/mbcache.c |   24 ++++++++++++------------
 1 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/fs/mbcache.c b/fs/mbcache.c
index ec88ff3..ee57aa3 100644
--- a/fs/mbcache.c
+++ b/fs/mbcache.c
@@ -201,21 +201,15 @@ mb_cache_shrink_fn(int nr_to_scan, gfp_t gfp_mask)
 {
 	LIST_HEAD(free_list);
 	struct list_head *l, *ltmp;
+	struct mb_cache *cache;
 	int count = 0;
 
-	spin_lock(&mb_cache_spinlock);
-	list_for_each(l, &mb_cache_list) {
-		struct mb_cache *cache =
-			list_entry(l, struct mb_cache, c_cache_list);
-		mb_debug("cache %s (%d)", cache->c_name,
-			  atomic_read(&cache->c_entry_count));
-		count += atomic_read(&cache->c_entry_count);
-	}
 	mb_debug("trying to free %d entries", nr_to_scan);
-	if (nr_to_scan == 0) {
-		spin_unlock(&mb_cache_spinlock);
+
+	spin_lock(&mb_cache_spinlock);
+	if (nr_to_scan == 0)
 		goto out;
-	}
+
 	while (nr_to_scan-- && !list_empty(&mb_cache_lru_list)) {
 		struct mb_cache_entry *ce =
 			list_entry(mb_cache_lru_list.next,
@@ -223,12 +217,18 @@ mb_cache_shrink_fn(int nr_to_scan, gfp_t gfp_mask)
 		list_move_tail(&ce->e_lru_list, &free_list);
 		__mb_cache_entry_unhash(ce);
 	}
-	spin_unlock(&mb_cache_spinlock);
 	list_for_each_safe(l, ltmp, &free_list) {
 		__mb_cache_entry_forget(list_entry(l, struct mb_cache_entry,
 						   e_lru_list), gfp_mask);
 	}
 out:
+	list_for_each_entry(cache, &mb_cache_list, c_cache_list) {
+		mb_debug("cache %s (%d)", cache->c_name,
+			  atomic_read(&cache->c_entry_count));
+		count += atomic_read(&cache->c_entry_count);
+	}
+	spin_unlock(&mb_cache_spinlock);
+
 	return (count / 100) * sysctl_vfs_cache_pressure;
 }
 
-- 
1.6.3.3


^ permalink raw reply related	[flat|nested] 55+ messages in thread

* [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0
@ 2010-07-21 10:53 ` Wang Sheng-Hui
  0 siblings, 0 replies; 55+ messages in thread
From: Wang Sheng-Hui @ 2010-07-21 10:53 UTC (permalink / raw)
  To: sandeen, agruen, hch, linux-ext4, linux-fsdevel, linux-kernel,
	linux-mm, kerne

Sorry. Regenerated the patch, please check it.
I wrapped most of the code in a single pair of spinlock ops for 2 reasons:
1) taking the spinlock twice seems time consuming
2) a single pair of spinlock ops keeps "count"
   consistent for the shrink operation; 2 pairs may
   let other processes create new cache entries in between.



Signed-off-by: Wang Sheng-Hui <crosslonelyover@gmail.com>
---
 fs/mbcache.c |   24 ++++++++++++------------
 1 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/fs/mbcache.c b/fs/mbcache.c
index ec88ff3..ee57aa3 100644
--- a/fs/mbcache.c
+++ b/fs/mbcache.c
@@ -201,21 +201,15 @@ mb_cache_shrink_fn(int nr_to_scan, gfp_t gfp_mask)
 {
 	LIST_HEAD(free_list);
 	struct list_head *l, *ltmp;
+	struct mb_cache *cache;
 	int count = 0;
 
-	spin_lock(&mb_cache_spinlock);
-	list_for_each(l, &mb_cache_list) {
-		struct mb_cache *cache =
-			list_entry(l, struct mb_cache, c_cache_list);
-		mb_debug("cache %s (%d)", cache->c_name,
-			  atomic_read(&cache->c_entry_count));
-		count += atomic_read(&cache->c_entry_count);
-	}
 	mb_debug("trying to free %d entries", nr_to_scan);
-	if (nr_to_scan == 0) {
-		spin_unlock(&mb_cache_spinlock);
+
+	spin_lock(&mb_cache_spinlock);
+	if (nr_to_scan == 0)
 		goto out;
-	}
+
 	while (nr_to_scan-- && !list_empty(&mb_cache_lru_list)) {
 		struct mb_cache_entry *ce =
 			list_entry(mb_cache_lru_list.next,
@@ -223,12 +217,18 @@ mb_cache_shrink_fn(int nr_to_scan, gfp_t gfp_mask)
 		list_move_tail(&ce->e_lru_list, &free_list);
 		__mb_cache_entry_unhash(ce);
 	}
-	spin_unlock(&mb_cache_spinlock);
 	list_for_each_safe(l, ltmp, &free_list) {
 		__mb_cache_entry_forget(list_entry(l, struct mb_cache_entry,
 						   e_lru_list), gfp_mask);
 	}
 out:
+	list_for_each_entry(cache, &mb_cache_list, c_cache_list) {
+		mb_debug("cache %s (%d)", cache->c_name,
+			  atomic_read(&cache->c_entry_count));
+		count += atomic_read(&cache->c_entry_count);
+	}
+	spin_unlock(&mb_cache_spinlock);
+
 	return (count / 100) * sysctl_vfs_cache_pressure;
 }
 
-- 
1.6.3.3


^ permalink raw reply related	[flat|nested] 55+ messages in thread

* [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan >
@ 2010-07-21 10:53 ` Wang Sheng-Hui
  0 siblings, 0 replies; 55+ messages in thread
From: Wang Sheng-Hui @ 2010-07-21 10:53 UTC (permalink / raw)
  To: sandeen, agruen, hch, linux-ext4, linux-fsdevel, linux-kernel, linux-mm

Sorry. Regenerated the patch, please check it.
I wrapped most of the code in a single pair of spinlock ops for 2 reasons:
1) taking the spinlock twice seems time consuming
2) a single pair of spinlock ops keeps "count"
   consistent for the shrink operation; 2 pairs may
   let other processes create new cache entries in between.



Signed-off-by: Wang Sheng-Hui <crosslonelyover@gmail.com>
---
 fs/mbcache.c |   24 ++++++++++++------------
 1 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/fs/mbcache.c b/fs/mbcache.c
index ec88ff3..ee57aa3 100644
--- a/fs/mbcache.c
+++ b/fs/mbcache.c
@@ -201,21 +201,15 @@ mb_cache_shrink_fn(int nr_to_scan, gfp_t gfp_mask)
 {
 	LIST_HEAD(free_list);
 	struct list_head *l, *ltmp;
+	struct mb_cache *cache;
 	int count = 0;
 
-	spin_lock(&mb_cache_spinlock);
-	list_for_each(l, &mb_cache_list) {
-		struct mb_cache *cache =
-			list_entry(l, struct mb_cache, c_cache_list);
-		mb_debug("cache %s (%d)", cache->c_name,
-			  atomic_read(&cache->c_entry_count));
-		count += atomic_read(&cache->c_entry_count);
-	}
 	mb_debug("trying to free %d entries", nr_to_scan);
-	if (nr_to_scan == 0) {
-		spin_unlock(&mb_cache_spinlock);
+
+	spin_lock(&mb_cache_spinlock);
+	if (nr_to_scan == 0)
 		goto out;
-	}
+
 	while (nr_to_scan-- && !list_empty(&mb_cache_lru_list)) {
 		struct mb_cache_entry *ce =
 			list_entry(mb_cache_lru_list.next,
@@ -223,12 +217,18 @@ mb_cache_shrink_fn(int nr_to_scan, gfp_t gfp_mask)
 		list_move_tail(&ce->e_lru_list, &free_list);
 		__mb_cache_entry_unhash(ce);
 	}
-	spin_unlock(&mb_cache_spinlock);
 	list_for_each_safe(l, ltmp, &free_list) {
 		__mb_cache_entry_forget(list_entry(l, struct mb_cache_entry,
 						   e_lru_list), gfp_mask);
 	}
 out:
+	list_for_each_entry(cache, &mb_cache_list, c_cache_list) {
+		mb_debug("cache %s (%d)", cache->c_name,
+			  atomic_read(&cache->c_entry_count));
+		count += atomic_read(&cache->c_entry_count);
+	}
+	spin_unlock(&mb_cache_spinlock);
+
 	return (count / 100) * sysctl_vfs_cache_pressure;
 }
 
-- 
1.6.3.3


^ permalink raw reply related	[flat|nested] 55+ messages in thread

* [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0
@ 2010-07-21 10:53 ` Wang Sheng-Hui
  0 siblings, 0 replies; 55+ messages in thread
From: Wang Sheng-Hui @ 2010-07-21 10:53 UTC (permalink / raw)
  To: sandeen, agruen, hch, linux-ext4, linux-fsdevel, linux-kernel,
	linux-mm, kernel-janitors

Sorry. Regenerated the patch, please check it.
I wrapped most of the code in a single pair of spinlock ops for 2 reasons:
1) taking the spinlock twice seems time consuming
2) a single pair of spinlock ops keeps "count"
   consistent for the shrink operation; 2 pairs may
   let other processes create new cache entries in between.



Signed-off-by: Wang Sheng-Hui <crosslonelyover@gmail.com>
---
 fs/mbcache.c |   24 ++++++++++++------------
 1 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/fs/mbcache.c b/fs/mbcache.c
index ec88ff3..ee57aa3 100644
--- a/fs/mbcache.c
+++ b/fs/mbcache.c
@@ -201,21 +201,15 @@ mb_cache_shrink_fn(int nr_to_scan, gfp_t gfp_mask)
 {
 	LIST_HEAD(free_list);
 	struct list_head *l, *ltmp;
+	struct mb_cache *cache;
 	int count = 0;
 
-	spin_lock(&mb_cache_spinlock);
-	list_for_each(l, &mb_cache_list) {
-		struct mb_cache *cache =
-			list_entry(l, struct mb_cache, c_cache_list);
-		mb_debug("cache %s (%d)", cache->c_name,
-			  atomic_read(&cache->c_entry_count));
-		count += atomic_read(&cache->c_entry_count);
-	}
 	mb_debug("trying to free %d entries", nr_to_scan);
-	if (nr_to_scan == 0) {
-		spin_unlock(&mb_cache_spinlock);
+
+	spin_lock(&mb_cache_spinlock);
+	if (nr_to_scan == 0)
 		goto out;
-	}
+
 	while (nr_to_scan-- && !list_empty(&mb_cache_lru_list)) {
 		struct mb_cache_entry *ce =
 			list_entry(mb_cache_lru_list.next,
@@ -223,12 +217,18 @@ mb_cache_shrink_fn(int nr_to_scan, gfp_t gfp_mask)
 		list_move_tail(&ce->e_lru_list, &free_list);
 		__mb_cache_entry_unhash(ce);
 	}
-	spin_unlock(&mb_cache_spinlock);
 	list_for_each_safe(l, ltmp, &free_list) {
 		__mb_cache_entry_forget(list_entry(l, struct mb_cache_entry,
 						   e_lru_list), gfp_mask);
 	}
 out:
+	list_for_each_entry(cache, &mb_cache_list, c_cache_list) {
+		mb_debug("cache %s (%d)", cache->c_name,
+			  atomic_read(&cache->c_entry_count));
+		count += atomic_read(&cache->c_entry_count);
+	}
+	spin_unlock(&mb_cache_spinlock);
+
 	return (count / 100) * sysctl_vfs_cache_pressure;
 }
 
-- 
1.6.3.3

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply related	[flat|nested] 55+ messages in thread

* Re: [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0
  2010-07-21 10:53 ` [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0 Wang Sheng-Hui
  (?)
@ 2010-07-21 14:00   ` Eric Sandeen
  -1 siblings, 0 replies; 55+ messages in thread
From: Eric Sandeen @ 2010-07-21 14:00 UTC (permalink / raw)
  To: Wang Sheng-Hui
  Cc: agruen, hch, linux-ext4, linux-fsdevel, linux-kernel, linux-mm,
	kernel-janitors

Wang Sheng-Hui wrote:
> Sorry. Regenerated the patch, please check it.
> I wrapped most of the code in a single pair of spinlock ops for 2 reasons:
> 1) taking the spinlock twice seems time consuming
> 2) a single pair of spinlock ops keeps "count"
>   consistent for the shrink operation; 2 pairs may
>   let other processes create new cache entries in between.
> 

Sorry, this patch appears to have whitespace cut & paste mangling.

More comments below.

> Signed-off-by: Wang Sheng-Hui <crosslonelyover@gmail.com>
> ---
> fs/mbcache.c |   24 ++++++++++++------------
> 1 files changed, 12 insertions(+), 12 deletions(-)
> 
> diff --git a/fs/mbcache.c b/fs/mbcache.c
> index ec88ff3..ee57aa3 100644
> --- a/fs/mbcache.c
> +++ b/fs/mbcache.c
> @@ -201,21 +201,15 @@ mb_cache_shrink_fn(int nr_to_scan, gfp_t gfp_mask)
> {
>     LIST_HEAD(free_list);
>     struct list_head *l, *ltmp;
> +    struct mb_cache *cache;
>     int count = 0;
> 
> -    spin_lock(&mb_cache_spinlock);
> -    list_for_each(l, &mb_cache_list) {
> -        struct mb_cache *cache =
> -            list_entry(l, struct mb_cache, c_cache_list);
> -        mb_debug("cache %s (%d)", cache->c_name,
> -              atomic_read(&cache->c_entry_count));
> -        count += atomic_read(&cache->c_entry_count);
> -    }
>     mb_debug("trying to free %d entries", nr_to_scan);
> -    if (nr_to_scan == 0) {
> -        spin_unlock(&mb_cache_spinlock);
> +
> +    spin_lock(&mb_cache_spinlock);
> +    if (nr_to_scan == 0)
>         goto out;
> -    }
> +
>     while (nr_to_scan-- && !list_empty(&mb_cache_lru_list)) {
>         struct mb_cache_entry *ce =
>             list_entry(mb_cache_lru_list.next,
> @@ -223,12 +217,18 @@ mb_cache_shrink_fn(int nr_to_scan, gfp_t gfp_mask)
>         list_move_tail(&ce->e_lru_list, &free_list);
>         __mb_cache_entry_unhash(ce);
>     }
> -    spin_unlock(&mb_cache_spinlock);

you can't do this because

>     list_for_each_safe(l, ltmp, &free_list) {
>         __mb_cache_entry_forget(list_entry(l, struct mb_cache_entry,

this takes the spinlock too and you'll deadlock.
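
(Sketching the problem -- mb_cache_spinlock is an ordinary,
non-recursive spinlock, so re-taking it on the same CPU never returns:)

	spin_lock(&mb_cache_spinlock);
	...
	/* __mb_cache_entry_forget() may do spin_lock(&mb_cache_spinlock)
	 * again on its free-failed path; that second spin_lock() spins
	 * forever while we still hold the lock -> deadlock */
	__mb_cache_entry_forget(ce, gfp_mask);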

Did you test this patch?

-Eric

>                            e_lru_list), gfp_mask);
>     }
> out:
> +    list_for_each_entry(cache, &mb_cache_list, c_cache_list) {
> +        mb_debug("cache %s (%d)", cache->c_name,
> +              atomic_read(&cache->c_entry_count));
> +        count += atomic_read(&cache->c_entry_count);
> +    }
> +    spin_unlock(&mb_cache_spinlock);
> +
>     return (count / 100) * sysctl_vfs_cache_pressure;
> }
> 


^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0
@ 2010-07-21 14:00   ` Eric Sandeen
  0 siblings, 0 replies; 55+ messages in thread
From: Eric Sandeen @ 2010-07-21 14:00 UTC (permalink / raw)
  To: Wang Sheng-Hui
  Cc: agruen, hch, linux-ext4, linux-fsdevel, linux-kernel, linux-mm,
	kernel-janitors

Wang Sheng-Hui wrote:
> Sorry. Regenerated the patch, please check it.
> I wrapped most of the code in a single pair of spinlock ops for 2 reasons:
> 1) taking the spinlock twice seems time consuming
> 2) a single pair of spinlock ops keeps "count"
>   consistent for the shrink operation; 2 pairs may
>   let other processes create new cache entries in between.
> 

Sorry, this patch appears to have whitespace cut & paste mangling.

More comments below.

> Signed-off-by: Wang Sheng-Hui <crosslonelyover@gmail.com>
> ---
> fs/mbcache.c |   24 ++++++++++++------------
> 1 files changed, 12 insertions(+), 12 deletions(-)
> 
> diff --git a/fs/mbcache.c b/fs/mbcache.c
> index ec88ff3..ee57aa3 100644
> --- a/fs/mbcache.c
> +++ b/fs/mbcache.c
> @@ -201,21 +201,15 @@ mb_cache_shrink_fn(int nr_to_scan, gfp_t gfp_mask)
> {
>     LIST_HEAD(free_list);
>     struct list_head *l, *ltmp;
> +    struct mb_cache *cache;
>     int count = 0;
> 
> -    spin_lock(&mb_cache_spinlock);
> -    list_for_each(l, &mb_cache_list) {
> -        struct mb_cache *cache =
> -            list_entry(l, struct mb_cache, c_cache_list);
> -        mb_debug("cache %s (%d)", cache->c_name,
> -              atomic_read(&cache->c_entry_count));
> -        count += atomic_read(&cache->c_entry_count);
> -    }
>     mb_debug("trying to free %d entries", nr_to_scan);
> -    if (nr_to_scan == 0) {
> -        spin_unlock(&mb_cache_spinlock);
> +
> +    spin_lock(&mb_cache_spinlock);
> +    if (nr_to_scan == 0)
>         goto out;
> -    }
> +
>     while (nr_to_scan-- && !list_empty(&mb_cache_lru_list)) {
>         struct mb_cache_entry *ce =
>             list_entry(mb_cache_lru_list.next,
> @@ -223,12 +217,18 @@ mb_cache_shrink_fn(int nr_to_scan, gfp_t gfp_mask)
>         list_move_tail(&ce->e_lru_list, &free_list);
>         __mb_cache_entry_unhash(ce);
>     }
> -    spin_unlock(&mb_cache_spinlock);

you can't do this because

>     list_for_each_safe(l, ltmp, &free_list) {
>         __mb_cache_entry_forget(list_entry(l, struct mb_cache_entry,

this takes the spinlock too and you'll deadlock.

Did you test this patch?

-Eric

>                            e_lru_list), gfp_mask);
>     }
> out:
> +    list_for_each_entry(cache, &mb_cache_list, c_cache_list) {
> +        mb_debug("cache %s (%d)", cache->c_name,
> +              atomic_read(&cache->c_entry_count));
> +        count += atomic_read(&cache->c_entry_count);
> +    }
> +    spin_unlock(&mb_cache_spinlock);
> +
>     return (count / 100) * sysctl_vfs_cache_pressure;
> }
> 

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan
@ 2010-07-21 14:00   ` Eric Sandeen
  0 siblings, 0 replies; 55+ messages in thread
From: Eric Sandeen @ 2010-07-21 14:00 UTC (permalink / raw)
  To: Wang Sheng-Hui
  Cc: agruen, hch, linux-ext4, linux-fsdevel, linux-kernel, linux-mm,
	kernel-janitors

Wang Sheng-Hui wrote:
> Sorry. Regenerated the patch, please check it.
> I wrapped most of the code in a single pair of spinlock ops for 2 reasons:
> 1) taking the spinlock twice seems time consuming
> 2) a single pair of spinlock ops keeps "count"
>   consistent for the shrink operation; 2 pairs may
>   let other processes create new cache entries in between.
> 

Sorry, this patch appears to have whitespace cut & paste mangling.

More comments below.

> Signed-off-by: Wang Sheng-Hui <crosslonelyover@gmail.com>
> ---
> fs/mbcache.c |   24 ++++++++++++------------
> 1 files changed, 12 insertions(+), 12 deletions(-)
> 
> diff --git a/fs/mbcache.c b/fs/mbcache.c
> index ec88ff3..ee57aa3 100644
> --- a/fs/mbcache.c
> +++ b/fs/mbcache.c
> @@ -201,21 +201,15 @@ mb_cache_shrink_fn(int nr_to_scan, gfp_t gfp_mask)
> {
>     LIST_HEAD(free_list);
>     struct list_head *l, *ltmp;
> +    struct mb_cache *cache;
>     int count = 0;
> 
> -    spin_lock(&mb_cache_spinlock);
> -    list_for_each(l, &mb_cache_list) {
> -        struct mb_cache *cache =
> -            list_entry(l, struct mb_cache, c_cache_list);
> -        mb_debug("cache %s (%d)", cache->c_name,
> -              atomic_read(&cache->c_entry_count));
> -        count += atomic_read(&cache->c_entry_count);
> -    }
>     mb_debug("trying to free %d entries", nr_to_scan);
> -    if (nr_to_scan == 0) {
> -        spin_unlock(&mb_cache_spinlock);
> +
> +    spin_lock(&mb_cache_spinlock);
> +    if (nr_to_scan == 0)
>         goto out;
> -    }
> +
>     while (nr_to_scan-- && !list_empty(&mb_cache_lru_list)) {
>         struct mb_cache_entry *ce =
>             list_entry(mb_cache_lru_list.next,
> @@ -223,12 +217,18 @@ mb_cache_shrink_fn(int nr_to_scan, gfp_t gfp_mask)
>         list_move_tail(&ce->e_lru_list, &free_list);
>         __mb_cache_entry_unhash(ce);
>     }
> -    spin_unlock(&mb_cache_spinlock);

you can't do this because

>     list_for_each_safe(l, ltmp, &free_list) {
>         __mb_cache_entry_forget(list_entry(l, struct mb_cache_entry,

this takes the spinlock too and you'll deadlock.

Did you test this patch?

-Eric

>                            e_lru_list), gfp_mask);
>     }
> out:
> +    list_for_each_entry(cache, &mb_cache_list, c_cache_list) {
> +        mb_debug("cache %s (%d)", cache->c_name,
> +              atomic_read(&cache->c_entry_count));
> +        count += atomic_read(&cache->c_entry_count);
> +    }
> +    spin_unlock(&mb_cache_spinlock);
> +
>     return (count / 100) * sysctl_vfs_cache_pressure;
> }
> 


^ permalink raw reply	[flat|nested] 55+ messages in thread

* [PATCH 2/2] mbcache: fix shrinker function return value
  2010-07-21 14:00   ` [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0 Eric Sandeen
  (?)
@ 2010-07-21 17:44     ` Andreas Gruenbacher
  -1 siblings, 0 replies; 55+ messages in thread
From: Andreas Gruenbacher @ 2010-07-21 17:44 UTC (permalink / raw)
  To: Alexander Viro
  Cc: Eric Sandeen, hch, linux-ext4, linux-fsdevel, linux-kernel,
	linux-mm, kernel-janitors, Wang Sheng-Hui

The shrinker function is supposed to return the number of cache
entries after shrinking, not before shrinking.  Fix that.
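
A minimal sketch of the intended contract (illustrative only;
shrink_lru() and remaining_entries() are hypothetical stand-ins for the
real code in the patch below):

	/* free up to nr_to_scan entries first ... */
	shrink_lru(nr_to_scan);
	/* ... then report what is left, scaled by the cache pressure */
	return (remaining_entries() / 100) * sysctl_vfs_cache_pressure;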

Based on a patch from Wang Sheng-Hui <crosslonelyover@gmail.com>.

Signed-off-by: Andreas Gruenbacher <agruen@suse.de>
---
 fs/mbcache.c |   27 ++++++++++-----------------
 1 files changed, 10 insertions(+), 17 deletions(-)

diff --git a/fs/mbcache.c b/fs/mbcache.c
index 8a2cbd8..cf4e6cd 100644
--- a/fs/mbcache.c
+++ b/fs/mbcache.c
@@ -176,22 +176,12 @@ static int
 mb_cache_shrink_fn(struct shrinker *shrink, int nr_to_scan, gfp_t gfp_mask)
 {
 	LIST_HEAD(free_list);
-	struct list_head *l, *ltmp;
+	struct mb_cache *cache;
+	struct mb_cache_entry *entry, *tmp;
 	int count = 0;
 
-	spin_lock(&mb_cache_spinlock);
-	list_for_each(l, &mb_cache_list) {
-		struct mb_cache *cache =
-			list_entry(l, struct mb_cache, c_cache_list);
-		mb_debug("cache %s (%d)", cache->c_name,
-			  atomic_read(&cache->c_entry_count));
-		count += atomic_read(&cache->c_entry_count);
-	}
 	mb_debug("trying to free %d entries", nr_to_scan);
-	if (nr_to_scan == 0) {
-		spin_unlock(&mb_cache_spinlock);
-		goto out;
-	}
+	spin_lock(&mb_cache_spinlock);
 	while (nr_to_scan-- && !list_empty(&mb_cache_lru_list)) {
 		struct mb_cache_entry *ce =
 			list_entry(mb_cache_lru_list.next,
@@ -199,12 +189,15 @@ mb_cache_shrink_fn(struct shrinker *shrink, int nr_to_scan, gfp_t gfp_mask)
 		list_move_tail(&ce->e_lru_list, &free_list);
 		__mb_cache_entry_unhash(ce);
 	}
+	list_for_each_entry(cache, &mb_cache_list, c_cache_list) {
+		mb_debug("cache %s (%d)", cache->c_name,
+			  atomic_read(&cache->c_entry_count));
+		count += atomic_read(&cache->c_entry_count);
+	}
 	spin_unlock(&mb_cache_spinlock);
-	list_for_each_safe(l, ltmp, &free_list) {
-		__mb_cache_entry_forget(list_entry(l, struct mb_cache_entry,
-						   e_lru_list), gfp_mask);
+	list_for_each_entry_safe(entry, tmp, &free_list, e_lru_list) {
+		__mb_cache_entry_forget(entry, gfp_mask);
 	}
-out:
 	return (count / 100) * sysctl_vfs_cache_pressure;
 }
 
-- 
1.7.2.rc3.57.g77b5b


^ permalink raw reply related	[flat|nested] 55+ messages in thread


* [PATCH 0/2] mbcache fixes
  2010-07-21 14:00   ` [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0 Eric Sandeen
  (?)
@ 2010-07-21 17:57     ` Andreas Gruenbacher
  -1 siblings, 0 replies; 55+ messages in thread
From: Andreas Gruenbacher @ 2010-07-21 17:57 UTC (permalink / raw)
  To: Alexander Viro
  Cc: Eric Sandeen, hch, linux-ext4, linux-fsdevel, linux-kernel,
	linux-mm, kernel-janitors, Wang Sheng-Hui

Al,

here is an mbcache cleanup and then a fixed version of Shenghui's minor
shrinker function fix.  The patches have survived functional testing
here.

This seems slightly too much for kernel-janitors, so could you please
take the patches?

Thanks,
Andreas

Andreas Gruenbacher (2):
  mbcache: Remove unused features
  mbcache: fix shrinker function return value

 fs/ext2/xattr.c         |   12 ++--
 fs/ext3/xattr.c         |   12 ++--
 fs/ext4/xattr.c         |   12 ++--
 fs/mbcache.c            |  168 ++++++++++++++---------------------------------
 include/linux/mbcache.h |   20 ++----
 5 files changed, 70 insertions(+), 154 deletions(-)

-- 
1.7.2.rc3.57.g77b5b


^ permalink raw reply	[flat|nested] 55+ messages in thread


* Re: [PATCH 1/2] mbcache: Remove unused features
  2010-07-19 16:19     ` Andreas Gruenbacher
  (?)
  (?)
@ 2010-07-21 23:18     ` Andreas Dilger
  2010-07-22  0:07       ` Andreas Gruenbacher
  -1 siblings, 1 reply; 55+ messages in thread
From: Andreas Dilger @ 2010-07-21 23:18 UTC (permalink / raw)
  To: Andreas Gruenbacher; +Cc: linux-ext4, linux-fsdevel

On 2010-07-19, at 10:19, Andreas Gruenbacher wrote:
> The mbcache code was written to support a variable number of indexes,
> but all the existing users use exactly one index.  Simplify to code to
> support only that case.
> 
> There are also no users of the cache entry free operation, and none of
> the users keep extra data in cache entries.  Remove those features as
> well.

Is it possible to allow mbcache to be disabled, either for the whole kernel, on a per-filesystem basis, or adaptively if the cache hit rate is very low (any of these is fine, not all of them).

The reason I ask is because under some uses mbcache is adding significant overhead for very little/no benefit.  If the xattr blocks are not shared, then every xattr is stored in a separate entry, and there is a single spinlock protecting the whole mbcache for all filesystems.

On systems with large memory for the buffer cache (6M+ buffer heads, 5M inodes in memory) there are very long hash chains to be searched, and this slows down filesystem performance dramatically.  We became aware of this problem because of NMIs triggering due to long spinlock hold times in mb_cache_entry_insert() on a server with 32GB of RAM.

To reproduce the problem, a simple test can be done with a bunch of kernel source trees (not hard-linked trees though, must be unpacked separately):

$ for i in linux-* ; do
    time du ${i}
  done

gives :
 8s for the first one
12s for the 10th
27s for the 25th
48s for the 50th
95s for the 100th
=> this is strictly linear

opreport -l vmlinux show :
 68.12% in mb_cache_entry_insert
 21.71% in mb_cache_entry_release
  4.27% in mb_cache_entry_free
  1.49% in mb_cache_entry_find_first
  0.82% in __mb_cache_entry_find

(see https://bugzilla.lustre.org/show_bug.cgi?id=22771 for full details)

I don't think fixing the mbcache to be more efficient (more buckets, more locks, etc) is really solving the problem which is that mbcache is adding overhead without value in these situations.

Attached is a patch that allows manually disabling mbcache on a per-filesystem basis with a mount option.  Better would be to automatically disable it if e.g. some hundreds or thousands of objects were inserted into the cache and there was < 1% cache hit rate.  That would help everyone, even those people who don't know they have a problem.
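A rough sketch of what that heuristic could look like (the threshold,
structure and function names are all invented here, this is not the
attached patch):

	#include <stdbool.h>
	#include <stdio.h>

	struct mbcache_stats {
		unsigned long	inserts;
		unsigned long	hits;
		bool		disabled;
	};

	/* Once a few thousand entries have been inserted, give up if
	 * fewer than 1% of them ever produced a cache hit. */
	static bool mbcache_worth_using(struct mbcache_stats *st)
	{
		if (st->disabled)
			return false;
		if (st->inserts > 4096 && st->hits * 100 < st->inserts) {
			st->disabled = true;
			return false;
		}
		return true;
	}

	int main(void)
	{
		struct mbcache_stats st = { .inserts = 10000, .hits = 5 };
		printf("cache %s\n",
		       mbcache_worth_using(&st) ? "stays enabled" : "gets disabled");
		return 0;
	}

The insert path would check mbcache_worth_using() before allocating a
new entry and simply skip the cache once it returns false.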

Cheers, Andreas






^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [PATCH 0/2] mbcache fixes
  2010-07-21 17:57     ` Andreas Gruenbacher
  (?)
@ 2010-07-21 23:22       ` Al Viro
  -1 siblings, 0 replies; 55+ messages in thread
From: Al Viro @ 2010-07-21 23:22 UTC (permalink / raw)
  To: Andreas Gruenbacher
  Cc: Eric Sandeen, hch, linux-ext4, linux-fsdevel, linux-kernel,
	linux-mm, kernel-janitors, Wang Sheng-Hui

On Wed, Jul 21, 2010 at 07:57:20PM +0200, Andreas Gruenbacher wrote:
> Al,
> 
> here is an mbcache cleanup and then a fixed version of Shenghui's minor
> shrinker function fix.  The patches have survived functional testing
> here.
> 
> This seems slightly too much for kernel-janitors, so could you please
> take the patches?

Done.

^ permalink raw reply	[flat|nested] 55+ messages in thread


* Re: [PATCH 1/2] mbcache: Remove unused features
  2010-07-21 23:18     ` Andreas Dilger
@ 2010-07-22  0:07       ` Andreas Gruenbacher
  0 siblings, 0 replies; 55+ messages in thread
From: Andreas Gruenbacher @ 2010-07-22  0:07 UTC (permalink / raw)
  To: Andreas Dilger; +Cc: linux-ext4, linux-fsdevel

From https://bugzilla.lustre.org/show_bug.cgi?id=22771#c24 :
> On our production system, the hash table contain 64 entries (6 bits) for a
> cache of 2307267 entries.
> A count in each list give a good load balance : number of entries vary
> between 35782 to 36496 while the optimal repartition is 2307267 / 64 =
> 36051.

Hehe, I like that sense of humor :)

On Thursday 22 July 2010 01:18:39 Andreas Dilger wrote:
> Is it possible to allow mbcache to be disabled, either for the whole
> kernel, on a per-filesystem basis, or adaptively if the cache hit rate is
> very low (any of these is fine, not all of them).

We could do that, but making the cache not degrade so badly would be a good 
idea in any case.  The number of buckets is currently fixed for ext[234] so it 
would make sense to either make that number dynamic or limit the maximum 
number of cache entries.  The latter will probably be good enough for most 
workloads.

> Attached is a patch that allows manually disabling mbcache on a
> per-filesystem basis with a mount option.

> I don't think fixing the mbcache to be more efficient (more buckets, more
> locks, etc) is really solving the problem which is that mbcache is adding
> overhead without value in these situations.

A mount option would be very ugly, but a kernel internal NO_MBCACHE flag 
sounds more acceptable to me.

> Better would be to
> automatically disable it if e.g. some hundreds or thousands of objects
> were inserted into the cache and there was < 1% cache hit rate.

This assumes that the workload won't change.

> That would help everyone, even those people who don't know they have a
> problem.

People who don't know they have a problem would also be helped by making the 
cache not degrade so badly, right?

Even better would be to use a more appropriate inode size, but you've pointed 
that out in the bug already.

Thanks,
Andreas

^ permalink raw reply	[flat|nested] 55+ messages in thread

* re: [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0
@ 2010-07-22  0:54 ` Wang Sheng-Hui
  0 siblings, 0 replies; 55+ messages in thread
From: Wang Sheng-Hui @ 2010-07-22  0:54 UTC (permalink / raw)
  To: Eric Sandeen, agruen, hch, linux-ext4, linux-kernel,
	linux-fsdevel, linux-mm, kernel-janitors

Sorry, missed that. Regenerated the patch and it passed the checkpatch.pl check.
Please check it.


Signed-off-by: Wang Sheng-Hui <crosslonelyover@gmail.com>
---
 fs/mbcache.c |   23 ++++++++++++-----------
 1 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/fs/mbcache.c b/fs/mbcache.c
index ec88ff3..603170e 100644
--- a/fs/mbcache.c
+++ b/fs/mbcache.c
@@ -201,21 +201,14 @@ mb_cache_shrink_fn(int nr_to_scan, gfp_t gfp_mask)
 {
 	LIST_HEAD(free_list);
 	struct list_head *l, *ltmp;
+	struct mb_cache *cache;
 	int count = 0;
 
-	spin_lock(&mb_cache_spinlock);
-	list_for_each(l, &mb_cache_list) {
-		struct mb_cache *cache =
-			list_entry(l, struct mb_cache, c_cache_list);
-		mb_debug("cache %s (%d)", cache->c_name,
-			  atomic_read(&cache->c_entry_count));
-		count += atomic_read(&cache->c_entry_count);
-	}
 	mb_debug("trying to free %d entries", nr_to_scan);
-	if (nr_to_scan == 0) {
-		spin_unlock(&mb_cache_spinlock);
+	if (nr_to_scan == 0)
 		goto out;
-	}
+
+	spin_lock(&mb_cache_spinlock);
 	while (nr_to_scan-- && !list_empty(&mb_cache_lru_list)) {
 		struct mb_cache_entry *ce =
 			list_entry(mb_cache_lru_list.next,
@@ -229,6 +222,14 @@ mb_cache_shrink_fn(int nr_to_scan, gfp_t gfp_mask)
 						   e_lru_list), gfp_mask);
 	}
 out:
+	spin_lock(&mb_cache_spinlock);
+	list_for_each_entry(cache, &mb_cache_list, c_cache_list) {
+		mb_debug("cache %s (%d)", cache->c_name,
+			  atomic_read(&cache->c_entry_count));
+		count += atomic_read(&cache->c_entry_count);
+	}
+	spin_unlock(&mb_cache_spinlock);
+
 	return (count / 100) * sysctl_vfs_cache_pressure;
 }
 
-- 
1.6.3.3

^ permalink raw reply related	[flat|nested] 55+ messages in thread


* Re: [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan >  0
  2010-07-22  0:54 ` [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0 Wang Sheng-Hui
  (?)
  (?)
@ 2010-07-22  1:06   ` shenghui
  -1 siblings, 0 replies; 55+ messages in thread
From: shenghui @ 2010-07-22  1:06 UTC (permalink / raw)
  To: Eric Sandeen, agruen, hch, linux-ext4, linux-kernel,
	linux-fsdevel, linux-mm, kernel-janitors

Sorry to trouble you all & Thanks for your instructions!
I noticed that Andreas Gruenbacher has submitted patches
on mbcache.
Please ignore mine.


-- 


Thanks and Best Regards,
shenghui

^ permalink raw reply	[flat|nested] 55+ messages in thread



end of thread, other threads:[~2010-07-22  1:06 UTC | newest]

Thread overview: 55+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2010-07-22  0:54 [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0 Wang Sheng-Hui
2010-07-22  0:54 ` Wang Sheng-Hui
2010-07-22  0:54 ` [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan Wang Sheng-Hui
2010-07-22  0:54 ` [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0 Wang Sheng-Hui
2010-07-22  1:06 ` shenghui
2010-07-22  1:06   ` shenghui
2010-07-22  1:06   ` [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > shenghui
2010-07-22  1:06   ` [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0 shenghui
  -- strict thread matches above, loose matches on Subject: below --
2010-07-22  0:54 Wang Sheng-Hui
2010-07-21 10:53 Wang Sheng-Hui
2010-07-21 10:53 ` Wang Sheng-Hui
2010-07-21 10:53 ` [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > Wang Sheng-Hui
2010-07-21 10:53 ` [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0 Wang Sheng-Hui
2010-07-21 14:00 ` Eric Sandeen
2010-07-21 14:00   ` [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan Eric Sandeen
2010-07-21 14:00   ` [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0 Eric Sandeen
2010-07-19 16:19   ` [PATCH 1/2] mbcache: Remove unused features Andreas Gruenbacher
2010-07-19 16:19     ` Andreas Gruenbacher
2010-07-19 16:19     ` Andreas Gruenbacher
2010-07-21 23:18     ` Andreas Dilger
2010-07-22  0:07       ` Andreas Gruenbacher
2010-07-21 17:44   ` [PATCH 2/2] mbcache: fix shrinker function return value Andreas Gruenbacher
2010-07-21 17:44     ` Andreas Gruenbacher
2010-07-21 17:44     ` Andreas Gruenbacher
2010-07-21 17:57   ` [PATCH 0/2] mbcache fixes Andreas Gruenbacher
2010-07-21 17:57     ` Andreas Gruenbacher
2010-07-21 17:57     ` Andreas Gruenbacher
2010-07-21 23:22     ` Al Viro
2010-07-21 23:22       ` Al Viro
2010-07-21 23:22       ` Al Viro
2010-07-18  1:01 [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0 Wang Sheng-Hui
2010-07-18  1:01 ` Wang Sheng-Hui
2010-07-18  1:01 ` [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > Wang Sheng-Hui
2010-07-18  4:06 ` [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0 Eric Sandeen
2010-07-18  4:06   ` Eric Sandeen
2010-07-18  4:06   ` [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan Eric Sandeen
2010-07-18  6:01   ` [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0 Christoph Hellwig
2010-07-18  6:01     ` [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan Christoph Hellwig
2010-07-18  6:36     ` [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0 Wang Sheng-Hui
2010-07-18  6:36       ` Wang Sheng-Hui
2010-07-18  6:36       ` [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan Wang Sheng-Hui
2010-07-19 18:39       ` [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0 Andreas Gruenbacher
2010-07-19 18:39         ` Andreas Gruenbacher
2010-07-20  1:02         ` shenghui
2010-07-20  1:02           ` [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > shenghui
2010-07-20  1:04           ` [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0 shenghui
2010-07-20  1:04             ` shenghui
2010-07-20  1:04             ` [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > shenghui
2010-07-20 15:13         ` [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0 Eric Sandeen
2010-07-20 15:13           ` [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan Eric Sandeen
2010-07-20 16:34           ` [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0 Andreas Gruenbacher
2010-07-20 16:34             ` Andreas Gruenbacher
2010-07-20 16:34             ` Andreas Gruenbacher
2010-07-19 18:40       ` Andreas Gruenbacher
2010-07-19 18:40         ` Andreas Gruenbacher
